Planet RDF


Semantics for Privacy and Shared Context

Mon, 12/15/2014 - 17:01

Categories:

RDF

Roberto Yus, Primal Pappachan, Prajit Das, Tim Finin, Anupam Joshi, and Eduardo Mena, Semantics for Privacy and Shared Context, Workshop on Society, Privacy and the Semantic Web-Policy and Technology, held at Int. Semantic Web Conf., Oct. 2014.

Capturing, maintaining, and using context information helps mobile applications provide better services and generates data useful in specifying information sharing policies. Obtaining the full benefit of context information requires a rich and expressive representation that is grounded in shared semantic models. We summarize some of our past work on representing and using context models and briefly describe Triveni, a system for cross-device context discovery and enrichment. Triveni represents context in RDF and OWL and reasons over context models to infer additional information and detect and resolve ambiguities and inconsistencies. A unique feature, its ability to create and manage “contextual groups” of users in an environment, enables their members to share context information using wireless ad-hoc networks. Thus, it enriches the information about a user’s context by creating mobile ad hoc knowledge networks.

Hadoop

Sat, 12/13/2014 - 14:13

Categories:

RDF
What it is and how people use it: my own summary.

Schema.org v1.92: Music, Video Games, Sports, Itemlist, breadcrumbs and more!

Thu, 12/11/2014 - 15:28

Categories:

RDF
We are happy to announce version v1.92 of schema.org. With this update we "soft launch" a substantial collection of improvements that will form the basis for a schema.org version 2.0 release in early 2015. There remain a number of site-wide improvements, bugfixes and clarifications that we'd like to make before we feel ready to use the name "v2.0"; however, the core vocabulary improvements are stable and available for use from today. As usual, see the release notes page for details.

Please get in touch via the W3C Web Schemas group or our GitHub issue tracker if you'd like to share feedback with us and the wider schema.org community. We won't go into the details of each update in today's blog post, but there are a lot of additions and fixes, and more coming in 2015. Many thanks to all those who contributed to this release!

DCMI/ASIST Webinar: The Libhub Initiative: Increasing the Web Visibility of Libraries

Wed, 12/10/2014 - 23:59

Categories:

RDF
2014-12-10, In this webinar, Eric Miller, President of Zepheira, will talk about the transition libraries must make to achieve Web visibility, explain recent trends that support these efforts, and introduce the Libhub Initiative -- an active exploration of what can happen when libraries begin to speak the language of the Web. As a founding sponsor, Zepheira's introduction of the Libhub Initiative creates an industry-wide focus on the collective visibility of libraries and their resources on the Web. Libraries and memory organizations have rich content and resources that the Web cannot see or use. The Libhub Initiative aims to find common ground for libraries, providers, and partners to publish and use data with non-proprietary Web standards. Libraries can then communicate in a way Web applications understand and Web users can see, through the use of enabling technology like Linked Data and shared vocabularies such as schema.org and BIBFRAME. The Libhub Initiative uniquely prioritizes the linking of these newly exposed library resources to each other and to other resources across the Web, a critical requirement of increased Web visibility. Additional information about the webinar and registration can be found at http://dublincore.org/resources/training/#2014miller.
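To make "speaking the language of the Web" concrete: once catalogue data is exposed as Linked Data, a single holding can be described with shared vocabularies that Web applications and search engines already understand. A minimal, purely illustrative Turtle sketch (the URI and values are invented for this example; BIBFRAME terms could be used alongside or instead of schema.org ones):

@prefix schema: <http://schema.org/> .

<http://library.example.org/book/moby-dick>
    a schema:Book ;
    schema:name "Moby Dick" ;
    schema:author "Herman Melville" .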

Huffduffer / Radiodan Digression – NFC control

Wed, 12/10/2014 - 17:37

Categories:

RDF

I’d like to be able to change the URL of the RSS feed using NFC (~RFID). This is a tiny bit of over-engineering, but could also be very cool. I have a couple of NFC boards I’ve been planning on playing with for a while.

One’s an

Open Semantic Framework 3.1 Released

Tue, 12/09/2014 - 17:08

Categories:

RDF
Structured Dynamics is happy to announce the immediate availability of the Open Semantic Framework version 3.1. This new version includes a set of fixes made to different components of the framework over the last few months. The biggest change is the deployment of OSF using Virtuoso Open Source version 7.1.0.

We also created a new API for Clojure developers called clj-osf. Finally, we created a new Open Semantic Framework web portal that better describes the project and is hopefully easier to use and more modern.

Quick Introduction to the Open Semantic Framework

What is the Open Semantic Framework?

The Open Semantic Framework (OSF) is an integrated software stack using semantic technologies for knowledge management. It has a layered architecture that combines existing open source software with additional open source components. OSF is designed as an integrated content platform accessible via the Web, which provides needed knowledge management capabilities to enterprises. OSF is made available under the Apache 2 license.

OSF can integrate and manage all types of content – unstructured documents, semi-structured files, spreadsheets, and structured databases – using a variety of best-of-breed data indexing and management engines. All external content is converted to the canonical RDF data model, enabling common tools and methods for tagging and managing all content. Ontologies provide the schema and common vocabularies for integrating across diverse datasets. These capabilities can be layered over existing information assets for unprecedented levels of integration and connectivity. All information within OSF may be powerfully searched and faceted, with results datasets available for export in a variety of formats and as linked data.
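For instance, a single row from a staff spreadsheet ("Bob, bob@example.org") might be normalized into the canonical RDF model as triples like the following (the ex: URIs are invented for illustration):

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/staff/> .

ex:bob a foaf:Person ;
    foaf:name "Bob" ;
    foaf:mbox <mailto:bob@example.org> .

Once in this form, the record can be tagged, searched and faceted with the same tools as any other content managed by OSF.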

A new Open Semantic Framework website

The OSF 3.1 release also triggered the creation of a new website for the project. We wanted something leaner and more modern, and I think that is what we delivered. We also reworked the content: we wrote about a series of use cases 1 2 3 4 5 6 and we better aggregated and presented the information for each web service endpoint.

A new OSF sandbox

We also created an OSF sandbox where people can test each web service endpoint and see how each functionality works. All of the web services are open to users. The sandbox is not meant to be stable, considering that everybody has access to all endpoints; however, the sandbox server will be recreated on a periodic basis. If the sandbox is totally broken and users experience issues, they can always request a re-creation of the server directly on the OSF mailing list.

Each of the web service pages on the new OSF portal has a Sandbox section where you see some code examples of how to use the endpoint and how to send requests to the sandbox. Here are the instructions to use the sandbox server.

A new OSF API for Clojure: clj-osf

The OSF release 3.1 also includes a new API for Clojure developers: clj-osf.

clj-osf is a Domain Specific Language (DSL) that should lower the barrier to using the Open Semantic Framework.

To use the DSL, you only have to configure your application to use a specific OSF endpoint. Here is an example of how to do this for the Sandbox server:

;; Define the OSF Sandbox credentials (or your own):
(require '[clj-osf.core :as osf])

(osf/defosf osf-test-endpoint {:protocol :http
                               :domain "sandbox.opensemanticframework.org"
                               :api-key "EDC33DA4D977CFDF7B90545565E07324"
                               :app-id "administer"})

(osf/defuser osf-test-user {:uri "http://sandbox.opensemanticframework.org/wsf/users/admin"})

Then you can send simple OSF web service queries. Here is an example that sends a search query to return records of type foaf:Person that also match the keyword “bob”:

(require '[clj-osf.search :as search])

(search/search
 (search/query "bob")
 (search/type-filters ["http://xmlns.com/foaf/0.1/Person"]))

A complete set of clj-osf examples is available on the OSF wiki.

Finally, the complete clj-osf DSL documentation is available here.
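To give a flavour of how other OSF web services surface in the DSL, here is a hypothetical sketch of reading a record back. The clj-osf.crud namespace and its functions are assumptions modelled on the OSF "CRUD: Read" web service, not a confirmed API, so check the DSL documentation for the actual calls:

;; Hypothetical sketch: the namespace and function names below are
;; assumptions modelled on the OSF "CRUD: Read" web service, not a
;; confirmed clj-osf API.
(require '[clj-osf.crud :as crud])

(crud/read
 (crud/uri "http://sandbox.opensemanticframework.org/datasets/test/bob")
 (crud/dataset "http://sandbox.opensemanticframework.org/datasets/test/"))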

A community effort

This new release of the OSF Installer is another effort of the growing Open Semantic Framework community. The upgrade of the installer to deploy the OSF stack using Virtuoso Open Source version 7.1.0 has been created by William (Bill) Anderson.

Deploying a new OSF 3.1 Server Using the OSF Installer

OSF 3.1 can easily be deployed on an Ubuntu 14.04 LTS server using the osf-installer application, by executing the following commands in your terminal:

mkdir -p /usr/share/osf-installer/

cd /usr/share/osf-installer/

wget https://raw.github.com/structureddynamics/Open-Semantic-Framework-Installer/3.1/install.sh

chmod 755 install.sh

./install.sh

./osf-installer --install-osf -v

Using an Amazon AMI

If you are an Amazon AWS user, you also have access to a free AMI that you can use to create your own OSF instance. The full documentation for using the OSF AMI is available here.

Upgrading Existing Installations

Existing OSF installations can be upgraded using the OSF Installer. However, note that the upgrade won't deploy Virtuoso Open Source 7.1.0 for you. All the code will be upgraded, but Virtuoso will remain at the version you were last using on your instance. All the OSF 3.1 code is compatible with previous versions of Virtuoso, but you won't benefit from the latest improvements to Virtuoso (in terms of performance) and its latest SPARQL 1.1 implementation. If you want to upgrade Virtuoso to version 7.1.0 on an existing OSF instance, you will have to do it by hand.

To upgrade the OSF codebase, the first step is to upgrade the installer itself:

# Upgrade the OSF Installer
/usr/share/osf-installer/upgrade.sh

Then you can upgrade the components using the following commands:

# Upgrade the OSF Web Services
/usr/share/osf-installer/osf --upgrade-osf-web-services="3.1.0"

# Upgrade the OSF WS PHP API
/usr/share/osf-installer/osf --upgrade-osf-ws-php-api="3.1.0"

# Upgrade the OSF Tests Suites
/usr/share/osf-installer/osf --upgrade-osf-tests-suites="3.1.0"

# Upgrade the Datasets Management Tool
/usr/share/osf-installer/osf --upgrade-osf-datasets-management-tool="3.1.0"

# Upgrade the Data Validator Tool
/usr/share/osf-installer/osf --upgrade-osf-data-validator-tool="3.1.0"
Use cases referenced above:
  1. Enterprise Information Integration
  2. Enterprise Semantic Search
  3. Semantic Publishing
  4. Open Government
  5. Distributed Collaboration
  6. Indicators and Sustainability

Huffduffer Radiodan part 3 – using the switch with Radiodan code

Mon, 12/08/2014 - 17:08

Categories:

RDF

Yesterday I got a microswitch working – today I want to make it turn on the radio. Given that I can only start my podcasts in client-side mode at the moment, I’ll have to bodge it a bit so I can actually see it working (Andrew is going to help me understand the server side API better tomorrow).

So my plan is to

1. Switch back to the radiodan “magic button” radio
2. Make my switch replace the default on button
3. Make my switch turn the radio on when it’s open

and if I have time

4. Attach a simple rotary encoder for volume
5. Attach an RGB LED for the status lights

For the magic button radio, we have had some simple PCBs made to ease the soldering of the two RGB LED / rotary encoder / button combinations, but the board doesn't fit in my new box, and anyway, I want to see how difficult it is without the PCB.

It’s 3pm now and I have about an hour free.

Switch back to the radiodan “magic button” radio

This is easy. I've sshed in and run this:


pi@radiodan-libby ~ $ sudo radiodan-device-type

Current device type: radiodan-type-huffduffer
Device type required. Valid types radiodan-type-example, radiodan-type-huffduffer, radiodan-type-magic

so I do


pi@radiodan-libby ~ $ sudo radiodan-device-type radiodan-type-magic

and reboot

Once we’re back up, if I go to

Huffduffer Radiodan

Fri, 12/05/2014 - 10:15

Categories:

RDF

I’ve been wanting a physical radio that plays podcasts for a long time, and it’s something we’ve discussed quite a lot in the

YouID Identity Claim

Tue, 12/02/2014 - 08:20

Categories:

RDF

ni:///sha-1;aGxsejhHVGRzbmJsRDd0LzBJbHFCN0Y3aTdjPQ?http=ods-qa.openlinksw.com


DCMI invites public comment on draft LRMI in RDF

Fri, 11/28/2014 - 23:59

Categories:

RDF
2014-11-28, DCMI invites public comment on a draft RDF specification for LRMI version 1.1. The draft RDF specification can be found at http://dublincore.org/dcx/dcmi-terms/drafts/2014-11-30/. The one-month public comment period runs from 1 December 2014 through 31 December 2014. The RDF specification is intended to embody the current Learning Resource Metadata Initiative version 1.1 term declarations at http://dublincore.org/dcx/lrmi-terms/.

DCMI joins as inaugural member of DLMA

Fri, 11/28/2014 - 23:59

Categories:

RDF
2014-11-28, DCMI, along with IMS Global and the International Digital Publishing Forum (IDPF), announces the formation of the Digital Learning Metadata Alliance (DLMA). The DLMA will focus on coordinating the adoption and development of existing metadata standards in support of digital learning and education. For more information about DLMA, visit http://dlma.org and read the IMS Global press release at http://www.imsglobal.org/pressreleases/IMSPR20141024.pdf.

Automatic Semantic Tagging for Drupal CMS launched

Fri, 11/28/2014 - 14:30

Categories:

RDF

REEEP [1] and CTCN [2] have recently launched Climate Tagger, a new tool to automatically scan, label, sort and catalogue datasets and document collections. Climate Tagger now incorporates a Drupal Module for automatic annotation of Drupal content nodes. Climate Tagger addresses knowledge-driven organizations in the climate and development arenas, providing automated functionality to streamline, catalogue and link their Climate Compatible Development data and information resources.

Climate Tagger for Drupal is a simple, free and easy-to-use way to integrate the well-known Reegle Tagging API [3], originally developed in 2011 with the support of CDKN [4] (now part of the Climate Tagger suite as the Climate Tagger API), into any web site based on the Drupal Content Management System [5]. Climate Tagger is backed by the expansive Climate Compatible Development Thesaurus, developed by experts in multiple fields and continuously updated to remain current (explore the thesaurus at http://www.reegle.info/glossary). The thesaurus is available in English, French, Spanish, German and Portuguese, and can connect content on different portals published in these different languages.

Climate Tagger for Drupal can be fine-tuned to individual (and existing) configuration of any Drupal 7 installation by:

  • determining which content types and fields will be automatically tagged
  • scheduling “batch jobs” for automatic updating (also for already existing content, with the option to re-tag all content or to tag only new concepts found via a thesaurus expansion/update)
  • automatically limiting and managing the volume of tag results based on individually chosen scoring thresholds
  • blending with manual tagging
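Under the hood, a module like this posts content to the tagging API and stores the concepts that come back as tags. As a rough illustration, a minimal HTTP call against such an endpoint could look like the Clojure sketch below; the endpoint URL and parameter names are assumptions, not the documented Climate Tagger API:

;; Sketch only: the endpoint URL and form parameters are assumptions,
;; not the documented Climate Tagger API.
(import '(java.net URI)
        '(java.net.http HttpClient HttpRequest
                        HttpRequest$BodyPublishers HttpResponse$BodyHandlers))

(let [body    "text=Solar+mini-grids+for+rural+electrification&locale=en"
      request (-> (HttpRequest/newBuilder)
                  (.uri (URI/create "http://api.climatetagger.example/service/extract"))
                  (.header "Content-Type" "application/x-www-form-urlencoded")
                  (.POST (HttpRequest$BodyPublishers/ofString body))
                  (.build))]
  ;; The response would list the thesaurus concepts found in the text,
  ;; which the Drupal module then attaches to the content node as tags.
  (println (.body (.send (HttpClient/newHttpClient)
                         request
                         (HttpResponse$BodyHandlers/ofString)))))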


“Climate Tagger [6] brings together the semantic power of Semantic Web Company’s PoolParty Semantic Suite [7] with the domain expertise of REEEP and CTCN, resulting in an automatic annotation module for Drupal 7 with an accuracy never seen before,” states Martin Kaltenböck, Managing Partner of Semantic Web Company [8], which acts as the technology provider behind the module.

“Climate Tagger is the result of a shared commitment to breaking down the ‘information silos’ that exist in the climate compatible development community, and to providing concrete solutions that can be implemented right now, anywhere,” said REEEP Director General Martin Hiller. “Together with CTCN and SWC, we have laid the foundations for a system that can be continuously improved and expanded to bring new sectors, systems and organizations into the climate knowledge community.”

For the Open Data and Linked Open Data communities, a Climate Tagger plugin for CKAN [9] has also been published. It was developed by NREL [10] with the CTCN’s support, harnessing the same taxonomy and expert-vetted thesaurus behind Climate Tagger, helping connect open data to climate compatible content through the simultaneous use of these tools.

REEEP Director General Martin Hiller and CTCN Director Jukka Uosukainen will be talking about Climate Tagger at the COP20 side event hosted by the Climate Knowledge Brokers Group in Lima [11], Peru, on Monday, December 1st at 4:45pm.

Further reading and downloads

About REEEP

REEEP invests in clean energy markets in developing countries to lower CO2 emissions and build prosperity. Through a strategic portfolio of high-impact projects, REEEP works to generate energy access, improve lives and economic opportunities, build sustainable markets, and combat climate change.

REEEP understands market change from a practice, policy and financial perspective. We monitor, evaluate and learn from our portfolio to understand opportunities and barriers to success within markets. These insights then influence policy, increase public and private investment, and inform our portfolio strategy to build scale within and replication across markets. REEEP is committed to open access to knowledge to support entrepreneurship, innovation and policy improvements to empower market shifts across the developing world.

About the CTCN

The Climate Technology Centre & Network facilitates the transfer of climate technologies by providing technical assistance, improving access to technology knowledge, and fostering collaboration among climate technology stakeholders. The CTCN is the operational arm of the UNFCCC Technology Mechanism and is hosted by the United Nations Environment Programme (UNEP) in collaboration with the United Nations Industrial Development Organization (UNIDO) and 11 independent, regional organizations with expertise in climate technologies.

About Semantic Web Company

Semantic Web Company (SWC, http://www.semantic-web.at) is a technology provider headquartered in Vienna (Austria). SWC supports organizations from all industrial sectors worldwide in improving their information and data management. Its products have outstanding capabilities to extract meaning from structured and unstructured data by making use of Linked Data technologies.

Introducing the Linked Data Business Cube

Fri, 11/28/2014 - 12:09

Categories:

RDF

With the increasing availability of semantic data on the World Wide Web and its reutilization for commercial purposes, questions arise about the economic value of interlinked data and the business models that can be built on top of it. The Linked Data Business Cube provides a systematic approach to conceptualizing business models for Linked Data assets. Similar to an OLAP cube, the Linked Data Business Cube provides an integrated view on stakeholders (x-axis), revenue models (y-axis) and Linked Data assets (z-axis), making it possible to systematically investigate the specificities of various Linked Data business models.
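As a toy illustration, the cube can be enumerated programmatically as the cross product of its three axes; the stakeholder and revenue-model values below are invented examples, while the asset values follow the ones discussed later in this post:

;; Toy model of the Linked Data Business Cube: each cell is one
;; (stakeholder, revenue model, asset) combination to be assessed.
;; Stakeholder and revenue-model values are invented examples.
(def stakeholders   [:data-provider :service-provider :end-user])
(def revenue-models [:subscription :advertising :consulting])
(def assets         [:instance-data :metadata :ontology :content :service :technology])

(def cube
  (for [s stakeholders, r revenue-models, a assets]
    {:stakeholder s :revenue-model r :asset a}))

(count cube) ;; => 54 cells to investigate systematically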


Mapping Revenue Models to Linked Data Assets

By mapping revenue models to Linked Data assets we can modify the Linked Data Business Cube as illustrated in the figure below.

The figure indicates that the opportunities to derive direct revenues rise with the increasing business value of a resource. Assets that are easily substitutable generate little incentive for direct revenues but can be used to trigger indirect revenues; this basically applies to instance data and metadata. On the other hand, assets that are unique and difficult to imitate and substitute, i.e. in terms of the competence and investments necessary to provide the service, carry the highest potential for direct revenues. This applies to assets like content, services and technology. Generally speaking, the higher the value proposition of an asset – in terms of added value – the higher the willingness to pay.

Ontologies seem to function as a “mediating layer” between “low-incentive assets” and “high-incentive assets”. This means that ontologies as a precondition for the provision and utilization of Linked Data can be capitalized in a variety of ways, depending on the business strategy of the Linked Data provider.

It is important to note that each revenue model has specific merits and flaws and requires certain preconditions to work properly. Additionally, revenue models often occur in combination, as they are functionally complementary.

Mapping Revenue Models to Stakeholders

A Linked Data ecosystem usually comprises several stakeholders that engage in the value creation process. The cube can help us identify the most reasonable business model for each stakeholder.

Summing up, Linked Data generates new business opportunities, but the commercialization of Linked Data is very context specific. Revenue models change in accordance with the various assets involved and the stakeholders who make use of them. Knowing these circumstances is crucial to establishing successful business models, but doing so requires a holistic and interconnected understanding of the value creation process and of the specific benefits and limitations Linked Data generates at each step of the value chain.

Read more: Asset Creation and Commercialization of Interlinked Data

Highlights of the 1st Meetup on Question Answering Systems – Leipzig, November 21st

Mon, 11/24/2014 - 09:09

Categories:

RDF

On November 21st, the AKSW group hosted the 1st meetup on “Question Answering” (QA) systems. In this meeting, researchers from AKSW/University of Leipzig, CITEC/University of Bielefeld, Fraunhofer IAIS/University of Bonn, DERI/National University of Ireland and the University of Passau presented recent results of their work on QA systems. The following themes were discussed during the meeting:

  • Ontology-driven QA on the Semantic Web. Christina Unger presented the Pythia system for ontology-based QA. Slides are available here.
  • Distributed semantic models for achieving scalability and consistency in QA. André Freitas presented TREO and EasyESA, which employ a vector-based approach to semantic approximation.
  • Template-based QA. Jens Lehmann presented TBSL for template-based question answering over RDF data.
  • Keyword-based QA. Saeedeh Shekarpour presented the SINA approach for semantic interpretation of user queries for QA on interlinked data.
  • Hybrid QA over Linked Data. Ricardo Usbeck presented HAWK for hybrid question answering using Linked Data and full-text indexes.
  • Semantic parsing with Combinatory Categorial Grammars (CCG), presented by Sherzod Hakimov. Slides are available here.
  • QA on statistical Linked Data. Konrad Höffner presented LinkedSpending and the RDF Data Cube vocabulary for applying QA to statistical Linked Data.
  • WDAqua (Web Data and Question Answering) project. Christoph Lange presented the WDAqua project, which is part of the EU’s Marie Skłodowska-Curie Actions Innovative Training Networks. WDAqua focuses on different aspects of the question “how can we answer complex questions with web data?”
  • OKBQA (Open Knowledge Base & Question-Answering). Axel-C. Ngonga Ngomo presented OKBQA, which aims to bring together cutting-edge experts in knowledge base construction and application in order to create an extensive architecture for QA systems with no restriction on programming languages.
  • Open QA. Edgard Marx presented an open-source question answering framework that unifies QA approaches from several domain experts.

The group decided to meet biannually to join efforts. All agreed to investigate existing architectures for question answering systems, in order to offer a promising, collaborative architecture for future endeavours. Join us next time! For more information contact Ricardo Usbeck.

Ali and Ricardo on behalf of the QA meetup

Product Space and Workshops

Fri, 11/21/2014 - 11:12

Categories:

RDF

I ran a workshop this week for a different bit of the organisation. It’s a bit like holding a party. People expect to enjoy themselves (and this is an important part of the process). But workshops also have to have outcomes and goals and the rest of it. And there’s no booze to help things along.

I always come out of them feeling a bit deflated. Even if others found them enjoyable and useful, the general stress of organising and the responsibility of it all mean that I don’t, plus I have to co-opt colleagues into quite complicated and full-on roles as facilitators, so they can’t really enjoy the process either.

This time we were trying to think more creatively about future work. There are various things niggling me about it, and I want to think about how to improve things next time, while it’s still fresh in my mind.

One of the goals was – in the terms I’ve been thinking of – to explore more thoroughly the space in which products could exist. Excellent articles by

Announcing GERBIL: General Entity Annotator Benchmark Framework

Thu, 11/20/2014 - 09:08

Categories:

RDF

Dear all,

We are happy to announce GERBIL – a General Entity Annotator Benchmark Framework; a demo is available online. With GERBIL, we aim to establish a highly available, easily quotable and reliable focal point for Named Entity Recognition and Named Entity Disambiguation (Entity Linking) evaluations:

  • GERBIL provides persistent URLs for experimental settings. By these means, GERBIL also addresses the problem of archiving experimental results.
  • The results of GERBIL are published in a human-readable as well as a machine-readable format. By these means, we also tackle the problem of reproducibility.
  • GERBIL provides 11 different datasets and 9 different entity annotators. Please talk to us if you want to add yours.

To ensure that the GERBIL framework is useful to both end users and tool developers, its architecture and interface were designed with the following principles in mind:

  • Easy integration of annotators: We provide a web-based interface that allows annotators to be evaluated via their NIF-based REST interface. We provide a small NIF library for an easy implementation of the interface (a sketch of such a call follows this list).
  • Easy integration of datasets: We also provide means to gather datasets for evaluation directly from data services such as DataHub.
  • Extensibility: GERBIL is provided as an open-source platform that can be extended by members of the community both to new tasks and different purposes.
  • Diagnostics: The interface of the tool was designed to provide developers with means to easily detect aspects in which their tool(s) need(s) to be improved.
  • Portability of results: We generate human- and machine-readable results to ensure maximum usefulness and portability of the results generated by our framework.
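To illustrate the NIF-based REST interface mentioned above, here is a rough Clojure sketch of posting a NIF document to an annotator. The annotator URL is an invented placeholder, and the minimal payload follows the public NIF core vocabulary:

;; Sketch only: the annotator endpoint is an invented placeholder.
;; The payload is a minimal NIF context expressed in Turtle.
(import '(java.net URI)
        '(java.net.http HttpClient HttpRequest
                        HttpRequest$BodyPublishers HttpResponse$BodyHandlers))

(def nif-document
  "@prefix nif: <http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#> .
   <http://example.org/doc#char=0,29>
       a nif:String , nif:Context ;
       nif:isString \"Leipzig is a city in Germany.\" .")

(let [request (-> (HttpRequest/newBuilder)
                  (.uri (URI/create "http://annotator.example/nif"))
                  (.header "Content-Type" "application/x-turtle")
                  (.POST (HttpRequest$BodyPublishers/ofString nif-document))
                  (.build))]
  ;; A NIF-compliant annotator returns the same document enriched with
  ;; entity annotations, which GERBIL then scores against the dataset.
  (println (.body (.send (HttpClient/newHttpClient)
                         request
                         (HttpResponse$BodyHandlers/ofString)))))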

We are looking for your feedback!

Best regards,

Ricardo Usbeck for The GERBIL Team