The new version of schema.org includes the TV and radio extension we have been working on with Jean-Pierre Evain (EBU) and Dan Brickley (for schema.org). This update offers improved support for describing TV and radio shows, for example:
- 'The Hungry Earth' is the 8th episode of the 5th season of Doctor Who
- It is broadcast on BBC One London at 18:15, on the 22nd of May 2010
- It is available on BBC iPlayer for two weeks after that
Embedding such data in web pages means that it can be aggregated by search engines, which can then provide more information around TV and radio search results, as well as instant answers to queries such as 'when is the next EastEnders episode on?'. We built on previous efforts in modelling such information, such as TV Anytime, EBU Core and the BBC Programmes Ontology. The RDFa definition from which the schema.org extension was built has equivalence links to concepts in these ontologies.
This update also maintains backwards-compatibility with previously existing concepts (e.g. TVEpisode, TVSeries and TVSeason) and properties.
The concepts related to this update are the following:
- Series (e.g. 'Doctor Who')
- Season (e.g. 'Doctor Who, Season 5')
- Episode (e.g. 'Doctor Who, Season 5, The Hungry Earth')
- Clip (e.g. 'Clip from The Hungry Earth')
- RadioSeries, RadioSeason, RadioEpisode and RadioClip, their radio-specific sub-classes (e.g. 'the 20th of November 2013 episode of the Today Programme on BBC Radio 4')
- TVSeries, TVSeason, TVEpisode and TVClip, their TV-specific sub-classes
- PublicationEvent, a publication of an episode or a clip, for example a broadcast or an availability on a catch-up service.
- BroadcastEvent, e.g. 'The Hungry Earth is broadcast on BBC One London at 18:15, on the 22nd of May 2010'
- OnDemandEvent, e.g. 'The Hungry Earth is available on BBC iPlayer for two weeks after broadcast'
- BroadcastService, e.g. 'BBC One London belongs to the BBC One parent service'
- broadcaster, e.g. ‘the BBC’
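In RDFa Lite, the Doctor Who example above could be marked up roughly as follows. This is an illustrative sketch; property names should be checked against the published schema.org type pages:

```html
<div vocab="http://schema.org/" typeof="TVEpisode">
  <span property="name">The Hungry Earth</span> is episode
  <span property="episodeNumber">8</span> of
  <span property="partOfSeason" typeof="TVSeason">
    season <span property="seasonNumber">5</span>
  </span> of
  <span property="partOfSeries" typeof="TVSeries">
    <span property="name">Doctor Who</span></span>.
  <div property="publication" typeof="BroadcastEvent">
    It is broadcast on
    <span property="publishedOn" typeof="BroadcastService">
      <span property="name">BBC One London</span>
    </span>
    at <time property="startDate" datetime="2010-05-22T18:15">18:15 on 22 May 2010</time>.
  </div>
</div>
```

A search engine crawling this page could then answer episode and schedule queries directly from the markup.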
Of course, this extension is just the beginning of better support for broadcast and media-related data. For the time being we focused on basic information and on fixing a few issues in the first schema.org release, but there are more areas to explore. As consensus builds around them, possible extensions could also be proposed to schema.org or published directly in RDFa as extension markup using existing vocabularies such as those published by the EBU or the BBC. This includes, for example, support for segmentation (tracklists, chapters, etc.) and its links to media fragments, more detailed contributor/character information (e.g. 'David Tennant plays the Xth Doctor in this episode') and the description of multiple video, audio and data tracks (e.g. 'This episode has two audio tracks, one in French and one in English').
Regardless of any future improvements, schema.org's TV and Radio vocabulary now provides a stable basis for Web sites to share rich descriptions of TV/Radio content. See schema.org's full listing of vocabularies to see this work in its wider context.
TPAC 2013 got under way in Shenzhen, China, this month. The RWW group didn’t have a session this year, as not many from the group were able to travel; hopefully we’ll have a room at next year’s event.
Five-star Linked Data in JSON got one step closer, as JSON-LD moved to Candidate Recommendation. Additionally, RDF 1.1 JSON Alternate Serialization was released as a Note. An interesting report by McKinsey was published showing that Open Data can unlock 3 to 5 trillion dollars in value each year.
Congratulations to our co-chair, Andrei Sambra, who successfully defended his PhD thesis on “Data Ownership and Interoperability in the Decentralized Social Semantic Web”. There was also some interesting discussion this month on advanced uses of ACLs.

Communications and Outreach
Henry Story and Andrei Sambra, among others, were at a well-attended four-day workshop in Paris, hosted by Mozilla, entitled “Weave the web we want”.
Some of you may be interested in the interview I gave to the LOD2 group. I talked about the advantages of read-write Linked Data, and gave pointers to this and other community groups.
YouID, featured last month, is now available in both the iOS and Android app stores. Congratulations on bringing the goodness of linked data identity to the two biggest mobile platforms.
On Monday, December 2 at 1.30 pm in Room P-702 (Paulinum), Mohamed Morsey will give a final rehearsal of his PhD defense “Efficient Extraction and Query Benchmarking of Wikipedia Data”. Guests are encouraged both to provide feedback on how the talk could be improved and to ask realistic questions.
As always, Bachelor and Master students are able to get points for attendance and there is complimentary coffee and cake after the session.
Efficient Extraction and Query Benchmarking of Wikipedia Data
The thesis consists of two major parts:
- Semantic Data Extraction: the objective of this part is to extract data from a semi-structured source, i.e. Wikipedia, and transform it into a networked knowledge base, i.e. DBpedia. Furthermore, it aims to keep that knowledge base up to date, always in synchronization with Wikipedia.
- Triplestore Performance Evaluation: semantic data is normally stored in a triplestore, e.g. Virtuoso, to enable efficient querying of that data. In this part we have developed a new benchmark for evaluating and contrasting the performance of various triplestores.
Two live demos of PoolParty Semantic Integrator demonstrate new ways to retrieve information based on linked data technologies
Linked data graphs can be used to annotate and categorize documents. By transforming text into RDF graphs and linking them with LOD sources like DBpedia, Geonames and MeSH, completely new ways of querying large document repositories become possible.
An online demo illustrates these principles: imagine you were an information officer at the Global Health Observatory of the World Health Organisation. You inform policy makers about the global situation in specific disease areas, in order to direct support to the health programs that need it. For your research you need data about disease prevalence in relation to socioeconomic factors.
Datasets and technology
About 160,000 scientific abstracts from PubMed, linked to three different disease categories, were collected. The abstracts were automatically annotated with PoolParty Extractor, based on terms from the Medical Subject Headings (MeSH) and Geonames that are organized in a SKOS thesaurus managed with PoolParty Thesaurus Server. The abstracts were transformed to RDF and stored in the Virtuoso RDF store. In the next step, it is easy to combine these data sets within the triplestore with large linked data sources like DBpedia, Geonames or YAGO. The use of linked data makes it easy, for example, to group annotated countries by the Human Development Index (HDI). The hierarchical structure of the thesaurus was used to collect all concepts that are connected to a specific disease.
This demo was developed using the sgvizler library to visualize SPARQL results; AngularJS was used to dynamically replace variables in SPARQL query templates.
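As a sketch of how such a query might look, grouping annotated abstracts by an HDI category of the countries they mention could be expressed in SPARQL roughly along these lines (the graph structure, prefixes and the humanDevelopmentIndex property are illustrative, not necessarily those used in the actual demo):

```sparql
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX owl:     <http://www.w3.org/2002/07/owl#>
PREFIX dbo:     <http://dbpedia.org/ontology/>

# Count abstracts per HDI band of the countries they are annotated with
SELECT ?hdiBand (COUNT(DISTINCT ?abstract) AS ?abstracts)
WHERE {
  ?abstract dcterms:subject ?country .         # annotation produced by the extractor
  ?country  owl:sameAs      ?dbpediaCountry .  # link into DBpedia
  ?dbpediaCountry dbo:humanDevelopmentIndex ?hdi .
  BIND (IF(?hdi >= 0.8, "very high",
        IF(?hdi >= 0.7, "high", "medium/low")) AS ?hdiBand)
}
GROUP BY ?hdiBand
ORDER BY DESC(?abstracts)
```

The point is that the join across datasets (annotations, DBpedia, HDI figures) happens entirely inside the triplestore, with no custom integration code.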
Another example of linked data based search in the field of renewable energy can be tried out here.
Today we invite guest blogger Gerardo Capiel, VP of Engineering of Benetech, who joined the Consortium to participate in the Digital Publishing Interest Group (DPUB IG). This is cross-posted on the Digital Publishing Blog.
In October of this year, Benetech joined the World Wide Web Consortium to more deeply participate in the evolution of standards that enable educational content to be born accessible. Our first opportunity for deep engagement came earlier this month with the W3C Technical Plenary and Advisory Committee (TPAC) meeting in Shenzhen, China. During the weeklong TPAC meeting, many of the W3C Working Groups that develop W3C Recommendations met to advance their projects and provide an opportunity for others to observe and inform. TPAC was also a great opportunity for face-to-face collaboration with others in the field.
One of the people I had an opportunity to collaborate with in person was Charles McCathie Nevile, aka Chaals, who has long been involved in W3C activities and accessibility. Chaals works for Yandex, one of the mainstream search engines, which leverages Schema.org. Schema.org is a collaboration between Bing, Google, Yahoo and Yandex to create and support a standard set of schemas for structured data markup on web pages. These standardized schemas enable webmasters to improve the discoverability of their content. You can experience the benefits of this standardized markup by searching Google for ‘Potato Salad’, clicking ‘Search Tools’ and filtering search results by the properties defined at http://schema.org/Recipe, such as ingredients, cook time or caloric content.
For the past year, thanks to funding from the Gates Foundation, Benetech has led a working group to propose a set of Schema.org accessibility properties that can be used with existing schemas to enable the discovery of accessible educational resources. The need for a standard set of properties surfaced during our participation as a launch partner for the Learning Registry in 2011. While the Learning Registry can leverage Schema.org properties defined by the Learning Resources Metadata Initiative (LRMI), such as educational alignment, there was no standard set of properties that would enable an educator to find closed-captioned videos for hearing impaired students or algebra textbooks that used MathML, an accessible format for mathematical expressions. Schema.org was the ideal place to define these properties, because they would not only benefit tools such as the Learning Registry, but would also benefit broader Open Web Platform technologies and mainstream search engines, such as Google and Yandex.
During TPAC Chaals, Markus Gylling (CTO of IDPF and co-chair of the W3C Digital Publishing Interest Group) and I were able to resolve the remaining concerns that Schema.org representatives had with the proposed properties. The following week, Dan Brickley, the editor of Schema.org, publicly announced that Schema.org would be adopting the accessibility properties proposed by the Accessibility Metadata project and IMS Global Access for All.
Soon after Dan’s announcement the IDPF updated their EPUB 3 Accessibility Guidelines to recommend to the publishing industry the use of those properties with digital textbooks and ebooks. These Guidelines have received broad support from the American Association of Publishers (AAP) and the National Federation of the Blind (NFB). I hope the Guidelines will be instrumental to the newly introduced ‘Technology, Education and Accessibility In College and Higher Education Act’ (TEACH) by U.S. Congressman Tom Petri.
At TPAC I also participated in three W3C group face-to-face meetings. The first was with the Digital Publishing Interest Group (DPUB IG), which Benetech is a member of along with Adobe, Google, Hachette Livre, IBM, Pearson and many others. The mission of the group is to provide a forum for experts in the digital publishing ecosystem for technical discussions, gathering use cases and requirements to align the existing formats and technologies needed for digital publishing with those used by the Open Web Platform. The goal is to ensure that the requirements of digital publishing can be answered, when in scope, by the Recommendations published by W3C.
Per the group’s charter, Suzanne Taylor from Pearson and I put together a set of use cases related to accessibility for digital publishing. I had the opportunity to discuss the use cases related to image and diagram accessibility with the SVG Working Group, which is preparing the SVG 2.0 specification for last call at the end of this year. Scalable Vector Graphics (SVG) has been deemed the third most important EPUB 3 feature for publishers to adopt by the AAP EPUB 3 Implementation Project. SVG already contains a number of mechanisms that enhance the accessibility of digital publications, particularly those in the STEM field. As a result, SVG is an excellent standard to build upon to further address the needs of students with disabilities.
The first use case I discussed was SVG as a fallback and bridging solution for the accessibility of mathematical expressions. Currently, MathML is the format publishers are recommended to use for accessibility. However, MathML adoption among traditional reading systems has been abysmal, and there is no sign that it will soon improve. Google’s Chrome browser recently dropped support for MathML, and MathPlayer, a popular Microsoft Internet Explorer (IE) plug-in used by students with disabilities, is no longer supported in IE 11. Furthermore, MathML does not work with many mainstream reading systems, such as the Kindle, and even where there is visual rendering support, there is little to no accessibility support. As a result, many publishers, such as O’Reilly and Inkling, have resorted to converting mathematical expressions from MathML to SVG or PNG graphics, which either loses the information needed by blind or vision impaired (BVI) students or increases the cost to publishers of complying with accessibility requirements.
My proposal to the SVG Working Group was for the SVG 2.0 specification to support embedding MathML within SVG along with granular verbal descriptions of the expression as a lowest common denominator for assistive technology. This approach would broaden compatibility and not take away information that could be leveraged by future assistive technologies. I also recommended that SVG express explicit support for the recently drafted ARIA 1.1 describedAt property that enables MathML and corresponding verbal descriptions to also be referenced by an external URI (URL). This URL would provide access to the source MathML and alternative formats, such as Nemeth Braille, and would enable educators and disability services professionals to correct and improve MathML markup, which may render correctly visually, but poorly aurally.
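To make the shape of this proposal concrete, the markup below is a purely hypothetical sketch; neither MathML embedding in SVG nor the aria-describedat attribute was part of a finished specification at the time, and the URI is invented:

```xml
<svg xmlns="http://www.w3.org/2000/svg" role="img" viewBox="0 0 320 80"
     aria-describedat="http://example.org/descriptions/quadratic-formula">
  <title>The quadratic formula</title>
  <!-- granular verbal description: the lowest common denominator
       for assistive technology -->
  <desc>x equals minus b, plus or minus the square root of
        b squared minus four a c, all over two a.</desc>
  <!-- hypothetical: source MathML carried along as foreign content,
       available to future assistive technologies -->
  <foreignObject width="0" height="0">
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <!-- the MathML for the expression would go here -->
    </math>
  </foreignObject>
  <!-- the visually rendered glyphs of the expression as SVG paths -->
</svg>
```

The external URI referenced by aria-describedat would then serve the source MathML and alternative formats such as Nemeth Braille.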
The SVG group was very open and responsive to my proposal, and Richard Schwerdtfeger from IBM took the first action by adding support for aria-describedat to the SVG 2.0 specification draft. With these standards in place, our plans to bring to market a prototype tool for publishers and other content creators to convert MathML to described SVG become even more compelling.
Next I discussed with the SVG Working Group research and development that Benetech had undertaken to use Open Web Platform technologies to implement the sonification features of MathTrax. MathTrax is a graphing tool for blind and low vision middle and high school students to access visual math data and graph or experiment with equations and datasets.
Doug Schepers, a staff member of the W3C, demonstrated a project we had collaborated on to sonify graphs of mathematical expressions, such as a parabola, using the Web Audio API and the nascent Web Speech API supported in Safari 6.1 and the upcoming Chrome 33. We discussed that, in order to generalize this approach to work with SVG graphs generated by other tools, such as D3.js, we needed standard semantics to identify which SVG elements represent the data and the x and y axes. Richard Schwerdtfeger suggested that new ARIA roles be enumerated for this purpose. I look forward to moving this forward with Rich.
One of the advantages of the SVG format for accessible images and graphs is that textual descriptions of the whole image, or of individual elements, can be included within the SVG itself. This is superior to the use of the alt property with HTML image elements, because descriptions can be lengthier than those typically used with the alt property, and they are portable with the image content. Unfortunately, SVG descriptions are limited to plain text and can’t use rich HTML markup to incorporate tables, lists or links. I recommend reading some of the recommendations by the DIAGRAM Center on the use of structured elements for image descriptions, particularly in the STEM field.
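For example, SVG’s built-in title and desc elements already let a plain-text description travel with the image (the description text here is invented for illustration):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100" role="img">
  <title>Graph of a parabola</title>
  <!-- the description is part of the image file itself,
       so it survives copying and republication -->
  <desc>A parabola opening upward with its vertex at the origin,
        passing through the points (-2, 4) and (2, 4).</desc>
  <path d="M 20 20 Q 100 180 180 20" fill="none" stroke="black"/>
</svg>
```

Note that the desc content is limited to text: it cannot contain the table or list markup that a complex STEM diagram often needs, which is exactly the limitation discussed above.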
I was in luck with this use case as the SVG Working Group was already considering supporting HTML within SVG 2.0. My use case was further confirmation that SVG could benefit from this support and an action was created specific to this use case.
Because we need a way to deal with backlist titles that don’t leverage these standards for rich image descriptions, or that have insufficient descriptions, I discussed the need for mechanisms to support crowdsourced or post-production addition of image descriptions to titles that have already been published. Previously I discussed that ARIA 1.1’s describedAt property is one enabling standard that tools such as the DIAGRAM Center’s Poet can leverage. Markus Gylling and I also believe that the emerging Open Annotations specification provides another approach. The good news is that Doug Schepers from the SVG Working Group believes this will be possible, based on his talks with the developers at Hypothes.is, who are building one of the first tools to leverage the Open Annotations specification.
Besides aural and textual modalities, the content of SVG graphs can also be conveyed via tactile modalities. Educators of blind and visually impaired students have commonly used tactile graphics to make graphical content accessible. SVG is the recommended digital format for creating tactile graphics, which can be printed with specialized printers called embossers. These specialized printers are expensive, so the DIAGRAM Center and others have been funding and conducting research on the use of 3D printers for making 2D SVG graphics accessible. Given MakerBot’s recently announced mission to put a 3D printer in every classroom, this is very exciting.
Haptics are another promising technology for making SVG graphics accessible via a tactile modality. The DIAGRAM Center recently partnered with ETS to research tablet-based haptic display of graphical information and explore the inclusion of this technology in EPUB 3 textbooks.
Ideally the same SVG graphic could be printed with ink, an embosser or a 3D printer. Based on discussions with SVG Working Group we determined that a solution could be to leverage CSS media queries, which today are used to format web pages for print. An action item was taken for Doug Schepers and Tab Atkins who also sits on the CSS Working Group to work on tactile, 3D and haptic media queries for SVG.
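No such media types exist today; purely as a sketch of the idea under discussion, a tactile-aware stylesheet might one day look something like this (the tactile media type and the class names are hypothetical):

```css
/* Default: styling for ink and screen rendering */
.grid-line { stroke: #ccc; stroke-width: 0.5; }
.data-line { stroke: #004080; stroke-width: 1; }

/* Hypothetical media type for embossed or 3D-printed output:
   drop decorative detail, thicken the essential lines */
@media tactile {
  .grid-line { display: none; }
  .data-line { stroke: black; stroke-width: 3; }
  text       { display: none; } /* labels rendered separately in braille */
}
```

The same source graphic would then adapt to ink, embosser or 3D printer, just as print stylesheets adapt web pages today.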
Finally, I presented to the SVG Working Group the need for SVG images to be easily reusable within a document and across documents without the need for the HTML image element, which limits many of SVG’s advanced capabilities. The SVG Working Group recommended that iframes be used to inline the same SVG graphic across multiple locations in a document. This seems like a reasonable approach, and I will be discussing it further with our DIAGRAM Center partners, the DPUB IG and developers of assistive technologies.
I was quite pleased with how open and curious the SVG Working Group was to issues around accessibility. This was exactly the type of collaboration that the Digital Publishing Interest Group was meant to create.
My last day at TPAC was spent with the W3C Independent User Interface (Indie UI) Working Group. Their mission is to facilitate interaction in Web applications that is input-method independent, and hence accessible to people with disabilities. For example, Web application authors wishing to intercept a user’s intent to ‘zoom in’ on a map view currently need to ‘listen’ for a wide range of events that vary by operating system or device, such as CTRL-Plus, Command-Plus, pinch/zoom touch gestures, etc. Assistive technologies further expand and complicate the possible interactions that web developers must account for.
Ideally, content and applications would automatically adapt to the user’s preferences. To make this possible, the Indie UI team is also drafting a user context specification. The Indie UI team was very interested in learning about our work with Schema.org to specify accessibility properties. One use case enabled by leveraging both specifications is applications that automatically narrow search results based on the user’s preferences. For example, if the user context indicated a preference for videos with captions, a search query could automatically narrow results to captioned videos.
I’m very excited to see all these standards coming together to enable a born accessible future and reaping its benefits. I’d like to thank the W3C for putting on a great event, which crystallized for me the importance of the W3C’s mission.
Although Jakob literally said “the podling is leaving the nest”, I think what we are actually leaving is the burrow, to set our sights higher. After 11 months in incubation, last week Apache Marmotta finally became a top-level project of the Apache Software Foundation. Thanks to all who have contributed to it!
I think Sebastian has already written a great summary of the story behind Marmotta so far. But it does not end here, not at all! Now the project has to demonstrate the potential of Linked Data technologies at the next level. So I invite all potential developers, researchers and users reading these lines to join us in the further development of Marmotta.
- “Learn” everything I hear, see and experience in my life, and always be with me, without requiring further interaction from me.
- Help with writing short, simple e-mails that take time to write but contain only basic information (meeting requests, pushing additional information).
“Dear XY, following the XY meeting I attach the following documents to this e-mail…”
- Note down my ideas that come suddenly out of nothing; I always forget half of them (the other half I try to …).
“I saved what you said in the last minute; the idea will be named ‘Project Plan X’. The date is today, 20/12/2013, 16:00.”
- Always be available and answer basic questions whose answers are obvious to me (for example, a colleague asked me where to log in to a service, which is quite easy …).
“XY, please find the requested information on this page (HTML link); please feel free to ask any other question in this respect.”
- Remind me of important events (birthdays, …).
“Your father’s birthday is next week.”
- Help with ideas when I am writing a book, during work, or even when choosing where to buy a present.
“Have you ever heard Einstein’s famous quote, ‘It’s not that I’m so smart, it’s just that I stay with problems longer’?”
- Automatically collect information on the web that I am interested in. I spend 10–15 minutes every morning reading the latest news to keep up to date. However, I want to follow many more resources; I am simply not able to read all of them, and only a minority of the news items interest me.
“I would recommend an article on the topic ‘Artificial agents’; the title is ‘New intelligent AI is produced by IBM’, and the most interesting part of the article is …”
- Have a personality, or at least act as male or female. Maybe it could even have a more detailed personality, like being cheeky, cheerful, happy or humorous.
“How is your mood today? Want to hear some inspiring information about your friend? I just looked it up on Facebook.”
- Care about my safety and health: not just about eating this or that, but, if I am driving a car and it is clear that a bicycle is about to cross right in front of me, help draw my attention to it.
“A bicycle will appear from your right.”
- Be capable of interacting with me and understanding me. Conversation is important for people; someone who pays attention to me always recharges me.
“What will we do today (together)? How do you feel?”
Civic Services. As we described earlier, this work provides vocabulary such as GovernmentService for describing services of various kinds. As part of this work, we have also updated ContactPoint, which now for example provides a mechanism for describing contact points for services which support users with hearing impairments.
TV/Radio. These long-awaited changes bring a number of adjustments to the existing schema.org vocabulary for TV, including adding parent types such as Series with distinct types for RadioSeries and TVSeries. Many thanks are due to Yves Raimond (BBC) and Jean-Pierre Evain (European Broadcasting Union) for leading this work.
We also made a small but useful improvement to the Organization type, by adding department and subOrganization properties that relate organizations to each other. These can be used when describing common situations such as a bookshop containing a coffeeshop, or a larger store that includes a pharmacy, where details such as opening hours or contact information vary.
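In microdata, the bookshop-with-a-coffeeshop case might be marked up along these lines (the names and opening hours are invented for illustration):

```html
<div itemscope itemtype="http://schema.org/Organization">
  <span itemprop="name">Example Books</span>
  <div itemprop="department" itemscope
       itemtype="http://schema.org/CafeOrCoffeeShop">
    <span itemprop="name">Example Books Café</span>
    <!-- the café keeps different hours from the bookshop itself -->
    <meta itemprop="openingHours" content="Mo-Sa 10:00-17:00">
  </div>
</div>
```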
Finally, we made some changes to the Event type, adding support - via an eventStatus property - for canceled, postponed or rescheduled events, as well as a previousStartDate property to help describe rescheduled events more accurately.
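A rescheduled event, for instance, could now be described like this in microdata (with illustrative names and dates):

```html
<div itemscope itemtype="http://schema.org/Event">
  <span itemprop="name">Example Festival</span>
  <link itemprop="eventStatus" href="http://schema.org/EventRescheduled">
  <!-- the originally announced date, kept for reference -->
  <meta itemprop="previousStartDate" content="2013-11-02">
  New date: <time itemprop="startDate" datetime="2013-12-14">14 December 2013</time>
</div>
```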
On Monday, November 18 at 3 – 4 pm (not 1.30 – 2.30 pm!) in Room P-702 (Paulinum), Andreas Nareike and Natanael Arndt will present their current project “Electronic Resource Management in the context of Libraries” and their current research, and give an outlook on their PhD topics. As always, Bachelor and Master students are able to get points for attendance and there is complimentary coffee and cake after the session.
Electronic Resource Management in libraries
In this project we want to apply Semantic Web technologies to manage, for instance, licenses for electronic journals at the Leipzig University Library. We focus on building a scalable, reusable and intelligent data management platform.
The platform should accumulate and homogenize heterogeneous data, in RDF and other representation formats, from different provenances.
Our current research
When reusing ontologies or vocabularies in an application context, a gap occurs between the domain ontology and the program logic. By analogy with the software product line method from software engineering, we introduce the application schema as a substantiation of the domain ontology. This application schema defines a subset of terms from the domain vocabularies resp. ontologies and enriches the resulting schema with application-specific knowledge.
While keeping the terminological knowledge in domain ontologies, this approach encourages application engineers to reuse existing domain vocabularies more extensively in combination with software components.
With the aid of the Semantic Web Application Framework OntoWiki, we implement a use case within the context of library information systems as an application on the Web of Data. We thereby utilize the application schema to describe forms, component interfaces and restricted views for roles in workflows.
Chromecasts are around £40 on Amazon.co.uk, and we’re interested in using them at work, so I got one to look at.
Here’s a walk through of the setup using Mac OS X (10.8.4) on a Samsung 6000-series.
Overall, the setup was a bit longer and more involved than I’d anticipated, and it’s not really clear what’s going on unless, like me, you’re working with a device that does more or less the same thing. Basically the device broadcasts its own network, you connect to it, and then tell it your wifi connection details, which is what we are doing in the walk-through below.
This is the 1-8 November 2013 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and the public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media: a snapshot of how W3C and its work are perceived in online media. You may tweet your demos and cool dev/design stuff to @koalie, or write me an e-mail. If you have suggestions for improvement, please leave a comment.

W3C and HTML5 related Twitter buzz
[What was tweeted frequently, or what caught my attention. Most recent first; popularity is flagged with a figure, the number of times the same URI or tweet was quoted/RTed.]
- (37) The Register: How the W3C met its Waterloo at the Do Not Track vote showdown
- (39) ASCII.jp: 日本語の縦書き文化を守る！W3C、電子出版の国際標準化を推進 (Protecting Japan's vertical-writing culture! W3C promotes the international standardisation of electronic publishing)
- (22) RDF Candidate Recommendations: W3C Invites Implementations of five Candidate Recommendations for version 1.1 of the Resource Description Framework (RDF)
- (64) W3C Blog: Welcoming Test the Web Forward to W3C (also via the Testtwf blog)
28 articles this week. A selection follows. Highlights:
- Tim speaks on encryption cracking
[Most recent first. Find keywords and more on our Press clippings]
- The Guardian (7 November), Tim Berners-Lee: encryption cracking by spy agencies ‘appalling and foolish’
- Business Insider (4 November), Why HTML, The Web’s Publishing Language, Is Still Relevant In The App-Crazy Mobile Age
- Ars Technica (31 October), Cisco releases free and libre H.264 code for browsers
- TechCrunch (29 October), Firefox Gets Guest Browsing Mode On Android, Web Audio API Support On All Platforms
- The Register (29 October), Spread the gospel! Tim Berners-Lee’s Open Data Institute goes global
- The Register (28 October), Do Not Track W3C murder plot fails by handful of votes
On Monday, November 11 at 1.30 – 2.15 pm in Room P-702 (Paulinum), Muhammad Saleem will present the ISWC Big Data Track Challenge winning paper “Fostering Serendipity through Big Linked Data” and “DAW: Duplicate-AWare Federated Query Processing over the Web of Data”. As always, Bachelor and Master students are able to get points for attendance and there will be complimentary coffee and Berliners after the session.

“Fostering Serendipity through Big Linked Data” and “DAW” by Muhammad Saleem (30 minutes + question time)
Muhammad Saleem completed his Bachelor in Computer Software Engineering at N-W.F.P University of Engineering and Technology and his Master in Computer Science and Engineering at Hanyang University, South Korea. Currently, he is a PhD student at Agile Knowledge Engineering and Semantic Web (AKSW), University of Leipzig, Germany. His research interests include federated SPARQL query processing over Linked Data, knowledge extraction and database management.
He will give a brief talk about two papers presented at ISWC 2013: 1) “DAW: Duplicate-AWare Federated Query Processing over the Web of Data” and 2) “Fostering Serendipity through Big Linked Data”. DAW is a duplicate-aware approach to federated SPARQL query processing that achieves the same recall while querying fewer sources. It can be used in combination with any federated SPARQL query engine to optimize the number of sources selected, thus reducing both the overall network traffic and the query execution time of existing engines. The second paper aims to foster serendipity through Big Data triplification, its continuous integration, and visualization. As a proof of concept, the paper shows the integration and visualization of the constant flow of bio-medical publications with the 7.36-billion-triple Linked Cancer Genome Atlas (TCGA) dataset.
The RDF Working Group has published two Proposed Recommendations today:
- JSON-LD 1.0. JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines. Comments are welcome through 05 December.
- JSON-LD 1.0 Processing Algorithms and API. This specification defines a set of algorithms for programmatic transformations of JSON-LD documents. Restructuring data according to the defined transformations often dramatically simplifies its usage. Furthermore, this document proposes an Application Programming Interface (API) for developers implementing the specified algorithms. Comments are welcome through 05 December.
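To give a flavour of the syntax, a minimal JSON-LD document looks like this (using the FOAF vocabulary and an invented example URI):

```json
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    }
  },
  "@id": "http://example.org/people#jane",
  "name": "Jane Doe",
  "homepage": "http://example.org/jane/"
}
```

The @context maps plain JSON keys to vocabulary IRIs, which is what provides the smooth upgrade path from existing JSON: the body of the document stays idiomatic JSON while becoming Linked Data.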