This article will introduce one of the world’s most popular RDF data sources and show how RDFox can use it to offer a rich and responsive experience for web-based applications: data loads complete in hours rather than days, and queries return in a matter of milliseconds, orders of magnitude faster than Wikidata’s dedicated query service.
Since 2012, the Wikimedia Foundation has been supporting and hosting an open data initiative called Wikidata, a vast database containing information about everything, from objects to people to abstract concepts. We’re talking about it today because unlike its human-centred kin (Wikipedia among many others), it is available for download in RDF format.
The data model of a single item can be represented as a graph of statements built from entity and property codes. Every entity and property is identified by such a code; the numbering scheme, while formulaic, is too detailed for us to cover in full here. It is, however, important to understand that these codes represent any and all things, from relationships like ‘instance of’ (wdt:P31) to physical beings like ‘house cat’ (wd:Q146). For a more in-depth breakdown of these codes, see this Mediawiki page.
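For illustration, a single item’s data might look like the following in Turtle. The prefixes are Wikidata’s standard ones; the item ID and label here are hypothetical, invented for the sketch:

```turtle
PREFIX wd:   <http://www.wikidata.org/entity/>
PREFIX wdt:  <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# A hypothetical item describing one particular cat:
wd:Q12345678 wdt:P31 wd:Q146 ;        # instance of (P31): house cat (Q146)
             rdfs:label "Felix"@en .  # the item's English label
```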
The Wikidata item corresponding to each object can be reached via the ‘Wikidata item’ link on its Wikipedia page.
Due to its richness and open access, Wikidata is one of the most popular RDF sources in the world. However, because of its immense size, users don’t usually download the entire 15-billion-fact database; instead they focus on the individual sections they are interested in. This is far from a perfect solution, but it is often seen as a necessary evil, required to load the data and compute results in a reasonable time frame. RDFox changes all of this. Instead of taking a day or more, the initial load can now be completed in less than 3 hours, while queries return results in milliseconds, as we’ll see later.
SPARQL (SPARQL Protocol and RDF Query Language) queries can be used to extract data in these situations. Designed and maintained by the W3C, SPARQL is considered the standard query language for RDF triplestores. For more information, read our SPARQL fact file, or head to Stack Overflow where SPARQL is a running theme among Wikidata questions.
Wikidata has a live interface, the Wikidata Query Service, that can be used to query the data and view the results. The service also provides some example queries to test, such as one that could be used for an online cat store. The following query selects all instances of house cat (?item wdt:P31 wd:Q146), along with their labels in English.
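A query to that effect, using the label service that the Wikidata Query Service provides, looks roughly like this (it mirrors the ‘Cats’ example published by Wikidata):

```sparql
SELECT ?item ?itemLabel
WHERE {
  ?item wdt:P31 wd:Q146 .                     # ?item is an instance of house cat
  SERVICE wikibase:label {                    # Wikidata's label service binds
    bd:serviceParam wikibase:language "en" .  # ?itemLabel to the English label
  }
}
```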
The simple structure of SPARQL lends itself well to the quick and easy formulation of queries, enabling the user to pinpoint exactly what they need with little effort. That’s not to say it is easy for the uninitiated to pick up, however, so Wikidata also offers a SPARQL tutorial, which you can find here.
Wikidata is a fantastic resource to connect to your existing datasets. Take the cat store for example; they could connect their cat inventory to Wikidata and help customers make decisions by providing more contextual information about the cats as they are browsing.
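Concretely, the store’s own inventory graph could link each cat for sale to the Wikidata items describing it. The `store:` vocabulary and the breed item ID below are hypothetical, purely for illustration:

```turtle
PREFIX store: <http://example.com/catstore/>
PREFIX wd:    <http://www.wikidata.org/entity/>
PREFIX xsd:   <http://www.w3.org/2001/XMLSchema#>

# A cat in the store's inventory, linked to Wikidata for extra context:
store:cat42 a store:CatForSale ;
    store:price "250"^^xsd:decimal ;
    store:breed wd:Q12345 .   # Wikidata item for the cat's breed (ID illustrative)
```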
We can use a construct query to fetch all Wikidata’s information on cats:
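A sketch of such a CONSTRUCT query, returning every direct statement about each cat item, might be:

```sparql
CONSTRUCT { ?item ?p ?o }
WHERE {
  ?item wdt:P31 wd:Q146 .   # every instance of house cat
  ?item ?p ?o .             # together with all of its direct statements
}
```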
CONSTRUCT queries return new RDF triples built from those that already exist in the database, so the results can be imported into another triplestore without any further processing.
However, depending on the complexity of the query and the amount of data it must search over, Wikidata can take a long time to return results. Moreover, Wikidata throttles processing power, so slow queries will often time out. Even in the trivial cat example, the CONSTRUCT query took 0.9 seconds to retrieve only 706 results. Queries that return larger subsets of data, particularly complex ones spanning several ‘layers’ (for example, cat > breed > colour > name, and so on), will almost always time out. For someone desperate to know the deeper details of our feline friends, this is clearly not good enough. RDFox has the power to change all of this.
A more efficient approach is to download Wikidata and import it into a faster triplestore: RDFox. When preparing this article, the initial load of the entire 15 billion triples took us only 2 hours and 50 minutes, a remarkable figure compared to the tens of hours that are commonplace for an operation of this magnitude.
RDFox is a high-performance knowledge graph and semantic reasoner and, most importantly, an in-memory solution, which is the source of its exceptional querying speed. This matters because the faster you can query the dataset, the closer to real time the results can be served. RDFox also supports SPARQL querying, just like Wikidata, which makes it ideal for this task.
To showcase this power, we took a look at ‘OST Music’ — a hypothetical music streaming service with RDFox at its core that we dreamed up a few months ago. Read our article on OST Music to learn about its superior recommendation system, or using Wikidata in applications more generally.
This time, instead of creating a complex system, we were simply interested in retrieving information in the musical space. We ran three queries of increasing complexity, once over the entire Wikidata dataset, and once over a sub-graph containing only the triples about instances of music groups, as we did for the streaming service. We also used the Wikidata Query Service as a baseline to provide context for the results.
The three queries were as follows: