In RDFox v5.0.0, we have changed the way the OWL implementation works, along with a number of other things (see here).
In this article, we will outline the difference between the two approaches, and why the new one is better. If you’d like to follow along, you can find the data and ontologies in this GitHub repository.
In both RDFox v4 and v5, OWL is supported in two syntaxes:
Axioms can be imported in functional-style syntax (fss):
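For illustration, here is a small subclass hierarchy in functional-style syntax, using the cat example that appears later in this article (the prefix IRI is a placeholder; the files in the repository may differ):

```
Prefix(:=<https://oxfordsemantic.tech/example#>)
Ontology(
  SubClassOf(:Cat :Mammal)
  SubClassOf(:Mammal :Animal)
)
```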
And in Turtle (ttl):
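For example, the subclass hierarchy from the cat example used later in this article looks like this in Turtle (the prefix IRI is a placeholder; the files in the repository may differ):

```turtle
@prefix : <https://oxfordsemantic.tech/example#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Cat rdfs:subClassOf :Mammal .
:Mammal rdfs:subClassOf :Animal .
```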
Now, Turtle has the advantage that you can ‘query the ontology’ directly. However, it also means that the ontology and the data are mixed together, which we may wish to avoid.
When working with fss ontologies, there is no change between version 4 and 5. These are imported with:
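For example, in the shell (the file name is an assumption for this walkthrough):

```
import ontology.fss
```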
An ontology can also be applied to a particular named graph with:
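A sketch, using the same `import > <graph>` target-graph form used for data imports (graph and file names are assumptions):

```
import > :myGraph ontology.fss
```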
Furthermore, in v5 we can specify what named graph an ontology should be applied to directly in the ontology file:
This means that we can have a single file with multiple ontologies, each of which is applied to a particular named graph, and we can import it with just one command:
Note that in this case we only have one ontology with one named graph.
However, when it comes to ontologies in Turtle, there is a difference between v4 and v5. In v4, Turtle ontologies can be enabled at datastore creation with the following flag:
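As we recall from the v4 documentation, this looked roughly as follows (datastore name and type are illustrative; please check the v4 docs for the exact option name and values):

```
dstore create myStore par-complex-nn owl-in-rdf-support relaxed
```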
However, in v4 ontologies cannot be parsed if they are in a named graph, nor can they be applied to data that is in a named graph. v5 addresses this issue.
Let us go through an example of the new OWL implementation in the shell:
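We start by creating a data store and importing the data and the ontology into separate named graphs (file names and the :data graph name are assumptions for this walkthrough; :ontology matches the graph used in the schema-reasoning section below):

```
dstore create catStore
active catStore
import > :data data.ttl
import > :ontology ontology.ttl
```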
At this point, the triples of the ontology have not yet been parsed as actual axioms: no triples have been inferred, as you can see by running:
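For instance, a query along these lines (the :data graph name is an assumption) returns only the explicitly imported triples:

```sparql
SELECT ?s ?p ?o WHERE { GRAPH :data { ?s ?p ?o } }
```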
Note above that our cats are not inferred to also be members of other classes, i.e. mammals and animals.
You can see what axioms apply to the data graph with this command:
which returns an empty list, meaning that no axioms apply (yet).
We will instead parse the axioms asynchronously with:
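In v5 this is done with the importaxioms shell command; the sketch below uses our assumed graph names, and you should check the linked documentation for the exact argument order:

```
importaxioms :ontology > :data
```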
See the docs here for the specifics of the command.
Now we can see that inferences have happened:
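For example, querying the types of a particular cat (using :sylvester from the discussion below; the :data graph name is an assumption) should now also show the inferred classes:

```sparql
SELECT ?class WHERE { GRAPH :data { :sylvester rdf:type ?class } }
```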
We can also export the axioms to a file:
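As we recall, the export command takes a file name and a format name; for functional-style syntax this looks roughly as follows (check the docs for the exact format identifier):

```
export ontology.fss "text/owl-functional"
```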
Now, it is generally advisable to have the ontology and data kept in separate named graphs, but there is nothing stopping us from putting both into the default graph and parsing axioms like this:
Note that rdfox:DefaultTriples is the name of the default graph in RDFox.
What we have described so far was ‘data reasoning’, i.e. given a dataset and an ontology, we apply the ontology’s axioms to the dataset to draw inferences.
This is what OWL 2 RL (the fragment supported by RDFox) was designed to achieve.
However, some ‘schema’ reasoning can also be achieved with RDFox. To do this, RDFox can apply a predefined set of 15 rules to a named graph containing OWL axioms in Turtle format, deriving new such axioms (see more here).
The following command:
will then materialise the fact that :Cat is a direct subclass of :Animal in the :ontology graph.
The rule that derives this fact is:
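In RDFox’s Datalog syntax for named graphs, a rule capturing this subclass transitivity can be sketched as follows (shown here over the :ontology graph; the predefined rule set may state it slightly differently):

```
:ontology(?X, rdfs:subClassOf, ?Z) :-
    :ontology(?X, rdfs:subClassOf, ?Y),
    :ontology(?Y, rdfs:subClassOf, ?Z) .
```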
Here, ?X is bound to :Cat, ?Y to :Mammal, ?Z to :Animal.
There are 14 other rules that derive similar results.
This contrasts with the data reasoning approach. In data reasoning a specific cat, say :sylvester, is inferred to be a mammal with the first axiom.
Then, given that :sylvester is now also a mammal, he is further inferred to be an animal, this time with the second axiom.
For devs using RDFox persistence:
Between versions 4 and 5 the persistence format has changed to improve usability and add SHACL support. However, because OWL support is now more straightforward than before, this is not a problem: we can migrate from v4’s persistence using the usual transcribe command in the v4 shell (see instructions here).
In short, this command produces a script (stable across versions) that we can use to recreate the persisted data in v5, since the old database will have contained both the axioms and the triples.
If you wish for the axioms to be parsed, you then need to run:
If you further wish to add schema reasoning (as was automatically added in v4), then just run:
We hope this short tutorial has guided you through the latest updates to our OWL implementation. If you haven’t given it a go yourself yet, you can find the repository here. You can work through this example both in the shell and over REST (with cURL).