The two branches of AI, machine learning and semantic reasoning, are often discussed as opposing forces: one a probabilistic black box, the other reliant on provable logical inferences. In reality, their strengths can be combined into a solution greater than the sum of its parts. While exploring RDFox, our in-house experts at Volvo Cars set out to do just that, pairing knowledge graph reasoning with object detection.
With this in mind, there were a few features of RDFox in particular that we wanted to explore:
· OWL 2 to build ontologies
· Semantic reasoning for a real-time decision engine
· In-memory data storage and management well suited for edge applications
We wanted a project that used these features, both to learn more about RDFox and to produce something visually engaging that would demonstrate the potential to our colleagues. So, after some brainstorming, we decided to try feeding the database with object detection information.
Object detection is a growing field within computer vision concerned with identifying objects as well as their relative size and position. In this case, we used an open-source pre-trained model covering the 80 classes of the MS COCO dataset, as this suited our purposes for the project. These computer vision models can use various types of input data to extract features for classification, such as point cloud data from LiDAR or radar; we opted for a trusty web camera to keep things simple. The model we used is YOLOv5, an open-source PyTorch implementation and further-developed version of the YOLO algorithm by Joseph Redmon. Like Doctor Frankenstein, he became scared of his own creation and decided to halt his research when he saw its use in military applications and because of privacy concerns. He gave an interesting TED talk about the potential benefits of this technology, as well as the dangers it unlocks. With this in mind, we decided to approach the project carefully, starting with the direct information that the model provides.
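For readers who want to reproduce the detection side, the sketch below shows roughly how a pre-trained YOLOv5 model can be wired to a webcam. It assumes the publicly available ultralytics/yolov5 PyTorch Hub model and OpenCV; it is a minimal illustration under those assumptions, not the exact project code.

```python
# Minimal sketch: run the pre-trained YOLOv5s model (80 MS COCO classes)
# on frames from a standard webcam. Assumes the ultralytics/yolov5
# PyTorch Hub model and OpenCV; thresholds and output handling are illustrative.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

cap = cv2.VideoCapture(0)  # default webcam
frame_count = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_count += 1

    # OpenCV delivers BGR; the hub model expects RGB.
    results = model(frame[:, :, ::-1])

    # results.xyxy[0] holds one row per detection:
    # [x_min, y_min, x_max, y_max, confidence, class_index]
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        label = model.names[int(cls)]
        print(frame_count, label, round(conf, 2), (x1, y1, x2, y2))

cap.release()
```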
The detected car and cup from the first figure are passed to RDFox together with a frame. Each object is given a confidence score, as well as information about its bounding box. If the bounding box of one object overlaps another, the first :isOverlapping the second; if an object's bounding box completely surrounds another, that object :isContaining the relatively smaller one. Each frame is also time-stamped and given an integer count from the start of the video stream. After that, we added a simple ontology to the graph to give the objects some basic classes. To build on this, one could add all sorts of meaning to the detected objects using the OWL 2 Web Ontology Language.
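The snippet below sketches how one frame's detections could be turned into Turtle facts for RDFox, including the pairwise checks behind :isOverlapping and :isContaining. Only those two spatial predicates come from the project description; the prefix, the IRI scheme, and properties such as :confidence, :inFrame and :frameCount are assumptions made for illustration.

```python
# Sketch: convert one frame's detections into Turtle triples.
# :isOverlapping and :isContaining follow the article; everything else
# in the vocabulary (prefix, :confidence, :inFrame, ...) is assumed.
from datetime import datetime, timezone

PREFIX = "@prefix : <https://example.com/detection#> .\n"

def boxes_overlap(a, b):
    """True if boxes a and b, given as (x1, y1, x2, y2), intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def box_contains(a, b):
    """True if box a completely surrounds box b."""
    return a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]

def frame_to_turtle(frame_id, detections):
    """detections: list of (label, confidence, (x1, y1, x2, y2)) tuples."""
    ts = datetime.now(timezone.utc).isoformat()
    lines = [PREFIX,
             f':frame{frame_id} a :Frame ; :frameCount {frame_id} ; :timestamp "{ts}" .']
    for i, (label, conf, box) in enumerate(detections):
        obj = f":frame{frame_id}_obj{i}"
        cls = label.title().replace(" ", "")  # e.g. "traffic light" -> "TrafficLight"
        lines.append(f"{obj} a :{cls} ; :confidence {conf:.2f} ; :inFrame :frame{frame_id} .")
        # Pairwise spatial relations between the detected objects.
        for j, (_, _, other) in enumerate(detections):
            if i == j:
                continue
            if box_contains(box, other):
                lines.append(f"{obj} :isContaining :frame{frame_id}_obj{j} .")
            elif boxes_overlap(box, other):
                lines.append(f"{obj} :isOverlapping :frame{frame_id}_obj{j} .")
    return "\n".join(lines)

# Example: a car and a cup detected in frame 1.
print(frame_to_turtle(1, [("car", 0.91, (50, 60, 400, 300)),
                          ("cup", 0.78, (120, 150, 180, 220))]))
```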
Our graph is growing. Next, we wanted to explore reasoning, so, using rules, we implemented a little made-up scenario. If a vehicle of any kind is detected in the graph, we mark it with a heads up: 🙂. If there is more than one vehicle in the graph, we can guess that there is a bit more traffic, so we mark this with a somewhat more concerned alert: 😯. Finally, if bounding boxes are overlapping, we guess that this is an immediate traffic situation, such as an overtake: 😱.
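As a rough illustration, rules for this scenario could be written along the following lines in RDFox's Datalog dialect, here kept in a Python string and written to a file that could then be loaded into RDFox (for example via its shell or REST API; check the RDFox documentation for the exact syntax and workflow of your version). Names such as :Vehicle, :alert and the three alert levels are assumptions; only :isOverlapping appears in the project description above.

```python
# Illustrative RDFox-style Datalog rules for the three alert levels.
# The rule syntax and vocabulary are assumptions to be checked against
# the RDFox documentation and your own ontology.
RULES = """
@prefix : <https://example.com/detection#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

# Any vehicle at all: mark it with a heads-up.
[?v, :alert, :HeadsUp] :- [?v, rdf:type, :Vehicle] .

# More than one distinct vehicle: raise the level to a concerned alert.
[?v1, :alert, :Concerned] :- [?v1, rdf:type, :Vehicle],
                             [?v2, rdf:type, :Vehicle],
                             FILTER(?v1 != ?v2) .

# Overlapping bounding boxes: treat this as an immediate traffic situation.
[?o1, :alert, :Immediate] :- [?o1, :isOverlapping, ?o2] .
"""

with open("traffic_alerts.dlog", "w") as f:
    f.write(RULES)
```

Because RDFox reasons incrementally over its in-memory store, alerts derived this way can appear and disappear as detections stream in and out of the graph, which is what made the demo feel responsive.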