
Shield AI Fundamentals: On Knowledge Representation

We’ve spoken extensively about intelligence and learning in artificial systems. Can you elaborate on the concept of knowledge?

Within the context of artificial intelligence, knowledge represents everything the robot needs to know in order to be intelligent. In essence, it is the overarching logical framework — the concepts and logical relationships — that the robot uses to make sense of what it sees in the world. It includes different types of environmental conditions, relationships between objects, object properties, how objects, conditions and relationships change over time, and how they might change as a consequence of the robot’s own actions.

How is knowledge shared among systems?

As is the case with people, knowledge for AI systems is largely contingent on logic and relationships. And just as knowledge is unique from person to person, it can differ from robot to robot. We’ve all engaged with individuals who think about the world differently than we do, individuals who use a different line of reasoning based on their unique life experiences. So it follows that how we store, interconnect, and relate knowledge from robot to robot is a challenge in its own right.

How we share knowledge among robotic systems — or relate knowledge from robot to robot — is the problem of knowledge representation. A knowledge representation can be roughly thought of as the language in which we say things and share information about the world. It’s how we express ideas, how we understand concepts, how we share information about the objects we perceive in the world around us, and so forth.

In the context of artificially intelligent systems, knowledge representations refer to the way robotic systems represent, store and share information such that they can utilize it in the future to complete complex tasks. It is the notion of how to decompose knowledge to form a knowledge base that is consistent across robots.
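To make the idea of decomposing knowledge into a knowledge base concrete, here is a minimal sketch in Python. The triple structure, the `KnowledgeBase` class, and the example facts (`door_1`, `room_a`, and so on) are all invented for illustration — the source does not describe any particular implementation.

```python
# A minimal sketch of a knowledge base as (subject, relation, object) triples.
# All names and facts here are illustrative, not any system's actual representation.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def add(self, subject, relation, obj):
        self.facts.add((subject, relation, obj))

    def query(self, subject=None, relation=None, obj=None):
        """Return all facts matching the given pattern; None acts as a wildcard."""
        return [
            (s, r, o)
            for (s, r, o) in self.facts
            if (subject is None or s == subject)
            and (relation is None or r == relation)
            and (obj is None or o == obj)
        ]

kb = KnowledgeBase()
kb.add("door_1", "is_a", "door")
kb.add("door_1", "state", "closed")
kb.add("room_a", "connects_to", "room_b")

# Everything the robot currently "knows" about door_1:
print(sorted(kb.query(subject="door_1")))
```

The appeal of a decomposition like this is that facts are stored in one consistent shape, so two robots that agree on the relation vocabulary can merge or exchange their knowledge bases directly.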

Engineering for knowledge representation seeks to decompose knowledge in such a way that the robot can logically reason about and relate all of the different things that exist in its knowledge base, and do so effectively within the context of other artificial intelligence algorithms. This is where we often introduce taxonomies and ontologies as a way to separate and disambiguate different categories of things in a sensible manner. For instance, if I were to talk about fruits versus vegetables, you have a sense of the difference. In some instances, however, the lines blur, as in the question, “Is a tomato a fruit or a vegetable?”
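A taxonomy of the kind described above can be sketched as a simple parent map, with category membership tested by walking up the chain. The entries, including the choice to file the tomato under “fruit,” are assumptions made for this example only — they are exactly the kind of prescriptive call a human designer has to make.

```python
# Illustrative taxonomy as a child -> parent map; entries are invented for this sketch.
taxonomy = {
    "apple": "fruit",
    "carrot": "vegetable",
    "fruit": "food",
    "vegetable": "food",
    "tomato": "fruit",  # botanically a fruit; culinary usage would say vegetable
}

def is_a(item, category):
    """Walk the parent chain to test category membership."""
    while item in taxonomy:
        item = taxonomy[item]
        if item == category:
            return True
    return False

print(is_a("apple", "food"))    # True: apple -> fruit -> food
print(is_a("tomato", "fruit"))  # True, but only under the choice encoded above
```

Note that the tomato ambiguity never disappears; the taxonomy merely forces one answer, which is why disambiguation is an engineering decision rather than a fact the robot can discover on its own.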

We’ve spoken before about human logic being imperfect at times. How does this factor into engineering for the purposes of knowledge representations?

In the early stages of developing knowledge representation in robotic systems, we sought to create a taxonomy of prescribed knowledge representations based on how we, as humans, understood the world around us. In more recent years, we have moved towards other training methods, particularly supervised and unsupervised learning. By leveraging deep learning strategies to generate learned knowledge representations, we can break the data down in a way that is more aligned with its context and nature.

This becomes particularly interesting as we consider unsupervised learning. In the case of unsupervised learning, the robot’s knowledge base will not necessarily be readily interpretable by a human. 

Let’s consider an example for clarity. Unsupervised learning methods perform well at categorizing large sets of data into different types, or sets of objects. Suppose we task a robot with identifying fruits within a set of apples, oranges and bananas using unsupervised techniques. To complete the task, rather than use the semantic names that we recognize — orange, apple, banana — the robot might instead numerically label the objects as object number one, two, three and so forth. In this case, the robot has successfully built a knowledge base of those objects, effective in that it cleanly and reasonably separates them. However, such a knowledge base would not be readily interpretable by a person who isn’t familiar with the numerical object naming. So this is one example of how the idea of explainable or understandable AI also arises in the discussion of knowledge representations.
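The fruit example above can be sketched with a toy k-means clustering. The feature vectors, the naive initialization, and the `[roundness, hue]` interpretation are all invented for illustration; the point is only that the robot’s output is numeric cluster labels, not the names a human would use.

```python
# A toy sketch of unsupervised grouping: the robot sees feature vectors,
# not names, and ends up with numeric labels rather than the words
# "apple", "orange", "banana". Features and init scheme are made up.

def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=10):
    centroids = points[:k]  # naive init: first k points as starting centroids
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = [min(range(k), key=lambda c: dist(p, centroids[c])) for p in points]
        # recompute each centroid as the mean of its assigned points
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return labels

# Invented [roundness, hue]-style features: two apples, two oranges, two bananas
fruits = [[0.9, 0.2], [0.88, 0.25], [0.92, 0.6], [0.9, 0.62], [0.2, 0.5], [0.25, 0.48]]
print(kmeans(fruits, k=3))  # numeric labels, e.g. object group 0, 1, 2 — not names
```

The clustering is “correct” in that like objects land in the same group, yet nothing in the output tells a human which group is the bananas — which is exactly the interpretability gap described above.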
