
Shield AI Fundamentals: On Resilient Intelligence

A conversation with Professor Nathan Michael, Shield AI’s Chief Technology Officer. This is a continuation of our conversation about Trust and Robotic Systems.

You are the director of the Resilient Intelligence Lab (RISLab) at Carnegie Mellon University. When choosing to name the lab, why did you use the term resilient?

The goal of the research done at RISLab is to improve the performance and reliability of artificially intelligent systems that operate in complex, real-world environments. One of the challenges that arises in the context of intelligence is that when a system learns relationships, it can make mistakes.

I say ‘intelligence’ rather than ‘artificial intelligence,’ because this is true for organic intelligent systems just as it is true for robotic systems and artificial intelligence. For instance, when we, as humans, learn cause and effect, from time to time we will learn the wrong mapping. Maybe we were given a limited amount of data, or maybe we just fundamentally did not understand the underlying relationship. Either way, when that input comes in, we conclude something incorrect: we expect one output but end up with a different one, because of some error in our learned mapping.
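
To make that failure mode concrete, here is a minimal, hypothetical Python sketch; the quadratic relationship, the three training points, and the linear model are invented for illustration and are not drawn from RISLab’s work. A mapping learned from a limited amount of data can look correct on the inputs it has seen and still give the wrong output on a new input.

```python
# Hypothetical sketch: learning the "wrong mapping" from limited data.
# The true relationship is quadratic, but with only a few samples drawn
# from a narrow range, a linear fit looks fine in-sample and fails on
# inputs the system has never seen.
import numpy as np

def true_relationship(x):
    return x ** 2  # the underlying cause-and-effect we are trying to learn

# Limited experience: three inputs clustered in a small region.
x_train = np.array([0.9, 1.0, 1.1])
y_train = true_relationship(x_train)

# Learn a linear mapping from that limited data.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

def learned_mapping(x):
    return slope * x + intercept

# Near the training data the learned mapping looks correct...
print("error near training data:", abs(learned_mapping(1.0) - true_relationship(1.0)))
# ...but on an input outside prior experience the expectation is wrong.
print("error on a new input:    ", abs(learned_mapping(5.0) - true_relationship(5.0)))
```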

So, if we recognize that intelligent systems make mistakes, we must build in an ability for the system to adapt and to overcome; otherwise, the system’s ability to operate will ultimately start to break down. This is where resilience becomes important. The concept of resiliency is rooted in psychology. In a psychological context, resilience refers to an ability to cope with adversity or trauma. It typically refers to an individual’s ability to adapt to negative circumstances and to mitigate their effects in order to protect themselves, to overcome, and to move on without sustaining further damage.

What does it mean for an AI system to be resilient?

We wish to create systems that have the ability to cope with challenges that arise and mitigate them, and to do so in a manner such that they are not only able to survive but to become stronger or better in the future. In order to be resilient, those systems must be able to understand what is wrong, figure out how to overcome those issues or challenges, and then retain what they have learned from overcoming those challenges for the future. And that last part is crucial, because the process of learning is difficult.


Consider how we teach children: we structure their education so that they learn in stages rather than presenting them with all the information upfront. This is done because there is only so much information that they can process and retain within a certain period of time. Trying to constantly learn everything, all the time, is an extremely taxing endeavor. It’s taxing on people, on organic intelligence, and it’s taxing on artificially intelligent systems, because they have to run these really challenging operations regularly, and that is computationally expensive. That computational expense can be likened to the cognitive burden or cognitive overload that humans may encounter while going through learning processes.

However, this constant learning is happening. The system is constantly receiving input signals, figuring out how they correspond to output signals and expected values, learning that mapping, and then continually adapting and retaining that mapping for the future. Constantly comparing what the system expects to perceive with what it actually perceives, learning to deal with or adapt to those differences, and then preserving and retaining that adaptation is how we actually create systems that can build up sufficient knowledge, expand that knowledge, and drive themselves to think for themselves.
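
A hedged sketch of that perceive, compare, adapt, and retain cycle is below. The linear model and the simple gradient-style update are illustrative assumptions, not the specific algorithms used at RISLab or Shield AI; the point is only the shape of the loop.

```python
# Illustrative sketch of the perceive -> compare -> adapt -> retain cycle.
# A linear model continually updates its parameters as observations arrive;
# the adapted parameters are what the system "retains" for the future.
import numpy as np

class OnlineLinearModel:
    """Toy mapping from inputs to outputs that adapts continually."""

    def __init__(self, n_features, learning_rate=0.1):
        self.weights = np.zeros(n_features)  # retained knowledge
        self.learning_rate = learning_rate

    def predict(self, x):
        # What the system expects to perceive for this input.
        return float(self.weights @ x)

    def update(self, x, observed):
        # Compare the expectation to what was actually perceived...
        error = observed - self.predict(x)
        # ...adapt the mapping to reduce that discrepancy...
        self.weights += self.learning_rate * error * x
        # ...and the updated weights are retained for future inputs.
        return error

# Stream of observations from a simple underlying relationship: y = 2*x0 - x1.
rng = np.random.default_rng(0)
model = OnlineLinearModel(n_features=2)
for _ in range(200):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - 1.0 * x[1]
    model.update(x, y)

print("learned weights:", model.weights)  # approaches [2, -1]
```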

Therefore, to pursue systems that are resilient and intelligent is to create a system that can introspect, adapt, and evolve. This offers the ability to create systems that can not only learn, but also improve upon that learning over time, and do so in a framework that is amenable to autonomy, amenable to algorithms, and amenable to execution.

Can you elaborate on what you mean by introspect, adapt, and evolve?

In order to create systems that are capable of operating within truly challenging domains, we must create systems that demonstrate psychological resiliency and embed that within the intelligence. The only way to do this is to create a system that can think for itself. And to create such a system, we need a framework that makes it possible to collect the data that allows the system to improve upon its own intelligence and its own resilience. That is the idea of self-directed learning and experience generation through autonomy.

Connecting that back to machine learning, each of those steps (introspection, adaptation, and evolvement) corresponds to an aspect of machine learning. Introspection refers to enabling systems to understand what is wrong. Adaptation refers to the system’s ability to overcome, to take what it knows is wrong and use it to figure out how to correct it. And evolvement refers to the system’s ability to build upon what it has learned to become more capable; it is the notion of knowledge acquisition, retention, and representation.

When these three concepts come together they create systems that are truly able to think for themselves. They come together to direct their own autonomy — to direct how they acquire data, how they process that data to learn from it, and how that learning can drive future learning and autonomous operation. They continuously expand their knowledge, and that is the concept of perpetual refinement.
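
As a structural sketch only (the class, its method names, and the toy one-parameter model are assumptions made for illustration, not an actual RISLab or Shield AI interface), the three steps can be read as stages of a single loop: detect a mismatch, correct the mapping, and retain what was learned.

```python
# Illustrative skeleton of the introspect / adapt / evolve loop.
class ResilientLoop:
    def __init__(self, gain=1.0, tolerance=0.1):
        self.gain = gain          # current learned mapping: output = gain * command
        self.tolerance = tolerance
        self.knowledge = []       # retained record of past corrections

    def introspect(self, expected, observed):
        # Understand what is wrong: detect a meaningful mismatch between
        # what the system expected and what it actually perceived.
        return abs(expected - observed) > self.tolerance

    def adapt(self, command, observed):
        # Overcome the issue: correct the mapping so the same command
        # would have produced the observed outcome.
        if command != 0:
            self.gain = observed / command

    def evolve(self, command, observed):
        # Build on what was learned: retain the experience so future
        # behavior starts from the corrected mapping.
        self.knowledge.append((command, observed, self.gain))

    def step(self, command, observed):
        expected = self.gain * command
        if self.introspect(expected, observed):
            self.adapt(command, observed)
            self.evolve(command, observed)
        return expected

# Example: the system believes gain = 1.0, but the true response is 0.5x.
loop = ResilientLoop()
loop.step(command=2.0, observed=1.0)  # mismatch detected, mapping corrected
print(loop.gain)                      # 0.5
print(len(loop.knowledge))            # 1 retained experience
```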

You’ve mentioned that the goal is to create a system that can “think for itself.” Why is that?

Fundamentally, it’s the idea that it would be extremely hard to design a system that is able to operate in all conditions and achieve great performance, because you always have to consider trade-offs. In order to design an optimal system, you must weigh a variety of factors and consider a wide spectrum of conditions that may arise. When dealing with real-world environments, there are complex, nuanced relationships that are not obvious. It simply exceeds the cognitive power of a human to map all of this complexity, track it, and optimize for all the different conditions encountered.

So if an engineer is trying to design a system that is able to perform at peak levels, and push beyond what a human can actually design, the only way to achieve that is to enable the system to think for itself and push its own boundaries. And so the solution is to endow the system with an ability to understand its performance and to expand and push against its boundary conditions. We need to create a system that can perpetuate its own improvement.
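
One hedged way to picture self-directed experience generation is the toy sketch below; the operating conditions and the performance scores are invented, and a real system would use a far richer model of its own performance. The idea is simply that the system itself decides where to gather its next experience, based on where it currently performs worst.

```python
# Illustrative sketch: the system monitors its own performance across
# conditions and directs new experience toward its weakest one.
performance_by_condition = {
    "open_field": 0.95,
    "cluttered_indoor": 0.72,
    "low_light": 0.61,
}

def next_condition_to_practice(scores):
    # Push against the boundary: collect more data where performance is lowest.
    return min(scores, key=scores.get)

print(next_condition_to_practice(performance_by_condition))  # "low_light"
```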

To truly create the types of reliable systems our customers require, we must create systems that can push forward and think for themselves in order to address their limitations within the complex environments in which they operate.

An important distinction to recognize as we discuss systems that can think for themselves is that the system’s ability to self-perpetuate its learning progress is bounded — the system can think for itself in terms of which actions it should take to optimize the completion of its task within a given set of parameters, but it is not capable of what we think of as higher-level cognition on par with humans.
