
Using AI to Amplify Human Ability

A conversation with Professor Nathan Michael, Shield AI’s Chief Technology Officer. This is a continuation of our conversation about Trust and Robotic Systems.

We’ve spoken extensively about human expectations for AI systems. Are there other examples of something humans expect AI systems to do that is not necessarily inherently built into the systems?

We, as humans, expect intelligent systems to understand us — we’ve become accustomed to it. We form an expectation that through our interactions the other party will understand and interpret our intents, and therefore work with us more effectively. This is the case as we engage with other people and as we engage with animals.

Think of your pet dog: when you engage with that dog, you train it and teach it. Eventually, that dog learns to understand you. The more you engage with your dog, the more it understands you and your mood, your behavior, your decisions — why you do what you do. It starts to interpret, anticipate, adapt, and engage in a more seamless manner.

The same expectation starts to emerge with artificially intelligent robots. We witness the robot accomplishing impressive, fantastic tasks autonomously and we come to recognize it as intelligent. So, in line with what we have become accustomed to with any intelligent counterparty, we expect that the more we engage with it, the more it will understand what we want. We expect that we should not have to correct the same action multiple times. But the reality is that unless that robot is learning about you, modeling your intent and anticipating what you want, you will continue to have to correct it. Unless the system is learning to understand the nuances of the individual, this kind of symbiotic relationship, in which the intelligent system and the person engage in a fluid, well-formed manner, cannot emerge.

Because it is something that we as humans have come to expect, we’re developing robots that can create models that allow them to intuit some of the user’s intentions. We’re doing it with single-robot and multi-robot systems. In the single-robot case, these models make it possible for humans to engage with the robot and to achieve much higher levels of performance with less effort. When the operator recognizes this, the operator starts to grow confident that the robot “gets” them — that the robot understands what it is that they want to achieve and is working with them to achieve it. A “team” concept evolves, rather than just the operator guiding the robot.

Can you elaborate on how you engineer a robot to model its user’s intentions?

In the background, the robot is constantly modeling the user. It’s modeling their intent, modeling how they approach different problems, modeling how they’re changing their decisions and thinking with respect to particular tasks or activities, and then it’s taking that information and using it to change how it interprets the direct input from the user. Essentially, the robot is modeling the user’s behavior over time and modifying its decision-making based on the context of the direction it receives and on prior experience.
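
To make that idea concrete, here is a minimal, hypothetical sketch in Python. None of the names, thresholds, or data structures below come from Shield AI’s systems; the sketch only illustrates how a running record of what an operator actually ended up doing in a given context could be used to reinterpret new commands.

```python
from collections import defaultdict

class OperatorIntentModel:
    """Toy sketch of online operator-intent modeling (illustrative only).

    The model keeps running counts of which goal the operator ultimately
    pursued in each context, then uses that history to reinterpret new
    commands: a raw command is kept unless experience strongly suggests
    the operator usually means something else in this context.
    """

    def __init__(self, confidence_threshold=0.7):
        # context -> goal -> how often the operator ended up pursuing it
        self.counts = defaultdict(lambda: defaultdict(int))
        self.confidence_threshold = confidence_threshold

    def observe(self, context, chosen_goal):
        """Record what the operator actually did in a given context."""
        self.counts[context][chosen_goal] += 1

    def interpret(self, context, commanded_goal):
        """Return the goal the operator most likely intends.

        Falls back to the literal command when history is thin or ambiguous.
        """
        history = self.counts[context]
        total = sum(history.values())
        if total == 0:
            return commanded_goal
        best_goal, best_count = max(history.items(), key=lambda kv: kv[1])
        if best_count / total >= self.confidence_threshold and best_goal != commanded_goal:
            return best_goal  # experience suggests the operator usually means this
        return commanded_goal


# Example (hypothetical contexts and tasks): after repeated corrections, the
# model learns that in the "low-battery" context this operator almost always
# wants "return-to-base", and interprets a new command accordingly.
model = OperatorIntentModel()
for _ in range(8):
    model.observe("low-battery", "return-to-base")
model.observe("low-battery", "continue-survey")
print(model.interpret("low-battery", "continue-survey"))  # -> "return-to-base"
```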

The robot is also able to recognize when a user is improving over time, or if the user is performing suboptimally. For instance, the robot may recognize that the user is having a bad day, and so it will compensate for the user in order to enable better interaction. In practice, what may happen is that the robot recognizes the user’s choices are changing so rapidly or inconsistently that it can intuit the user is performing suboptimally. So it will adapt its behavior in order to augment, extend, and support the user, making sure that he or she can engage effectively with the system.
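
As a rough illustration of that “bad day” cue, the following hypothetical sketch flags an operator whose recent commands reverse unusually often. The window size, threshold, and command names are assumptions made for the example, not details of the actual system.

```python
from collections import deque

class ConsistencyMonitor:
    """Illustrative sketch: flag when an operator's recent choices change
    unusually rapidly or inconsistently, one possible cue that the operator
    may be performing suboptimally and could use more support."""

    def __init__(self, window=10, switch_rate_threshold=0.6):
        self.recent = deque(maxlen=window)  # most recent commands
        self.switch_rate_threshold = switch_rate_threshold

    def record(self, command):
        self.recent.append(command)

    def operator_seems_inconsistent(self):
        """True when the operator switches commands in most consecutive steps."""
        if len(self.recent) < 2:
            return False
        pairs = list(self.recent)
        switches = sum(1 for a, b in zip(pairs, pairs[1:]) if a != b)
        return switches / (len(pairs) - 1) >= self.switch_rate_threshold


# Example with made-up commands: frequent reversals trigger the flag.
monitor = ConsistencyMonitor()
for cmd in ["go-north", "go-south", "go-north", "hold", "go-south", "go-north"]:
    monitor.record(cmd)
print(monitor.operator_seems_inconsistent())  # -> True
```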

Why is this important?

This becomes particularly important as the human is operating large numbers of robots, because it’s more likely that the individual cannot perceive and understand every complex occurrence that’s happening. It’s not uncommon for operators to make mistakes when they’re working with 30 robots that are flying simultaneously and performing incredibly complex actions. They’ll ask the system to perform a task that may not be what they’re actually trying to achieve. And so the system itself can engage in intent modeling of the user to augment the operator’s performance. Larger numbers of robots can work together to figure out the intent of the desired task and can anticipate, adapt, and mitigate in order to overcome user error, such as problematic, unsafe, or suboptimal requests. This is all done in real time, and it is performed not through any great insight by the system into what the operator wants in the moment, but rather through insight into how the operator has engaged in the past.
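
A hedged sketch of what screening a single request might look like, under the assumption that the system keeps only a simple record of the operator’s past choices plus a caller-supplied safety check; the function, context, and task names are hypothetical, not Shield AI’s implementation.

```python
def screen_request(commanded_task, context, history, is_safe):
    """Hypothetical sketch of real-time request screening: the system needs no
    special insight into what the operator wants in the moment, only a record
    of how the operator has engaged in the past.

    Assumptions (not from the source): `history` maps context -> the task the
    operator has most often pursued there; `is_safe` is a caller-supplied
    predicate over (task, context)."""
    if is_safe(commanded_task, context):
        return commanded_task
    # The literal request is problematic; fall back to the operator's usual
    # choice in this context if that choice is safe, otherwise hold position.
    usual = history.get(context)
    if usual is not None and is_safe(usual, context):
        return usual
    return "hold-position"


# Example: an operator juggling many vehicles issues a descent through an
# altitude floor; the system substitutes the operator's usual safe behavior.
history = {"urban-corridor": "maintain-altitude"}
is_safe = lambda task, ctx: not (task == "descend-to-30m" and ctx == "urban-corridor")
print(screen_request("descend-to-30m", "urban-corridor", history, is_safe))
# -> "maintain-altitude"
```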

It’s interesting, because as humans interact with these systems, turning this capability on and off makes a tremendous difference. It’s the difference between a person walking up to the system and having to learn and adapt to that system and its nuances, versus the system learning and adapting to the experience of the individual. It enables someone who has never worked with our systems to engage with them and very quickly perform as if they were an expert. That ability to amplify the expertise of the operator, to make them more proficient, to make them more capable, is another mechanism by which trust is earned.

How does this relate to the conversations we have had surrounding trust and artificial intelligence?

Connecting this back to the discussion of trust, AI, and resilient intelligence: this capability is something that improves the more it is used. We’re building up models of humans, we’re learning about the way they engage with the system, and we’re adapting strategy as that information and knowledge builds over time. As this continues to compound, the AI system becomes better tuned to the needs of its human operator. This is a fundamental factor in enabling trust in AI and in robotics, because it allows the human to know, based on experience, that the system is working with them. It establishes trust beyond simple system reliability: trust that we can count on the system not only to achieve the objectives that are set out, but also to deal with the complexity and nuance it encounters along the way, guided by the individual operating the system.

Want To Learn More?

Get in touch with the Shield AI team today.