
Trust and Robotic Systems

A conversation with Professor Nathan Michael, Shield AI’s Chief Technology Officer. This is a continuation of our conversation on Trust and Artificial Intelligence.

What does trust mean in the context of robotic systems?

For a robotic system, trust is about the system engaging as expected, in a consistent manner, time and time again. The more these systems are perceived to reliably work as expected, the more we build trust.

This concept is not unique to robotic systems. The more we use any engineered system and the more it works as expected, all the time, without fail, the more we learn to trust it. Conversely, if the system starts behaving erratically or failing unexpectedly, we lose trust.

Is there a difference between trust in robotics and trust in AI?

The notion of trust is quite similar for a robotic system and an artificially intelligent system; in both cases it comes down to the alignment between expected performance and actual performance. Trust is built when the system takes actions that align with expectations.

In terms of engineering trust, robotic systems bridge the physical and digital worlds through hardware and software (as compared to strictly software-based AI systems). Thus, trust extends beyond how the software behaves to also consider how the mechanical and electrical systems behave and how those systems integrate. You have to consider how the manner in which the system thinks changes the way the hardware operates, and how that impacts the overall reliability of the system. Ultimately, that measure of reliability is what will inspire trust or cause one to lose confidence.

How is trust engineered in robotics systems?

Given that we trust robotic systems that engage in a manner aligning with expectations, and that do so consistently and reliably, we engineer trust in these systems by ensuring that the actions they take are interpretable, understandable, and aligned with expectations, and that the system can execute those actions reliably, meaning consistently and without failure.

We do that with algorithms. Algorithms direct the robot to do the correct thing and to do it consistently. We use algorithms to engineer systems that not only execute the desired behaviors, but do so in a manner that is consistent, stable, and safe, both for the system and for any people or objects within the operating environment.
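As a rough illustration of this consistency-based notion of trust, here is a minimal Python sketch; it is not Shield AI's implementation, and the TrustMonitor name, the scalar behavior signal, and the fixed tolerance are all illustrative assumptions. Reliability is simply the running fraction of control steps in which the executed action stayed within tolerance of the expected one.

```python
# A minimal sketch (illustrative only): trust is accumulated by checking,
# at every control step, whether the action the system actually executed
# matched the action that was expected.

class TrustMonitor:
    """Tracks reliability as the running rate of expectation-aligned behavior."""

    def __init__(self, tolerance: float = 0.05):
        self.tolerance = tolerance   # how far actual may deviate from expected
        self.steps = 0
        self.consistent_steps = 0

    def observe(self, expected: float, actual: float) -> None:
        """Record one control step; count it as consistent if the deviation
        between expected and actual behavior stays within tolerance."""
        self.steps += 1
        if abs(expected - actual) <= self.tolerance:
            self.consistent_steps += 1

    @property
    def reliability(self) -> float:
        """Fraction of steps in which the system behaved as expected."""
        return self.consistent_steps / self.steps if self.steps else 1.0


monitor = TrustMonitor(tolerance=0.05)
monitor.observe(expected=1.00, actual=1.02)  # behaved as expected
monitor.observe(expected=1.00, actual=1.40)  # deviated: erodes reliability
print(f"reliability: {monitor.reliability:.2f}")  # -> 0.50
```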

In terms of multi-robot collaboration, do the robots trust one another?

Yes. Multi-robot systems learn to trust each other in the same way that we, as humans, learn to trust robotic systems: through the perception of consistent operation that aligns with expectations. When multiple robots collaborate, information is shared across systems to enable each robot to better understand how the other robots are perceiving and engaging with the world. Each robot considers what the other robots are seeing and doing in order to extend its own understanding (for example, extending its local environment model with the additional information). However, a robot does not implicitly trust the information distributed by other robots, as it is entirely possible that those robots are making mistakes; they may have problems with their onboard sensing or actuation, or other factors may degrade what they share. As such, each robot establishes a notion of trust in every other robot based on the consistency of the distributed data with its own local expectations.

Trust within multi-robot systems manifests algorithmically as something that can roughly be thought of as weights that scale the impact or influence of information received from each robot. If inconsistencies arise, each robot will locally adjust the relevant weights to modify the impact of each observation on the overall model, based on the level of trustworthiness, or consistency in performance, observed. If a robot is determined to be untrustworthy, the other robots will change the way they account for and interpret the data coming from that robot. And if that robot is sufficiently inconsistent, it may be disregarded entirely.
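To make the weighting idea concrete, here is a minimal Python sketch; it is not Shield AI's actual algorithm, and the PeerTrust class, the exponential consistency score, and all parameter values are illustrative assumptions. Each robot nudges a per-peer weight toward a consistency score computed from the discrepancy between what the peer shared and what the robot locally expected, and drops peers whose weight falls below a threshold.

```python
import math

class PeerTrust:
    """Per-peer trust weights for fusing observations shared by other robots.

    A toy sketch: peers start fully trusted; each weight is nudged toward a
    consistency score in [0, 1], and peers below a threshold are disregarded.
    """

    def __init__(self, peers, learning_rate=0.2, drop_threshold=0.1):
        self.weights = {p: 1.0 for p in peers}  # start by trusting every peer equally
        self.learning_rate = learning_rate      # how quickly weights adapt
        self.drop_threshold = drop_threshold    # below this, a peer is ignored

    def update(self, peer, shared_value, local_expectation, scale=1.0):
        """Nudge a peer's weight toward its current consistency score: near 1.0
        when the shared value matches the local expectation, decaying toward 0
        as the discrepancy grows relative to `scale`."""
        consistency = math.exp(-abs(shared_value - local_expectation) / scale)
        w = self.weights[peer]
        self.weights[peer] = (1.0 - self.learning_rate) * w + self.learning_rate * consistency

    def fuse(self, local_value, shared_values):
        """Trust-weighted average of the local estimate and peers' shared values;
        sufficiently untrusted peers are excluded from the estimate entirely."""
        total, weight_sum = local_value, 1.0  # the local observation gets unit weight
        for peer, value in shared_values.items():
            w = self.weights[peer]
            if w < self.drop_threshold:
                continue  # peer deemed untrustworthy: its data is disregarded
            total += w * value
            weight_sum += w
        return total / weight_sum


# Repeated inconsistency erodes a peer's weight; repeated agreement preserves it.
trust = PeerTrust(peers=["robot_b", "robot_c"])
for _ in range(20):
    trust.update("robot_b", shared_value=5.1, local_expectation=5.0)  # consistent
    trust.update("robot_c", shared_value=9.0, local_expectation=5.0)  # inconsistent
estimate = trust.fuse(local_value=5.0, shared_values={"robot_b": 5.1, "robot_c": 9.0})
print(trust.weights)  # robot_b stays near 1.0; robot_c decays below the threshold
print(estimate)       # close to 5.05: robot_c no longer influences the fused estimate
```

The design choice here mirrors the answer above: inconsistency does not trigger an immediate verdict, but gradually reduces a peer's influence, and only sustained inconsistency causes a robot's data to be disregarded entirely.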
