Chief Technology Officer
Nathan Michael is Shield AI's Chief Technology Officer and an Associate Research Professor in the Robotics Institute of Carnegie Mellon University (CMU). At CMU, Professor Michael is the Director of the Resilient Intelligent Systems Lab, a research lab dedicated to improving the performance and reliability of artificially intelligent and autonomous systems that operate in challenging, real-world and GPS-denied environments. Nathan has authored over 150 publications on control, perception, and cognition for artificially intelligent single- and multi-robot systems, for which he has been a nominee or recipient of nine best paper awards (ICRA, RSS, DARS, CASE, SSRR). In 2012, following the Fukushima Daiichi nuclear disaster in Japan, Nathan was among a team of researchers who developed a two-robot system to assist in the disaster response. He was recognized for this work in 2014, receiving the Popular Mechanics Breakthrough Award and the Robotics Society of Japan Best Paper Award. Over his decades-long research career, Nathan has led research programs supported by ARL, AFRL, DARPA, DOE, DTRA, NASA, NSF, ONR, and industry.
What is Coordinated Exploration?
What is coordinated exploration? In the context of Shield AI, coordinated exploration refers to deploying multi-robot systems to explore an environment. As they move through and navigate around that environment, the robots collectively develop a structural model of it.
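To make the idea concrete, here is a minimal sketch (in Python, with NumPy) of how several robots' local maps might be fused into one shared structural model. The log-odds occupancy grids, the `fuse_maps` helper, and the grid sizes are illustrative assumptions, not Shield AI's implementation.

```python
# A minimal sketch of fusing several robots' local maps into one shared
# structural model. Cell values are log-odds of occupancy; summing the
# per-robot evidence is the standard independent-sensor fusion rule.
# Grid sizes and update values here are invented for illustration.
import numpy as np

def fuse_maps(local_maps):
    """Fuse per-robot log-odds occupancy grids into one shared grid.

    Each robot contributes the evidence it has gathered; cells no robot
    has observed stay at 0 (probability 0.5, i.e., unknown).
    """
    return np.sum(local_maps, axis=0)

def to_probability(log_odds):
    """Convert log-odds back to occupancy probability for inspection."""
    return 1.0 / (1.0 + np.exp(-log_odds))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three robots, each with a 20x20 local grid; each has observed a
    # different patch of the environment (nonzero log-odds evidence).
    maps = np.zeros((3, 20, 20))
    for m in maps:
        r, c = rng.integers(0, 10, size=2)
        m[r:r + 10, c:c + 10] = rng.normal(0.0, 2.0, size=(10, 10))

    shared = fuse_maps(maps)
    known = np.mean(np.abs(shared) > 0.1)  # fraction of cells with evidence
    print(f"shared map covers {known:.0%} of the grid")
```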
How Do Robots Learn Through Exploration?
How does learning appear within the context of coordinated exploration? In exploration, learning emerges as the system operates. The more the system operates, the more experience it acquires about how the actions it takes, given the appearance and nature of the environment, affect the amount and quality of information it can gather. So the more the system engages, the better sense it has of how future actions will improve its performance.
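One toy way to picture this experience-driven improvement is a running estimate of how much information each kind of action has yielded so far, with the system mostly preferring the historically best action. The sketch below is a hypothetical illustration only; the action names, the `ExperienceModel` class, and all numbers are invented, and real systems learn far richer models.

```python
# A toy sketch of learning from operating experience: keep a running
# estimate of the information each action type has yielded, and prefer
# actions with higher estimated yield (with occasional exploration).
import random
from collections import defaultdict

class ExperienceModel:
    def __init__(self):
        self.mean_gain = defaultdict(float)  # estimated info gain per action
        self.count = defaultdict(int)

    def update(self, action, observed_gain):
        """Fold one more experience into the running average."""
        self.count[action] += 1
        n = self.count[action]
        self.mean_gain[action] += (observed_gain - self.mean_gain[action]) / n

    def choose(self, actions, explore_prob=0.1):
        """Mostly pick the historically best action; occasionally explore."""
        if random.random() < explore_prob:
            return random.choice(actions)
        return max(actions, key=lambda a: self.mean_gain[a])

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical true yields: scanning a doorway reveals more than a wall.
    true_gain = {"scan_doorway": 0.8, "scan_corridor": 0.5, "scan_wall": 0.1}
    model = ExperienceModel()
    actions = list(true_gain)
    for _ in range(500):
        a = model.choose(actions)
        model.update(a, random.gauss(true_gain[a], 0.2))
    print({a: round(g, 2) for a, g in model.mean_gain.items()})
```

After a few hundred trials the estimates approach the true yields, which is the point of the analogy: the more the system operates, the better it predicts which actions will pay off.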
How Do Robots Communicate with Each Other?
In multi-robot systems, how do we think about coordination -- how robots communicate with each other and learn together? When we talk about multi-robot exploration and teams of robots working together, we're thinking about how individual robots make decisions and how those decisions impact the other systems. As these coordinated teams of robots deploy and explore an environment, some robots will be able to explore it better than others: they will acquire more information, more efficiently.
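As a hedged sketch of one simple coordination mechanism consistent with this idea: robots share candidate exploration goals and greedily claim the goal they can reach most cheaply, so no two robots chase the same region. The `allocate_goals` function, the positions, and the straight-line cost are all assumptions made for illustration.

```python
# A minimal sketch of coordination by shared goal allocation: repeatedly
# hand out the cheapest remaining (robot, goal) pairing so the team does
# not duplicate work. Positions and costs are made-up illustrations.
import math

def allocate_goals(robots, goals):
    """Greedy assignment: repeatedly claim the cheapest (robot, goal) pair."""
    assignment = {}
    free_robots = dict(robots)   # name -> (x, y)
    free_goals = dict(goals)     # name -> (x, y)
    while free_robots and free_goals:
        r, g = min(
            ((r, g) for r in free_robots for g in free_goals),
            key=lambda pair: math.dist(free_robots[pair[0]], free_goals[pair[1]]),
        )
        assignment[r] = g
        del free_robots[r]
        del free_goals[g]
    return assignment

if __name__ == "__main__":
    robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
    goals = {"west_room": (1.0, 2.0), "east_room": (9.0, 3.0)}
    print(allocate_goals(robots, goals))  # {'r1': 'west_room', 'r2': 'east_room'}
```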
What is Multi-Robot Exploration?
What do we mean by multi-robot exploration? In our context, exploration for a single-robot system means uncertainty reduction in an unknown environment performed by a single agent: the system decides where it should go in order to acquire information and learn about the environment.
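As a rough illustration of "deciding where to go to acquire information," the sketch below scores candidate viewpoints by the entropy of the map cells their sensor footprint would cover, and picks the best one. The grid contents, sensor radius, and candidate list are invented for the example.

```python
# A compact sketch of choosing where to go to acquire information: score
# each candidate viewpoint by the entropy of the cells its circular sensor
# footprint would cover, and go to the highest-scoring one.
import numpy as np

def cell_entropy(p):
    """Shannon entropy of a Bernoulli occupancy probability, in bits."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_gain(grid, center, radius=3):
    """Sum of entropy inside the sensor footprint: what a visit could resolve."""
    r0, c0 = center
    rows, cols = np.ogrid[:grid.shape[0], :grid.shape[1]]
    footprint = (rows - r0) ** 2 + (cols - c0) ** 2 <= radius ** 2
    return cell_entropy(grid[footprint]).sum()

if __name__ == "__main__":
    grid = np.full((30, 30), 0.5)   # everything unknown (p = 0.5)
    grid[:15, :] = 0.05             # upper half already mapped as free
    candidates = [(5, 15), (22, 15), (28, 28)]
    best = max(candidates, key=lambda c: expected_gain(grid, c))
    print("go to", best)            # picks a viewpoint in unknown space
```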
How Artificial Intelligence Manifests Through Exploration
In reference to robotic systems, what do we mean by exploration? Typically, when we talk about exploration with an autonomous robotic system for the types of scenarios that we consider, we're talking about deploying that robotic system into environments that are unknown. The goal of this deployment and exploration is to enable the individual robot to move through that environment, acquire information, and reduce uncertainty about the environment as it goes.
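One common way to operationalize "move through the environment and reduce uncertainty" is frontier-based exploration: find mapped-free cells that border unknown space, then navigate toward them. The following sketch only detects such frontiers; the thresholds and grid values are illustrative assumptions, not a statement of how Shield AI's systems work.

```python
# A sketch of frontier detection on a probabilistic occupancy grid: a
# frontier is a known-free cell with at least one unknown 4-neighbor.
# Thresholds and grid contents are illustrative.
import numpy as np

def find_frontiers(prob_grid, free_thresh=0.2, unknown=(0.4, 0.6)):
    """Return (row, col) cells that are free but border unknown space."""
    free = prob_grid < free_thresh
    unk = (prob_grid > unknown[0]) & (prob_grid < unknown[1])
    # A free cell is a frontier if any 4-neighbor is unknown.
    neighbor_unknown = np.zeros_like(unk)
    neighbor_unknown[1:, :] |= unk[:-1, :]
    neighbor_unknown[:-1, :] |= unk[1:, :]
    neighbor_unknown[:, 1:] |= unk[:, :-1]
    neighbor_unknown[:, :-1] |= unk[:, 1:]
    return np.argwhere(free & neighbor_unknown)

if __name__ == "__main__":
    grid = np.full((10, 10), 0.5)   # unknown everywhere
    grid[:, :4] = 0.05              # left strip has been mapped free
    frontiers = find_frontiers(grid)
    print(f"{len(frontiers)} frontier cells, e.g., {frontiers[0]}")
```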
Shield AI Fundamentals: On Resilient Intelligence
You are the director of the Resilient Intelligent Systems Lab (RISLab) at Carnegie Mellon University. When choosing a name for the lab, why did you use the term resilient? The goal of the research done at RISLab is to improve the performance and reliability of artificially intelligent systems that operate in complex, real-world environments. One of the challenges that arises in the context of intelligence is that when a system learns relationships, it can make mistakes.
The Relationship Between Intelligence & Learning
How do you define intelligence in the context of robotic systems? Within the context of organic systems -- humans, animals, plants -- there are more formal definitions of what we mean by intelligence. Organic intelligence refers to mental acuteness, the ability to learn, to understand and deal with new situations, and the ability to apply knowledge to think abstractly or to manipulate one’s environment.
Using AI to Amplify Human Ability
We’ve spoken extensively about human expectations for AI systems. Are there other examples of something humans expect AI systems to do that is not necessarily inherently built into the systems? We, as humans, expect intelligent systems to understand us -- we’ve become accustomed to it.
On Humans Interpreting AI and Robot Behavior
How do humans factor into trust of robotic systems? Humans trust engineered systems that adhere to performance expectations. If a system works as expected, we tend to trust it. Interestingly, if the system works as designed but in a manner that does not align with expectations, we will tend to distrust the system (that is, until we better understand how the system is designed to work). Robotic systems are viewed similarly.
Trust and Robotic Systems
What does trust mean in the context of robotic systems? For a robotic system, trust is about the system engaging as expected, in a consistent manner, time and time again. The more these systems are perceived to reliably work as expected, the more we build trust. This concept is not unique to robotic systems: the more we use any engineered system, and the more it works as expected, all the time, without fail, the more we learn to trust it. Conversely, if the system starts behaving erratically or failing unexpectedly, we lose trust.
The Role of Trust in the Evolution of AI
How does trust play into the system’s ability to evolve and adapt to what it is learning? I can best respond with an analogy. Think about a child: as we teach them, we are teaching them to engage with the world, to learn about their surroundings, to learn the implications of the actions they take and the decisions they make, and to understand how those shape their ability to interact with the world and everything in it.
Trust and Artificial Intelligence
How do you think of trust as it pertains to the operation of an AI system? Trust in an AI system is similar to how you trust that when you turn on your car while it’s in park, the car won’t move. If you turned on your car and it suddenly started lunging forward, your trust in that engineered system would go down substantially.
Shield AI Fundamentals: On Progress in Swarming
Is the idea of swarming decades away? How do you answer the question of when swarming will be possible in the real world? There are different levels of what is possible, from possible in the lab to possible in the field as reliable products.
Shield AI Fundamentals: On Multi-Robot Collaboration
We’ve heard many terms referring to collaboration between robotic systems. Can you elaborate on which terms exist and what each means? There’s collaboration, coordination, teaming, and swarming. Collaboration is when systems work together to achieve some common goal -- working together to move an object from point A to point B, or to build a model of the environment.
What Happens When Robots Work Together?
You have dedicated your career to researching collective intelligence. How do you think about teams of robots working together? Where does that begin and how does it work? We start with a single robot. We move to small teams of robots working together. And at the same time, we develop the frameworks that are required to enable concurrent collective intelligence.
Meet Our Leaders: A Conversation with Nathan Michael
How did research concerning multi-agent and multi-robot collaboration become one of the central thrusts of your academic career? I began working on multi-robot collaboration and coordination during my Ph.D. Working with an advisor, I focused on teams of robots and how we design feedback control strategies to drive a team of robots to follow a particular path or work together in a certain way, and in a manner that is scalable.