We’ve heard many terms referring to collaboration between robotic systems. Can you elaborate on which terms exist and what each means?
There’s collaboration, coordination, teaming and swarming.
Collaboration is when systems work together to achieve some common goal, such as moving an object from point A to point B or building a model of the environment.
Coordination is simply when robots make sure that what they are doing is in sync with the objectives of other robots. It often arises through distributed algorithms that drive the group toward some coordinated end effect. You and I might collaborate to move a box; we’d certainly coordinate as we move that box, but we might also coordinate as you explore one building and I explore another, so there are subtle differences in how those terms are used.
Teaming is simply another term for coordination and collaboration within and across groups of robots and potentially with people. Often the term “teaming” is used to highlight the latter case where robots are coordinating and collaborating with people (e.g., a robot-person team versus a coordinated group of robots).
Swarming is a specialization of coordination that emphasizes a large number of robots. Consequently, swarming also implies a slightly different class of robot behaviors: the more robots you have, the less sophisticated the individual behaviors typically are, given the complexities that arise from operating at that scale.
Of those four terms, collaboration is when the robots are really, truly working together to achieve an end effect. Coordination is more about algorithmic alignment during operation. Teaming tends to be used, from a jargon perspective, when robots are engaging or interacting with humans, and swarming tends to indicate teams or groups of robots of non-trivial size.
Thinking about what we are building at Shield AI, how do you classify that and what is the first step?
We’re doing all of it.
First, we are developing systems that collaborate. Because they work together, they’re able to achieve a more significant end effect than they could working individually. They are collaborating not just with each other, but also with the humans and operators who engage with these systems. That’s collaboration and, from a jargon perspective, what we would also call teaming.
As those robots collaborate, they will coordinate to make sure they’re doing the right things at the right time, making the right decisions based on the observations of each robot. That coordination yields the collaborative behavior that achieves this collective effect.
We’re looking not just at one small group or a few small groups, but at 10, 20, 30, 40 or even higher numbers of robots, and we’re doing work with 50-plus robots in the lab now. So we’re thinking about collaborative teams of intelligent robots that coordinate at scales approaching, if not already reaching, that of swarms, and that engage and interact with humans, in a teaming sense, in order to achieve an end effect.
How do you go from one to many robots? What is the infrastructure required?
A single robot is a little more straightforward: we worry about questions of perception, cognition and action. How does the robot perceive the world? How does it think about the world it is engaging in and about its own objectives? And then, how does it act to achieve the desired end effect?
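To make that perceive-think-act loop concrete, here is a minimal sketch; every name and number in it is a hypothetical illustration rather than a description of any particular robot’s software:

```python
# A toy perceive-think-act loop for a single robot.
# All class and function names here are hypothetical illustrations,
# not any particular robot's real software stack.

from dataclasses import dataclass

@dataclass
class Observation:
    range_to_obstacle: float  # meters to the nearest obstacle ahead

def perceive(sensor_reading: float) -> Observation:
    """Perception: turn a raw sensor value into a structured observation."""
    return Observation(range_to_obstacle=sensor_reading)

def decide(obs: Observation, goal_distance: float) -> str:
    """Cognition: reason about the observation and the robot's objective."""
    if obs.range_to_obstacle < 0.5:
        return "stop"
    return "advance" if goal_distance > 0 else "hold"

def act(command: str) -> None:
    """Action: execute the chosen behavior (here, just print it)."""
    print(f"executing: {command}")

# One pass of the loop with made-up numbers.
obs = perceive(sensor_reading=2.3)
act(decide(obs, goal_distance=10.0))
```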
As we move to multiple robots, we revisit each of those areas of perception, cognition and action, and consider how individual robots work together as multiple systems. The first step is to achieve a capable individual system; from there we move to distributed perception, distributed cognition and distributed action – which is often achieved through coordination and collaboration.
Distributed perception refers to enabling the team of robots to perceive the world. This means enabling them both to understand where they are in the world – which is what we call state estimation – and to understand the appearance and structure of the world around them as they operate within it and as it evolves – which is what we call mapping. Each individual robot is always estimating its own state and figuring out what the world looks like around it – that is state estimation and mapping.
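As a toy illustration of those two pieces, the sketch below tracks the robot’s own pose by dead reckoning (state estimation) and accumulates sensed obstacle points into a simple world model (mapping); it is an assumption-laden stand-in, not a real SLAM system:

```python
# State estimation (tracking the robot's own pose) and mapping (building a
# model of the world), in toy form. Dead reckoning and a point map stand in
# for real estimators; everything here is illustrative only.

import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float  # heading in radians

def integrate_odometry(pose: Pose, forward: float, turn: float) -> Pose:
    """State estimation (dead reckoning): update the pose from commanded motion."""
    theta = pose.theta + turn
    return Pose(pose.x + forward * math.cos(theta),
                pose.y + forward * math.sin(theta),
                theta)

def add_to_map(world_map: set, pose: Pose, hit_range: float) -> None:
    """Mapping: record where a range sensor saw an obstacle, in the world frame."""
    ox = pose.x + hit_range * math.cos(pose.theta)
    oy = pose.y + hit_range * math.sin(pose.theta)
    world_map.add((round(ox, 1), round(oy, 1)))

pose = Pose(0.0, 0.0, 0.0)
world_map: set = set()
for forward, turn, hit in [(1.0, 0.0, 3.0), (1.0, 0.1, 2.5)]:
    pose = integrate_odometry(pose, forward, turn)
    add_to_map(world_map, pose, hit)
print(pose, world_map)
```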
From there, distributed perception is where the robots actually share what they are learning about themselves and the world around them in order to achieve a consistent model across the team. The most important aspect of that is achieving a consistent relative transform, so that if one robot is figuring out where it is in the world, and another robot is figuring out where it is in the world, they’re able to share that information and work out their relative spatial relationship.
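A minimal 2D sketch of that relative-transform idea follows; it assumes both robots report poses in a shared world frame, and the names are illustrative only:

```python
# If robot A and robot B each estimate their pose in the same world frame,
# B's pose as seen from A is the composition inverse(T_a) * T_b.
# Plain 2D math only; names are illustrative.

import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    theta: float  # heading in radians

def relative_pose(a: Pose2D, b: Pose2D) -> Pose2D:
    """Return b expressed in a's frame: T_rel = inverse(T_a) * T_b."""
    dx, dy = b.x - a.x, b.y - a.y
    cos_a, sin_a = math.cos(a.theta), math.sin(a.theta)
    return Pose2D(
        x=cos_a * dx + sin_a * dy,    # rotate the world-frame offset into a's frame
        y=-sin_a * dx + cos_a * dy,
        theta=b.theta - a.theta,
    )

# Robot A at the origin facing +x; robot B 2 m ahead and 1 m to the left.
robot_a = Pose2D(0.0, 0.0, 0.0)
robot_b = Pose2D(2.0, 1.0, math.pi / 2)
print(relative_pose(robot_a, robot_b))  # Pose2D(x=2.0, y=1.0, theta=1.57...)
```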