In 1950, pioneering computer scientist Alan Turing proposed a test that a machine could pass if it behaved indistinguishably from a human. In the test, an interrogator exchanges messages with two players in another room: one of the players is a machine, the other is human. The interrogator has to determine which of the two players is human. The machine is considered to have human-level intelligence, and to have passed the test, if the interrogator consistently fails to identify the human.
The researchers adapted the Turing test to determine how a given system – not necessarily a human – works. Dr Roderich Gross from the Department of Automatic Control and Systems Engineering at the University of Sheffield explained that the team put a swarm of robots under surveillance and tried to determine which rules caused their movements. To help with this, a second swarm of learning robots was also put under surveillance. Recorded motion data from all the robots was then shown to interrogators.
However, the interrogators in this study were not humans but computer programs that learn by themselves. Their task was to distinguish between robots from either swarm. They were rewarded for correctly categorising motion data from the learning swarm as counterfeit, or motion data from the original swarm as genuine. Conversely, a learning robot received a reward if it succeeded in fooling an interrogator into accepting its motion data as genuine.
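The reward scheme described above can be sketched as a toy scoring function. Everything here is illustrative rather than taken from the study: the one-dimensional "system" whose behaviour is a noisy Gaussian reading, the interval classifiers standing in for interrogators, and all parameter names are assumptions made for the sake of a runnable example.

```python
import random

TRUE_MEAN = 5.0  # hidden parameter of the system under observation (toy assumption)

def genuine_sample(rng):
    """One observation of the real system's behaviour."""
    return rng.gauss(TRUE_MEAN, 1.0)

def model_sample(mu, rng):
    """One observation produced by a candidate model with parameter mu."""
    return rng.gauss(mu, 1.0)

def accepts(classifier, x):
    """Interrogator verdict: label x 'genuine' if it falls inside the interval."""
    centre, width = classifier
    return abs(x - centre) < width

def fitness(models, classifiers, n=100, seed=0):
    """Score both populations, mirroring the rewards described in the text:
    classifiers earn points for correct verdicts, models for fooling them."""
    rng = random.Random(seed)
    m_fit = [0] * len(models)
    c_fit = [0] * len(classifiers)
    for ci, clf in enumerate(classifiers):
        for _ in range(n):
            if accepts(clf, genuine_sample(rng)):
                c_fit[ci] += 1           # genuine data correctly accepted
        for mi, mu in enumerate(models):
            for _ in range(n):
                if accepts(clf, model_sample(mu, rng)):
                    m_fit[mi] += 1       # model fooled the interrogator
                else:
                    c_fit[ci] += 1       # counterfeit correctly rejected
    return m_fit, c_fit

# An accurate model (mu near TRUE_MEAN) fools the interrogator far more often
# than a poor one (mu far away):
m_fit, c_fit = fitness(models=[5.0, 50.0], classifiers=[(5.0, 3.0)], n=200)
```

In the full method, both populations would then be optimised against these scores in alternation, so that models and interrogators improve together; the scoring function above only illustrates the adversarial reward structure.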
Dr Gross calls his approach ‘Turing Learning’ and explains that the advantage of this methodology is that humans no longer need to tell machines what to look for.
If, for example, you wanted a robot to paint like Picasso, conventional machine learning algorithms would rate the robot's paintings on how closely they resembled a work by Picasso. Someone, however, would first have to program the algorithms with what counts as similar to a Picasso. Turing Learning requires no such prior knowledge: the robot would simply be rewarded if its painting was judged genuine by the interrogators. In this case, Turing Learning would be learning how to paint and how to interrogate at the same time.
Dr Gross hopes Turing Learning could lead to advances in technology and science. The rules governing artificial or natural systems could be discovered, especially where behaviour cannot be easily characterised using comparison metrics. Computer games could become more realistic, as virtual players would be able to observe and take on the characteristic traits of their human counterparts. Rather than simply copying the observed behaviour, they would learn what makes a human player distinctive.
One possible application of these results could be to create algorithms that detect abnormalities in non-human behavior. This would be useful for the health monitoring of animals and for the preventive maintenance of cars, airplanes and other machines.
Security applications such as online identity verification or lie detection could also use Turing Learning.
Although Turing Learning has so far only been applied to robot swarms, Dr Gross and his team next plan to use it to reveal the workings of animal collectives such as colonies of bees or schools of fish. The results could eventually inform policy for their protection, as they would provide a better understanding of the factors that influence these animals' behaviour.
Ultimately, this could lead to advances in the world of technology where machines would be able to predict human behavior among other things.
The study was published in the journal Swarm Intelligence.