Channel: CNN iReport - Latest

Deceptive robots hint at machine self-awareness


A robot that tricks its opponent in a game of hide and seek is a step towards machines that can intuit our thoughts, intentions and feelings

 

ROVIO the robotic car is creating a decoy. It trundles forward and knocks over a marker pen stood on its end. The pen is positioned along the path to a hiding place - but Rovio doesn't hide there. It sneaks away and conceals itself elsewhere.

 

When a second Rovio arrives, it sees the felled pen and assumes that its prey must have passed this way. It rolls onwards, but is soon disappointed.

 

The behaviour of the deceptive Rovio represents something much more significant than a crude game of robot hide-and-seek. It is a demonstration of an aspect of social intelligence known as theory of mind, which humans only develop around the age of 4 or 5. If robots can be made to display theory of mind in other situations, it could endow them with a sophisticated intelligence. They might then be able to reason about the thoughts, intentions and even feelings of people and other robots.

 

"It was definitely exciting to see it work," says Alan Wagner of the Georgia Institute of Technology in Atlanta, who programmed Rovio with colleague Ron Arkin. "We have expanded the boundary of understanding deception and how deception relates to artificial systems."

 

The defining feature of theory of mind is the ability to model the beliefs and intentions of others as distinct from one's own. Robots have previously hinted at this ability by performing a variety of mental tricks (see "Becoming aware").

 

Deception, though, is definite progress. It requires not only the modelling of a distinct mind but also the ability to anticipate and manipulate the actions of others. "A deceiving agent knows what the other agent knows and intends to change what the other agent knows," says Liane Young, a cognitive scientist at the Massachusetts Institute of Technology.

 

To demonstrate artificial deception, Wagner and Arkin recruited two Rovio robots, made by WowWee in Hong Kong, for a game of hide-and-seek. Before the game, the robots were released to learn about the game environment and the effect of their own actions on it.

 

The environment featured three adjacent hiding places. On the path leading towards each of these hidey-holes, the researchers placed a marker pen stood on its end (see diagram).

 

Programmed to learn, seeker Rovio first moved into one of the caches, knocking the pen over on its way. The pen was reset and the robot repeated the process, 10 times in all. Using a combination of its camera and probabilistic software, it learned to associate the fallen pen with the hiding place.
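The learning phase described above amounts to estimating, from repeated trials, how likely each hiding place is given a fallen pen. A minimal sketch of that idea, using simple frequency counting — the function and variable names here are illustrative assumptions, not the researchers' actual code:

```python
from collections import Counter

def learn_association(trials):
    """Estimate P(hiding place | fallen pen) from (pen, place) observations.

    trials: list of (fallen_pen, hiding_place) tuples recorded during
    the exploration phase. This is a hypothetical reconstruction of the
    kind of probabilistic model the article describes.
    """
    counts = {}
    for pen, place in trials:
        counts.setdefault(pen, Counter())[place] += 1
    # Normalise the counts for each pen into conditional probabilities.
    model = {}
    for pen, place_counts in counts.items():
        total = sum(place_counts.values())
        model[pen] = {place: n / total for place, n in place_counts.items()}
    return model

# Ten trials in which knocking over pen A always preceded hiding in cache 1,
# mirroring the 10 repetitions described in the article.
trials = [("pen_A", "cache_1")] * 10
model = learn_association(trials)
print(model["pen_A"]["cache_1"])  # → 1.0
```

After enough trials the seeker treats a felled pen as strong evidence that its prey went down the corresponding corridor — which is exactly the belief the hider later exploits.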

 

Hider Rovio came to the same conclusions as it explored the environment, but crucially, it had been given the ability to learn how to send a false signal.

 

The game then began. Hider Rovio's learned knowledge allowed it to predict what seeker Rovio would do in the same situation. It calculated that knocking over a pen and sneaking elsewhere would fool its seeker. "It uses its own model of itself to determine how best to deceive the other individual," says Wagner.
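Wagner's point — that the hider reuses its own learned model as a stand-in for the seeker's beliefs — can be sketched as a small search over actions: knock over a pen, hide somewhere the model predicts the seeker will not look. Again, the names and structure are assumptions for illustration:

```python
def choose_deception(model, pens, places):
    """Pick the (pen to fell, place to hide) pair predicted to fool the seeker.

    model maps each pen to the seeker's predicted search probabilities,
    i.e. the hider's own learned model used as a proxy for its opponent's.
    This is a hypothetical sketch, not the published implementation.
    """
    best, best_score = None, -1.0
    for pen in pens:
        predicted = model.get(pen, {})  # where the seeker goes if it sees this pen felled
        for place in places:
            # Probability the seeker searches somewhere other than our hiding place.
            p_wrong = 1.0 - predicted.get(place, 0.0)
            if p_wrong > best_score:
                best_score = p_wrong
                best = (pen, place)
    return best

# Three corridors, each with a tell-tale pen, as in the experiment's setup.
model = {"pen_A": {"cache_1": 1.0},
         "pen_B": {"cache_2": 1.0},
         "pen_C": {"cache_3": 1.0}}
pen, place = choose_deception(model, ["pen_A", "pen_B", "pen_C"],
                              ["cache_1", "cache_2", "cache_3"])
print(pen, place)  # → pen_A cache_2: fell pen A, hide elsewhere
```

The decoy in the opening scene is exactly this output: a felled marker on one path, and the robot concealed along another.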

 

In 15 of the 20 times the game was played, the seeker chose the wrong corridor (International Journal of Social Robotics, DOI: 10.1007/s12369-010-0073-8).

 

However, many researchers point out that it is unsurprising that the deceiving robot should succeed, given the extent of its pre-programming. "It seems to me the theory of mind is in the experimenter, not the robot," says Sara Mitri, a roboticist and evolutionary biologist at Harvard University. "It's like it's staged; [the robot] is like a puppet rather than a child."

 

In 2009, Mitri was part of a team at the Swiss Federal Institute of Technology in Lausanne (EPFL) which created basic robots that evolved the ability to discourage other robots from accessing a common, finite "food" source, without being programmed to do so (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.0903152106).

 

Wagner acknowledges that the robots are pre-programmed, but emphasises that each one learns on its own. "It would learn how its movement affected the markers, which told it basically how to deceive," he says.

 

Raúl Arrabales, who researches machine consciousness at the Carlos III University of Madrid in Spain, agrees. "It is actually an implementation of theory of mind, because it is using the learning mechanism to update the model of the [opponent] or its own model." He notes, however, that the robot can't transfer the knowledge autonomously to another situation. "It is not like a human, it is something in between," he says.

 

Humans have a generalised concept of deception, which wasn't demonstrated by these robots. "It was implemented on a very specific task, for a very particular interaction," says Kevin Gold, an AI researcher at the Rochester Institute of Technology in New York state. "It's a far cry from human theory of mind because it is so specific to the task."

 

http://www.newscientist.com/article/mg20727794.800-deceptive-robots-hint-at-machine-selfawareness.html?DCMP=OTC-rss&nsref=online-news

