Google DeepMind researchers have been studying an age-old problem: the behavior of two competing entities. They took the question one step further by having two AI agents face each other, with the aim of seeing how they would react.
The field of artificial intelligence has made marked progress, evolving greatly in recent years. As things stand, most expect artificial intelligence, or AI, to become a common occurrence.
AI computer agents, or simply AI agents, are given tasks of their own. They could help manage day-to-day systems, such as traffic light sequencing, or they may be used for more complex procedures.
But what could happen if their tasks conflict with another AI's? How would they react? That is what the Google DeepMind researchers set out to determine.
The results were revealed earlier this week, published on February 09 in a research paper titled "Multi-agent Reinforcement Learning in Sequential Social Dilemmas". Google DeepMind researchers also offered additional details in a blog post released on the official project page.
The research team set out to analyze the reactions of the two AI agents when placed in a series of "social dilemmas".
A "social dilemma" is a generic term for situations in which an individual can profit by being selfish, but in which both parties lose if everyone acts selfishly.
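The classic textbook example of a social dilemma is the prisoner's dilemma. The sketch below illustrates the general idea; the payoff numbers are invented for illustration and are not taken from the DeepMind paper.

```python
# Illustrative payoff table for a classic social dilemma (the prisoner's
# dilemma). Each entry maps (my_move, their_move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # both cooperate: decent reward for each
    ("defect",    "cooperate"): 5,  # exploiting a cooperator: best for me
    ("cooperate", "defect"):    0,  # being exploited: worst for me
    ("defect",    "defect"):    1,  # mutual selfishness: both do poorly
}

def total_payoff(move_a, move_b):
    """Return the combined payoff of both players."""
    return PAYOFF[(move_a, move_b)] + PAYOFF[(move_b, move_a)]

# Defecting is individually tempting (5 > 3), yet if both players defect,
# the joint outcome is worse than mutual cooperation.
print(total_payoff("cooperate", "cooperate"))  # 6
print(total_payoff("defect", "defect"))        # 2
```

The tension is visible in the numbers: each player does better by defecting no matter what the other does, yet mutual defection leaves both worse off than mutual cooperation.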
Google DeepMind's tests were somewhat simpler than that: the AI agents were placed in quite basic video games, Gathering and Wolfpack. According to the releases, the study results were interesting, though perhaps not surprising.
Just like humans, the AI agents showed context-dependent behavior, becoming more antagonistic or more cooperative depending on the situation.
In Gathering, for example, the agents played along nicely, but only while apple stocks were high. As the apples began to dwindle, the agents started zapping one another. The researchers also noted that when a "cleverer" AI was introduced into the game, it decided from the very beginning that zapping was the way to go.
In contrast, Wolfpack revealed a different result: the cleverer the AI, the more it collaborated with the others. Learning to work together requires more computational power, but it also helps the agents track and herd the game's prey.
The computational requirements could also, in theory, explain the more aggressive behavior. Zapping is considered a more challenging task than apple gathering, and zapping actions also demand more computation.
So what were the study results? As in life, it all depends on the context and, most importantly, on the rules. The AI agents based their behavior on the rules they were given: in Gathering, the rules offered a higher reward for zapping, so the AI chose that path.
In Wolfpack, by contrast, the rules were more rewarding when the agents collaborated, so they chose to work together. The bottom line for future studies?
Scientists will face a challenge going forward: they will have to make sure that the AI agents know the rules, and also that those rules are the right ones.
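The article's central point, that an agent's behavior simply follows whichever rules reward it, can be sketched with a toy example. This is illustrative only: the reward numbers and action names are invented, and the paper's actual agents were deep reinforcement learners, not simple greedy choosers.

```python
# Toy sketch (not the paper's deep RL setup): a greedy agent that always
# picks the action with the highest reward under the rules it is given.
# Changing the reward rules flips the agent between aggression and
# cooperation, mirroring the Gathering vs. Wolfpack contrast.

def choose_behavior(rewards):
    """Pick the action with the highest reward under the given rules."""
    return max(rewards, key=rewards.get)

# Gathering-like rules once apples grow scarce: zapping pays off more.
gathering_rules = {"cooperate": 1.0, "zap": 2.5}

# Wolfpack-like rules: joint hunting pays off more than hunting alone.
wolfpack_rules = {"cooperate": 3.0, "hunt_alone": 1.0}

print(choose_behavior(gathering_rules))  # zap
print(choose_behavior(wolfpack_rules))   # cooperate
```

The same decision rule produces opposite behaviors; only the rewards differ, which is why getting the rules right matters so much.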
Google's DeepMind team stated in their blog post that this research could help them better understand multi-agent AI systems and determine how to better control them. As it is, most such systems will depend on continuous cooperation.
More details on both the study and the games involved can be found in the official Google DeepMind blog post, where the research paper is also available.
Image Source: Pixabay