Google Tells AI Agents to Behave Like 'Believable Humans' to Create 'Artificial Society'

by Chloe Xiang

Researchers at Google and Stanford used ChatGPT to generate human-like characters who live and interact in a contained, video game-like world called Smallville. Smallville features 25 characters with preloaded personas who wake up, go to sleep, make breakfast, interact with each other, and attend each other's parties in an attempt to mimic human behavior as closely as possible. One popular AI observer likened the experiment to an early version of Westworld, though it's more like a video game demo where the characters' actions and dialogue are autogenerated by AI.

The researchers fed ChatGPT one paragraph per character describing their occupation, their relationships with other agents, and the memories they hold, and then began the simulation, which they describe as "Interactive Simulacra of Human Behavior."
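The article doesn't reproduce the researchers' prompts, but the basic mechanic is that a one-paragraph persona serves as the agent's seed, and the language model is then asked, in character, what the agent does next. Here is a minimal sketch of that idea; the persona wording, the prompt, and the call_llm stub are illustrative assumptions, not the researchers' actual code.

```python
# Minimal sketch: seed an agent with a one-paragraph persona and ask a
# chat model what it does next. `call_llm` is a placeholder for whatever
# chat-model API is used (ChatGPT in the study); the persona and prompt
# wording here are illustrative, not taken from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a chat model and return its reply."""
    raise NotImplementedError("connect this to an LLM provider")

PERSONA = (
    "Klaus Mueller lives in Smallville and is researching the effects of "
    "gentrification in low-income communities for a research paper. He knows "
    "several of his neighbors and often chats with them about his work."
)

def next_action(persona: str, current_time: str, observation: str) -> str:
    # The persona plus the current situation becomes the model's context;
    # the model replies with the character's next action in plain language.
    prompt = (
        f"{persona}\n\n"
        f"It is {current_time}. {observation}\n"
        "In one short sentence, what does this character do next?"
    )
    return call_llm(prompt)

# Example usage (once call_llm is wired up):
# print(next_action(PERSONA, "8:00 am", "Klaus has just finished breakfast."))
```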

These characters, or generative agents, are notably able to retrieve information from their "memory," which is a comprehensive record of the agent's experiences. The agents are able to perceive their environments and then use their memories to determine an action. The agents are also able to reflect, which allows them to create new insights and long-term plans. The researchers "interviewed" each character after the simulation had been running for a while and found that some of them had developed careers and political interests on their own.
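Put in code-shaped terms, each agent's memory is an append-only log of observations, and deciding what to do involves pulling back the records most relevant to the current situation. The sketch below is a toy version of that loop; the recency-plus-keyword scoring is a stand-in assumption, not the retrieval function described in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemoryRecord:
    timestep: int   # when the experience happened
    text: str       # natural-language description of the experience

@dataclass
class AgentMemory:
    """Toy memory stream: a comprehensive, append-only record of the
    agent's experiences plus a crude retrieval step."""
    records: List[MemoryRecord] = field(default_factory=list)

    def perceive(self, timestep: int, observation: str) -> None:
        # Everything the agent observes gets appended to its memory.
        self.records.append(MemoryRecord(timestep, observation))

    def retrieve(self, query: str, now: int, k: int = 3) -> List[MemoryRecord]:
        # Score each memory by recency plus word overlap with the query.
        # (Stand-in for the paper's retrieval; the weighting is arbitrary.)
        query_words = set(query.lower().split())

        def score(m: MemoryRecord) -> float:
            recency = 1.0 / (1 + (now - m.timestep))
            relevance = len(query_words & set(m.text.lower().split()))
            return recency + relevance

        return sorted(self.records, key=score, reverse=True)[:k]

memory = AgentMemory()
memory.perceive(1, "Sam mentioned he plans to run for mayor.")
memory.perceive(2, "Made breakfast and went to work.")
for record in memory.retrieve("who plans to run for mayor?", now=3, k=1):
    print(record.timestep, record.text)  # -> 1 Sam mentioned he plans to run for mayor.
```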

Sam, a character in Smallville, for example, decided to run for mayor of the town after being "involved in local politics for years." Sam told other AI agents about his plans, and researchers studied how this bit of news spread throughout the town. Another agent, Klaus Mueller, was "researching the effects of gentrification in low-income communities for a research paper."

The researchers believe that the ability to create believable simulations of human behavior in this metaverse could lead to applications in a number of virtual spaces, including powering non-playable characters. "We then demonstrate the potential of generative agents by manifesting them as non-player characters in a Sims-style game world and simulating their lives in it. Evaluations suggest that our architecture creates believable behavior," the researchers concluded. "Going forward, we suggest that generative agents can play roles in many interactive applications ranging from design tools to social computing systems to immersive environments."

The characters have developed specific routines, such as waking up, taking a shower, cooking breakfast, interacting with their families, and then going to work every day.

Screenshot from paper

The online replay of the simulation looks like a pixelated, 16-bit video game, similar to Harvest Moon, with an isometric view of the characters' homes and outdoor spaces. The characters are represented by their initials in the simulation, but scrolling down the page lets users click on each character to see more detail, including their current action, location, and conversation.

The researchers wrote that a number of emergent behavior patterns appeared among the generative agents. The first was information diffusion: agents shared information with each other, and it spread from agent to agent. The second was that agents formed new relationships over time and remembered their past interactions. Finally, the agents were able to coordinate with each other; one agent, for example, decided to throw a Valentine's Day party.
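The first of those observations, information spreading from agent to agent, is the kind of thing that can be checked directly against the agents' memories: after the simulation runs, count how many agents hold a memory that mentions the piece of news. The article doesn't detail how the researchers measured the spread; the snippet below is only a simplified sketch of the idea, with made-up agent names and memory texts.

```python
# Sketch of measuring information diffusion: after a simulated stretch of
# time, check which agents hold a memory mentioning a given piece of news
# (Sam's mayoral run, say). Agent names and memories here are made up.

from typing import Dict, List

def knows_about(memories: List[str], keyphrase: str) -> bool:
    """True if any memory text mentions the keyphrase."""
    return any(keyphrase.lower() in memory.lower() for memory in memories)

def diffusion_rate(town: Dict[str, List[str]], keyphrase: str) -> float:
    """Fraction of agents whose memories mention the keyphrase."""
    aware = [name for name, memories in town.items()
             if knows_about(memories, keyphrase)]
    return len(aware) / len(town)

town = {
    "Sam":   ["I decided to run for mayor.", "Went to work."],
    "Klaus": ["Worked on the gentrification paper.", "Sam is running for mayor."],
    "Maria": ["Cooked breakfast.", "Helped plan a Valentine's Day party."],
}

print(f"{diffusion_rate(town, 'mayor'):.0%} of agents have heard about the mayoral run")
```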

Screenshot from paper

"We posit that, based on the work summarized above, large language models can become a key ingredient for creating believable agents," the researchers wrote. "Creating NPCs with believable behavior, if possible, could enhance player experiences in games and interactive fictions by enabling emergent narratives and social interactions with the agents."

"However, more importantly, game worlds provide increasingly realistic representations of real-world affordances; these simulated worlds offer accessible testbeds for developers of believable agents to finesse the agents' cognitive capabilities without worrying about implementing robotics in the real world or creating simulation environments from scratch," the researchers added.

The implementation of generative agents into video games has the potential to make those fictional worlds more robust and interactive. It's easy to imagine AI like this being used to create more interesting NPCs in video games. The researchers wrote that they think they've created "believable individual and emergent social behaviors," which has led to "believable simulations of human behavior" in an "artificial society."

There are people who hope to bring this simulated reality to the real world. Artur Sychov, the founder of a metaverse company called Somnium Space, is creating a project called "Live Forever" in which people can talk to their relatives in the metaverse, even after they pass away. ChatGPT has also been integrated into his metaverse and has been able to retain "short-term memory."

The researchers behind the generative agents wrote that there are a number of important ethical concerns that need to be addressed. One risk is "people forming parasocial relationships with generative agents even when such relationships may not be appropriate," they wrote. A second risk is the impact of errors, as when an application draws an incorrect conclusion about a user's goals based on an agent's predictions. They also said that the existing risks surrounding generative AI still apply, including the production of misinformation and other malicious content.

External Content
Source RSS or Atom Feed
Feed Location http://motherboard.vice.com/rss
Feed Link http://motherboard.vice.com/