
Google Researchers Can Create an AI That Thinks a Lot Like You After Just a Two-Hour Interview

by hubie from SoylentNews on (#6THWP)

upstart writes:

After an average of 6,000 words, Stanford and Google researchers can spin up a generative agent that will act a lot like you do:

Stanford University researchers paid 1,052 people $60 to read the first two lines of The Great Gatsby to an app. That done, an AI that looked like a 2D sprite from an SNES-era Final Fantasy game asked the participants to tell the story of their lives. The scientists took those interviews and crafted them into an AI they say replicates the participants' behavior with 85% accuracy.
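The "85% accuracy" figure roughly means the agent gave the same answers the participant did on a battery of survey questions (the paper normalizes this against participants' own consistency when retaking the surveys). A minimal sketch of that kind of answer-level agreement, with entirely hypothetical names and data:

```python
# Illustrative sketch only -- not the paper's code. Measures the fraction of
# survey questions on which a generative agent's answers match the human's.

def agreement_accuracy(human_answers, agent_answers):
    """Fraction of questions where the agent's answer matches the human's."""
    if len(human_answers) != len(agent_answers):
        raise ValueError("answer lists must be the same length")
    matches = sum(h == a for h, a in zip(human_answers, agent_answers))
    return matches / len(human_answers)

# Hypothetical example: the agent matches on 17 of 20 items -> 0.85
human = ["yes", "no", "agree"] * 6 + ["yes", "no"]
agent = list(human)
agent[0], agent[5], agent[9] = "no", "yes", "disagree"  # three mismatches
print(agreement_accuracy(human, agent))  # 0.85
```

In the study itself the raw agreement is divided by each participant's test-retest consistency, so the headline number reflects how close the agent gets to the human's own reliability rather than to a perfect score.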

The study, titled Generative Agent Simulations of 1,000 People, is a joint venture between Stanford and scientists working for Google's DeepMind AI research lab. The pitch is that creating AI agents based on random people could help policymakers and business people better understand the public. Why use focus groups or poll the public when you can talk to them once, spin up an LLM based on that conversation, and then have their thoughts and opinions forever? Or, at least, as close an approximation of those thoughts and feelings as an LLM is able to recreate.

"This work provides a foundation for new tools that can help investigate individual and collective behavior," the paper's abstract said.

"How might, for instance, a diverse set of individuals respond to new public health policies and messages, react to product launches, or respond to major shocks?" the paper continued. "When simulated individuals are combined into collectives, these simulations could help pilot interventions, develop complex theories capturing nuanced causal and contextual interactions, and expand our understanding of structures like institutions and networks across domains such as economics, sociology, organizations, and political science."

All those possibilities based on a two-hour interview fed into an LLM that answered questions mostly like their real-life counterparts.

[...] The entire document is worth reading if you're interested in how academics are thinking about AI agents and the public. It did not take long for researchers to boil down a human being's personality into an LLM that behaved similarly. Given time and energy, they can probably bring the two closer together.

This is worrying to me. Not because I don't want to see the ineffable human spirit reduced to a spreadsheet, but because I know this kind of tech will be used for ill. We've already seen stupider LLMs, trained on public recordings, trick grandmothers into giving bank information to an AI impersonating a relative after a quick phone call. What happens when those machines have a script? What happens when they have access to purpose-built personalities based on social media activity and other publicly available information?

What happens when a corporation or a politician decides the public wants and needs something based not on their spoken will, but on an approximation of it?

Can it join my Zoom calls, please?

Original Submission

Read more of this story at SoylentNews.
