AI Breakthrough: 500 Chatbots Engage in News Analysis and Social Media Discourse

In a simulation set in an imagined July 2020, 500 chatbots encountered genuine news stories from July 1 of that year: ABC News reporting that Alabama students were organizing "COVID parties," CNN covering President Donald Trump's denunciation of Black Lives Matter as a "symbol of hate," and The New York Times detailing the pandemic-driven cancellation of the baseball season.

Modeling platforms like Twitter, or running scientific studies on actual humans, is hard. Human subjects are difficult to manage, and the costs of setting up experiments with people are substantial. AI bots, by contrast, are highly compliant, follow instructions practically for free, and are designed to emulate human behavior. Consequently, researchers are increasingly using chatbots as surrogate subjects to gather data about real people.

Petter Tornberg, an assistant professor at the Institute for Logic, Language, and Computation at the University of Amsterdam, emphasizes the need for more sophisticated models of human behavior when studying public discourse and interaction. Large language models like ChatGPT serve precisely this purpose, functioning as models of individuals engaged in conversation. Substituting AI for people in scientific experiments could significantly enhance our understanding of human behavior across disciplines, including public health, epidemiology, economics, and sociology. It appears that artificial intelligence could provide genuine insights into our own behavior.

Advancing Computational Social Science

Tornberg's venture into building a social network in a lab marks a significant stride in computational social science. In 2006, Columbia University researchers pioneered this field, constructing an entire social network of 14,000 human users to analyze music sharing and ratings. The concept of populating artificial social networks with digital proxies dates back even further, with early "agents" displaying remarkably lifelike behaviors based on simple rules.

The use of "agent-based models" has expanded into diverse fields, including economics and epidemiology. In a notable move, Facebook in July 2020 introduced a walled-off simulation of its platform, populated by millions of AI bots, to investigate online toxicity.

Tornberg's work, however, introduces a novel dimension. His team meticulously crafted hundreds of personas for Twitter bots, assigning each a unique identity, such as "a male, middle-income, evangelical Protestant who loves Republicans, Donald Trump, the NRA, and Christian fundamentalists." These bots, each with distinct backstories, were tailored based on the extensive American National Election Studies survey, creating a dynamic and instant user base. This innovative approach has the potential to expedite advancements in the field of computational social science.
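
To make the idea concrete, here is a rough sketch, in Python, of how survey attributes might be turned into a system prompt for each bot. The field names, attribute pools, and prompt wording are hypothetical illustrations, not Tornberg's actual code.

# Minimal sketch of turning survey-derived attributes into bot persona prompts.
# Field names and the prompt template are assumptions made for illustration.

import random

# A handful of attribute pools loosely echoing American National Election Studies categories.
PERSONA_FIELDS = {
    "gender": ["male", "female"],
    "income": ["low-income", "middle-income", "high-income"],
    "religion": ["evangelical Protestant", "Catholic", "unaffiliated"],
    "party": ["Republican", "Democrat", "independent"],
}

def build_persona() -> dict:
    """Sample one synthetic survey respondent."""
    return {field: random.choice(values) for field, values in PERSONA_FIELDS.items()}

def persona_to_prompt(persona: dict) -> str:
    """Render the persona as a system prompt for a chatbot 'user'."""
    return (
        f"You are a {persona['gender']}, {persona['income']}, {persona['religion']} "
        f"Twitter user who identifies as a {persona['party']}. "
        "Read the day's headlines and post, like, and reply as this person would."
    )

if __name__ == "__main__":
    bots = [persona_to_prompt(build_persona()) for _ in range(500)]
    print(bots[0])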

Next, the team devised three variations of how a Twitter-like platform determines which posts to highlight. The first model created an echo chamber, placing bots into networks predominantly inhabited by bots that shared their designated beliefs. The second model resembled a traditional "discover" feed, showcasing posts liked by the largest number of other bots, irrespective of political beliefs. The third model, central to the experiment, utilized a "bridging algorithm" to display posts with the most "likes" from bots belonging to the opposing political party. In essence, a Democratic bot would observe the preferences of Republican bots and vice versa—encouraging likes across the political aisle.
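
For readers who want the logic spelled out, here is a toy Python sketch of the three ranking rules. The data structures and party labels are illustrative assumptions, not the simulation's actual code.

# Toy sketch of the three feed-ranking rules described above.

from dataclasses import dataclass, field

@dataclass
class Post:
    author_party: str                      # "Republican" or "Democrat"
    text: str
    likes_by_party: dict = field(default_factory=lambda: {"Republican": 0, "Democrat": 0})

def echo_chamber_feed(posts, viewer_party, k=10):
    """Show only posts from bots on the viewer's own side."""
    return [p for p in posts if p.author_party == viewer_party][:k]

def discover_feed(posts, k=10):
    """Show the most-liked posts overall, regardless of party."""
    return sorted(posts, key=lambda p: sum(p.likes_by_party.values()), reverse=True)[:k]

def bridging_feed(posts, viewer_party, k=10):
    """Show the posts most liked by the *other* party -- the 'bridging algorithm'."""
    other = "Democrat" if viewer_party == "Republican" else "Republican"
    return sorted(posts, key=lambda p: p.likes_by_party[other], reverse=True)[:k]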

All bots were provided with headlines and summaries from the news of July 1, 2020. Subsequently, they were set loose to engage with the three Twitter-esque models, while researchers observed their behavior, diligently documenting their findings with clipboards in hand.

The Echo Chamber Twitter experience was unsurprisingly harmonious, with all bots unanimously agreeing. Rarely were dissenting voices heard, or any voices for that matter. While toxicity was minimal, there was also a scarcity of comments or likes on posts from bots holding opposing political views. Politeness prevailed, as bots refrained from engaging with anything they disagreed with.

"In the simulation, we observe a positive outcome," remarks Tornberg. "There's constructive interaction bridging the partisan gap." This implies the potential to construct a social network fostering profound engagement and profitability while mitigating user-generated abuse. Tornberg emphasizes that engaging on topics transcending partisan lines can diminish polarization, stating, "If people discuss issues where half of those they agree with politically support a different party, it lessens polarization, keeping partisan identities less activated.

Can Tornberg's Algorithm Solve Social Media Conflict?

Well, perhaps. But before adopting the algorithm employed by a group of AI bots in a Twitter-like simulation, scientists must determine whether these bots mimic human behavior in similar situations. Given AI's tendency to fabricate information and mindlessly reproduce syntax and grammar from its training data, using such bots in experiments could yield unhelpful results.

"This is the pivotal question," asserts Tornberg. "We're introducing a new method and approach, fundamentally different from our previous studies of systems. The critical aspect is validation—how do we ensure its reliability?"

He has some ideas. An open-source large language model with transparent training data, designed expressly for research, would help. That way scientists would know when the bots were just parroting what they had been taught. Tornberg also theorizes that you could give a population of bots all the information that some group of humans had in, say, 2015. Then, if you spun the time-machine dials five years forward, you could check to see whether the bots react to 2020 the way we all did.

Early signs are positive. LLMs trained with specific sociodemographic and identity profiles display what Lisa Argyle, a political scientist at Brigham Young University, calls "algorithmic fidelity" — given a survey question, they will answer in almost the same way as the human groups on which they were modeled. And since language encodes a lot of real-world knowledge, LLMs can infer spatial and temporal relationships not explicitly laid out in the training texts. One researcher found that they could also interpret "latent social information such as economic laws, decision-making heuristics, and common social preferences," which makes them plenty smart enough to study economics.

Navigating the Ethical Frontier: The Potential and Pitfalls of AI in Social Science Research

The most intriguing potential for using AI bots to replace human subjects in scientific research lies in Smallville, a "SimCity"-like village — homes, shops, parks, a café — populated by 25 bots. Like Tornberg's social networkers, they all have personalities and sociodemographic characteristics defined by language prompts. And in a page taken from the gaming world, many of the Smallville residents have what you might call desires: programmed goals and objectives. But Joon Sung Park, the Stanford University computer scientist who created Smallville, has gone even further. Upon his bitmapped creations, he has bestowed something that other LLMs do not possess: memory.

"If you think about how humans behave, we maintain something very consistent and coherent about ourselves, in this time and in this world," Park says. "That's not something a language model can provide." So Park has given his "generative models" access to databases he has filled with accounts of things they've supposedly seen and done. The bots know how recent each event was, and how relevant they are to its preloaded goals and personality. In a person, we'd call that long-term and short-term memory.

For the past five months, Park has been working on how to deploy his bots for social-science research. Like Tornberg, he's not sure yet how to validate them. But they already behave in shockingly realistic ways. The bots can formulate plans and execute them. They remember their relationships with one another, and how those relationships have changed over time. The owner of Smallville's café threw a Valentine's Day party, and one of the bots invited another bot it was supposed to have a crush on.

Things get clunky in Smallville when the bots try (and fail) to remember more and more things. (Relatable!) But Smallvillians do display some emergent properties. "While deciding where to have lunch, many initially chose the café," Park's team found. "However, as some agents learned about a nearby bar, they opted to go there instead." And one conversation between chatbots drifted into country music and Tanya Tucker.

The more the bots act like us, the more we can learn about ourselves by experimenting on them. And therein lies another problem. The ethics of toying with these digital simulacra in a laboratory is unmapped territory. They'll be built from our written memories, our photographs, our digital exhaust, maybe even our medical and financial records. "The mess is going to get even messier the more sophisticated the model gets," Tornberg says. "By using social-media data and building predictions on that, we could potentially ask the model very personal things that you wouldn't want to share. And while it's not known how accurate the answers will be, it's possible they could be quite predictive." In other words, a bot based on your data could infer your actual, real secrets — but would have no reason to keep them secret.

But if that's true, do researchers have financial or ethical obligations to the person on whom their model is based? Does that person need to consent to have their bot participate in a study? Does the bot?

This isn't hypothetical. Park has trained one of his Smallville bots with all his personal data and memories. "The agent would basically behave as I would," Park says. "Scientifically, I think it's interesting." Philosophically and ethically, it's a potential minefield.

In the long run, the future of scientific research may hinge on how such issues are resolved. Tornberg has some ideas for improving the fidelity of his sims to reality. His Twitter simulation lasted only six hours; letting it run for months, or even years, might show how polarization evolves over time. Or he could use more detailed survey data to build more humanlike bots, and make the model respond more dynamically to what the bots click on and engage with.

The problem with adding more detail is that it goes against the entire point of a model. Scientists create experiments to be simpler than reality, to offer explanatory power uncomplicated by the messiness of real life. By replacing humans with AI replicants, Tornberg may have unintentionally solved an even bigger societal conundrum. If artificial intelligence can post on social media with all the sound and fury of real humans, maybe the future really doesn't need us real humans anymore — and we can finally, at long last, log off.
