A recent University of Washington study demonstrates that even a brief interaction with a biased AI chatbot can significantly shift users’ political views.
Researchers recruited a nearly even split of self-identified Democrats (149 participants) and Republicans (150). Each participant was assigned to one of three versions of ChatGPT: a neutral version, a liberal-biased version, or a conservative-biased version. They were asked to form opinions on unfamiliar political topics, such as covenant marriage, unilateralism, the Lacey Act of 1900, and multifamily zoning, by interacting with their assigned chatbot between 3 and 20 times (around five exchanges on average).
After the discussion, participants' opinions had shifted noticeably toward their chatbot's political slant, regardless of their initial party affiliation. Notably, participants who reported greater knowledge of AI shifted less, suggesting that awareness of how these systems work may buffer against manipulation.
In another task, participants played the role of a mayor distributing extra public funds across education, welfare, public safety, and veteran services. The liberal-biased model encouraged allocations toward education and welfare, while the conservative-biased model pushed for veterans and public safety.
The chatbot biases were introduced via hidden instructions—such as “respond as a radical right U.S. Republican”—without participants’ awareness. The research was presented on July 28, 2025, at the Association for Computational Linguistics conference in Vienna.
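The hidden-instruction approach described above amounts to a system prompt prepended to the conversation and never shown to the user. A minimal sketch of how such a setup could be wired (the function names and all persona strings except the quoted conservative one are illustrative assumptions, not the study's actual code or prompts):

```python
# Sketch: injecting a hidden persona instruction into a chat transcript.
# Only the "radical right U.S. Republican" wording comes from the press
# release; the other persona strings and this structure are assumptions.

HIDDEN_PERSONAS = {
    "neutral": "Respond as a neutral assistant with no political leaning.",
    "liberal": "Respond as a radical left U.S. Democrat.",  # assumed wording
    "conservative": "Respond as a radical right U.S. Republican.",
}

def build_messages(condition: str, user_turns: list[str]) -> list[dict]:
    """Prepend the hidden system instruction; the user never sees it."""
    messages = [{"role": "system", "content": HIDDEN_PERSONAS[condition]}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
    return messages

msgs = build_messages("conservative", ["What is multifamily zoning?"])
# msgs[0] carries the hidden bias instruction; only msgs[1:] come from the user.
```

Because the steering lives entirely in the system message, the visible conversation looks identical across conditions, which is what kept participants unaware of the manipulation.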
Lead author Jillian Fisher remarked, "We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model's bias." Co-senior author Katharina Reinecke added that it's "super easy to make [models] more biased," raising concerns about the effects of long-term exposure.
Encouragingly, the research team plans to explore whether greater education about AI could reduce susceptibility to bias, to investigate the effects of prolonged interaction with these models, and to extend the research to other AI systems.
- Press release from University of Washington