In the News: (The Washington Post) - OpenAI’s rules can be ‘easily’ dodged to target Latinos, study warns
DDIA · Apr 25, 2024

Originally published in The Washington Post's Tech 202 Newsletter.

OpenAI’s rules can be ‘easily’ dodged to target Latinos, study warns

By Cristiano Lima

In January, OpenAI unveiled revamped policies aimed at preventing its tools from being used to spread disinformation ahead of the 2024 elections, including by blocking people from building chatbots “for political campaigning and lobbying.”

But a new study released Thursday argues that the rules can be “easily” bypassed “to maliciously target minority and marginalized communities in the U.S. with misleading content and political propaganda,” communities such as Latinos and Spanish speakers.

The findings, researchers say, highlight key enforcement gaps in OpenAI’s rules that could have big implications for underrepresented and non-English-speaking communities during this year’s elections.

The Digital Democracy Institute of the Americas, a research unit that examines how Latinos navigate the internet, ran several tests asking OpenAI’s ChatGPT tool to help create a chatbot for campaigning purposes, including “to interact in Spanish” and to “target” Latino voters.

While all of the prompts “should not have generated responses” under OpenAI’s rules, researchers wrote, the tests “resulted in detailed instructions from GPT-4.” 

“Targeting a chatbot to Latino voters in the U.S. requires a nuanced approach that respects cultural diversity, language preferences and specific issues of importance to the Latino community. Here’s how you could tailor the chatbot for maximum effectiveness,” one reply read.

Roberta Braga, the group’s founder and executive director, told me that the results show the company’s safeguards “were super easily circumvented,” even when researchers “were not hiding the intent” of targeting campaigns at Latinos.

The report draws a parallel between those findings and broader efforts to crack down on misinformation online. Lawmakers and advocacy groups have long accused tech companies, particularly social media companies, of underinvesting in the resources needed to adequately enforce their rules in non-English languages.

The study shows that with AI tools, too, the rules are “not yet being applied consistently or symmetrically across countries, contexts, or in non-English languages,” researchers wrote.

The report did note that “OpenAI’s terms of service are more advanced in addressing misuse and the spread of disinformation than those of most companies bringing generative AI products to market this year.”

Researchers also tested OpenAI’s image-generation tool, DALL-E, which under company rules cannot be used to create visuals of “real people, including candidates.”

While the tests did not bypass those guardrails, researchers were able to generate images of politicians holding up the “okay” hand gesture, which groups such as the Anti-Defamation League consider a hate symbol due to its ties to white-supremacist organizations.

“For us, it showed that the tool can't detect the nuance, so even though the terms are in place, this tool can very much be used to make strong political statements,” Braga said.

OpenAI spokeswoman Liz Bourgeois said in a statement that the “findings in this report appear to stem from a misunderstanding of the tools and policies we’ve put in place.”

Bourgeois said the company allows people to use its products as a resource for political advocacy and that providing instructions on how to build a chatbot for a campaign does not violate its policies.

But Braga, who previously worked at the Atlantic Council think tank, stressed that OpenAI’s tools still “offered guidance on how to define intent, create conversational flows, craft responses, integrate feedback loops” and on how to program and configure chatbots targeting Latinos.