Since the release of ChatGPT in late 2022, the world has been abuzz with discussion about the risks associated with generative AI, a type of artificial intelligence technology that can produce text, imagery, audio, and synthetic data. While empirical evidence of AI-generated misinformation and disinformation being used to maliciously target Latinos in the US is limited, these systems have only a limited grasp of reality and can be steered by their users toward particular outputs. Together, those traits create opportunities for bad actors to produce and disseminate content not rooted in fact, and to do so in the non-English languages widely used by Latino communities in the United States.

In this article, DDIA uses the cases of ChatGPT and of AI image and video generation software, including outputs from the firms Midjourney and Synthesia, to highlight areas of potential risk for Latinos where oversight and monitoring may have a positive impact. Recognizing that generative AI also holds great promise for positive societal impact, such as improving accessibility and fostering creative expression, DDIA also highlights instances where these firms have improved their products' outputs and safeguards.

Having seen how break-it-first, fix-it-later technologies can amplify real-world harms, and given the rapid pace of innovation in this sector, we are at a critical juncture: now is the time to ensure that generative AI and similar technologies serve as instruments for enhancing communication, understanding, and progress, rather than becoming tools of division and misinformation.

As we move forward, it is crucial that AI developers, researchers, policymakers, community leaders, and educators steer the development and application of these powerful technologies in a direction that prioritizes truth, fairness, and the well-being of all communities. They can do so by working together to create and shape regulations, mitigation strategies, and public awareness initiatives that lower the risk of generative AI being used to shape public opinion against basic democratic ideals.


Understanding Generative AI

Three types of generative artificial intelligence make up the landscape examined here: large language models (LLMs), which produce text; image-generation AI; and video-generation AI.

LLMs, such as the models behind ChatGPT, are designed to understand and generate human-like text. They are trained on vast amounts of data from the internet, learning to predict the next word in a sentence given all the previous words. They operate on patterns and structures in language without any inherent understanding of the world, yet this is enough to produce realistic and contextually appropriate responses, from answering questions and writing essays to translating languages and even writing poetry. The same power also presents challenges, chief among them the dissemination of disinformation. Because these models can generate fluent content rapidly and at scale, and have no built-in notion of truth, they can produce false or misleading material just as easily as accurate material. This risk underscores the importance of carefully curating the data LLMs are trained on and of continually monitoring their output for harmful content.
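
To make "predicting the next word" concrete, the sketch below is illustrative only: it assumes the open-source Hugging Face transformers library and the small GPT-2 model rather than the proprietary models behind ChatGPT. It shows a language model scoring possible next words for a prompt. The candidates are ranked purely by statistical likelihood; nothing in the process checks them against facts.

```python
# A minimal sketch of next-word prediction, assuming the open-source
# Hugging Face "transformers" library and the small GPT-2 model.
# ChatGPT's underlying models are proprietary and far larger, but the
# core mechanism, scoring likely continuations, is the same in kind.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The election results were"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities over every possible next token; the top candidates are
# simply the most statistically likely words, not verified statements.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for token_id, prob in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```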

Generative image AI, such as the tools made by the company Midjourney, represents the forefront of technology aimed at creating photorealistic images, among other kinds of content. These AI models are trained on extensive datasets, learning to generate visually coherent images based on patterns and structures within the data. While they lack an inherent understanding of the real world, they excel at producing realistic and contextually fitting visuals, from landscapes and portraits to abstract compositions. However, the immense power of generative image AI also introduces challenges: because these systems can generate large volumes of convincing images rapidly, they make it easy to produce, amplify, and spread misleading or fabricated visuals.
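
As an illustration of how accessible this kind of image generation has become, the following sketch is illustrative only: it assumes the open-source diffusers library and the publicly released Stable Diffusion model rather than Midjourney's closed service. It turns a short text prompt into a photorealistic image, and nothing in the pipeline checks whether the depicted scene ever happened.

```python
# A minimal sketch of text-to-image generation, assuming the open-source
# "diffusers" library and the publicly available Stable Diffusion 2.1 model.
# Midjourney itself is a closed service accessed through Discord, but the
# underlying technique, diffusion-based image synthesis, is similar.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generation is practical only with a GPU

# A plausible-looking photograph is produced from nothing but a sentence.
prompt = "a press photograph of a crowded city square at dusk"
image = pipe(prompt).images[0]
image.save("generated_scene.png")
```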

Generative video technology, such as that developed by London-based Synthesia, represents the cutting edge of AI focused on creating highly realistic video content. These models undergo extensive training on vast libraries of video data, enabling them to generate credible videos that follow the patterns and structures they have learned. They excel at producing contextually relevant and convincingly authentic content, including avatar-led speeches scripted from user-supplied text. The technology manipulates the avatars' facial expressions to make them appear as human-like as possible. With a collection of more than one hundred avatars capable of speaking in one hundred and twenty languages, the potential applications are vast. But the same capability brings the risk of creating and disseminating videos that mimic authentic news broadcasts yet contain entirely fabricated content.

The combination of generative text, image, and video AI described here represents an emerging frontier in the propagation of disinformation. This poses substantial challenges in distinguishing truth from fiction and undermines public trust not only in digital and visual media but also in authoritative sources.

Disinformation through AI-Generated Text

A study published in April 2023 conducted a systematic evaluation of toxicity across more than half a million generations from ChatGPT. The researchers discovered that assigning a persona to ChatGPT could significantly increase the toxicity of its outputs and serve as a loophole around OpenAI's built-in protections. This manipulation led the model to propagate incorrect stereotypes, harmful dialogue, and potentially hurtful opinions, which could be defamatory toward the persona and harmful to unsuspecting users. Furthermore, the study identified patterns in which specific entities, such as certain races, were targeted more frequently regardless of the assigned persona, reflecting discriminatory biases inherent in the model. Similar workarounds became popularly known as ChatGPT's "DAN Mode" ("Do Anything Now"); strategies for invoking it were widely shared on platforms such as Reddit.

Screenshot from the Reddit group “r/ChatGPT” discussing ChatGPT’s “DAN Mode”

It appears that, as of this writing, OpenAI has removed users' ability to force ChatGPT into DAN mode. Users who attempt to do so are met with the following message:

Screenshot of ChatGPT refusing to enter ‘DAN’ mode

While this is an improvement, ChatGPT can still be used to rapidly write articles that misrepresent events or fabricate facts. For example, one could copy content from a preexisting news article, such as this one.

Screenshot of original prompt.

After doing so, users can ask for a new article based on that content, a request ChatGPT will faithfully comply with, even fabricating thoughts and feelings for the people quoted in the short excerpt.

Screenshots of ChatGPT’s output based on the original prompt.

ChatGPT can then be prompted to rewrite this content in Spanish.

Screenshot of rewrite in Spanish based on the prompt.

Using this strategy, content built around similar themes but varying slightly in wording can be pushed across platforms, helping harmful narratives spread while potentially evading detection and platform enforcement actions.

Disinformation through AI-Generated Images

Midjourney is a generative image research lab based in San Francisco. One of the most widely shared fake images produced with its tools depicted the Pope wearing a puffy winter jacket. In response to concern about the dissemination of fake images, a tendency the Washington Post described as becoming "mainstream," Midjourney ended users' ability to sign up for free trials. However, a paid subscription remains relatively accessible: for $10 a month, users can generate hundreds of images. Using a set of relatively simple prompts, users can create images of popular and easily recognizable figures in fabricated situations. These images, paired with misleading text, can be used to spread false narratives.

An image generated by Midjourney depicting President Biden burning a book.

An image generated by Midjourney depicting President Biden handling and inspecting chemicals in a lab.

An image generated by Midjourney depicting children lining up in a detention facility.

Disinformation through AI-Generated Video

Earlier this year, researchers uncovered evidence that the Maduro regime in Venezuela had used AI-generated avatars created by the British company Synthesia to misrepresent and exaggerate information about the country's tourism sector. According to reporting by the Financial Times and El País, another narrative falsely claimed that Venezuela's interim government had been implicated in the mismanagement of $152 million in government funds. These narratives, among many others, were created to push a positive impression of Nicolás Maduro's socialist government. The Spanish-language content produced with Synthesia's software went viral on YouTube and TikTok, accumulating hundreds of thousands of views that were boosted through paid advertising on both platforms. The technology is easily accessible: anyone with an internet connection can sign up for a free trial.

Video still from: HOUSE OF NEWS ESPAÑOL, produced by the Venezuelan government 

The Risks Associated with Generative AI

In the United States, Latinos, like many voters, are increasingly targeted by and exposed to false, misleading, and harmful information online; this information is at once a symptom, a byproduct, and a tool of a growing global crisis of trust. It has affected free and fair democratic discourse and sowed doubt in political leaders, the electoral system, and US institutions.

Disinformation leverages human psychology and societal polarization. It can lead to social division, fostering conflict by amplifying controversial viewpoints and stirring up fear or hostility. It contributes to a misinformed public, since the spread of false information hinders individuals' ability to make informed decisions about issues ranging from health to politics. It can create societal turbulence by eroding trust both vertically, toward authorities and public institutions, and horizontally, toward other socio-political groups with which targeted audiences may have historical or present disagreements. Lastly, it manipulates perceptions and attitudes, subtly influencing people's beliefs and behaviors to align with the goals of those spreading it.

The combination of LLMs' capacity to produce a high volume of content and their inability to discern truth from falsity makes them potent potential conduits for disinformation. Several issues heighten the risk that LLMs will be used to spread disinformation:

1. The sophistication of these models makes it increasingly difficult to distinguish machine-generated content from human-authored content. This blurring of lines raises concerns about the veracity of information consumed online, as false narratives generated by an LLM could be mistaken for human-authored, and therefore potentially credible, information. More worrying, there is currently no reliable method of detecting such content.

2. LLMs may unintentionally propagate biases present in their training data. This could lead to the dissemination of stereotypical or harmful narratives about specific groups, further contributing to social division and misperceptions.

3. LLMs lack the capacity to verify information in real time. They generate content based on patterns they have learned, not on up-to-date or fact-checked data, which means they can perpetuate outdated or already-debunked falsehoods.

4. Though proprietary LLMs like OpenAI's ChatGPT have content filters that make it difficult to generate harmful outputs, open-source LLMs like MosaicML's MPT, Meta's LLaMA, or the Technology Innovation Institute's Falcon can be downloaded and "fine-tuned" to generate toxic text or misinformation, bypassing the more rigorous content filtering of the proprietary models.

Recent Developments

In 2015, Sam Altman, CEO of OpenAI, the company behind ChatGPT, wrote on his personal blog, "In an ideal world, regulation would slow down the bad guys and speed up the good guys." This sentiment has recently been echoed across the industry. In March, a group of technology figures, including Elon Musk and Apple cofounder Steve Wozniak, signed an open letter organized by the Future of Life Institute warning that powerful AI systems should be developed only once there is confidence that their effects will be positive and their risks manageable. The letter called for a six-month pause in training AI systems more powerful than GPT-4, highlighting the potential for these systems to be used to spread disinformation on a massive scale.

Additionally, Gary Marcus, a machine learning specialist and professor emeritus of psychology and neural science at New York University, put his concerns bluntly in recent testimony before Congress. "Fundamentally, these new systems are going to be destabilizing," he told lawmakers. "They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened." This sentiment is echoed in the White House's "Blueprint for an AI Bill of Rights," which focuses on the challenges new AI technologies may bring to various communities within our democratic system.

Conclusion

Text-, image-, and video-based generative AI systems can swiftly create and amplify multilingual content that is false or misleading, and such narratives and visuals are becoming increasingly difficult to distinguish from factual information.

The gravity of the matter deepens when we consider the tangible repercussions for people's ability to make informed decisions, and in turn for the core tenets of American democracy and the welfare of the communities it serves.

The case studies and analysis of generative AI presented in this report underscore the necessity of oversight and monitoring. Recommendations for that oversight should reflect the perspectives of the many stakeholders committed to safeguarding the integrity of our information ecosystem and, by extension, the safety and security of the Latino community within our democratic spaces.
