As discussed in our previous posts, understanding the impact of misinformation on U.S. Latinos is nuanced and complex. Misinformation prevalence and acceptance among Latino communities are not as high as headlines would lead us to believe; uncertainty about false narratives dominates the group consciousness. Latinos who see and believe a lot of false claims are a minority of the community, outnumbered by the uncertain and by those who actively reject false claims.

Still, it is important to recognize that information environments are far from healthy. Developing interventions that pull people out of webs of confusion, or that keep Latinos who are uncertain about misinformation from embracing falsehoods, is essential.

Academic literature on misinformation has identified several interventions that reduce what scholars call “susceptibility to misinformation” and strengthen the capacity for individuals to distinguish between accurate, misleading, and false claims. These interventions range from “accuracy nudges,” which encourage people to be mindful of whether social media posts are true or not, to “prebunking” methods, which educate people about common features of misinformation or tactics used by bad actors (e.g., the use of emotional language), to more direct attacks on false claims such as “debunking,” or fact-checking.

The Digital Democracy Institute of the Americas (DDIA) studied two of these approaches across seven different studies using the gold standard of randomized controlled trials to identify causal effects. Our main takeaways are the following:

  1. Debunking and Prebunking are Both Effective: Both debunking and prebunking methods are effective in moving misinformation-related outcomes among Latinos. Debunking improved people’s factual accuracy for a variety of claims, whereas prebunking enhanced Latinos’ confidence in being able to detect misinformation.

  2. Variation in Effectiveness: Not surprisingly, the interventions' effects are not uniformly strong across all misinformation types or among all subgroups within Latino communities. Both interventions were slightly less effective for more conservative voters, but this difference was not statistically significant. 

  3. No Backfire Detected: Importantly, neither debunking nor prebunking led to a "backfire" effect, where correcting misinformation led people to believe more false content or made people less confident in their ability to detect misinformation. This held true even among subgroups who were more likely to see and believe misinformation. 

  4. Room for Improvement: While debunking and prebunking interventions are promising, more research should explore tailored interventions and consider the impacts on more downstream outcomes such as sharing misinformation and voting. We intend to conduct such research.

Debunking and Prebunking: Powerful Tools in the Arsenal

Of the different types of interventions that have been tested, debunking and prebunking methods have both emerged as especially promising. Debunking is a reactive technique for countering misinformation that aims to reduce beliefs in specific claims or misconceptions. 

Debunking involves summarizing viral misinformation, highlighting factual or logical inconsistencies, and reinforcing more accurate beliefs. Debunking methods are commonly observed in the work of fact-checking organizations such as PolitiFact and Factchequeado, which provide corrective information about disputed political claims to the public.

Prebunking, on the other hand, takes a more proactive approach. Also described as inoculation, due to its similarities with the logic of vaccination, prebunking exposes individuals to a weakened form of a misinformation campaign by stressing common manipulation techniques used by bad actors, for example: emotional language, false dichotomies, fearmongering, scapegoating, and incoherence.

By educating people on how bad actors attempt to change minds, inoculation theorists argue that individuals can develop “antibodies” against misinformation. This strategy has the benefit of guarding people against misinformation that they have yet to see. 

Both interventions have received support in the literature. Scholars have found that debunking via fact-checking generally has positive effects on beliefs, even among groups that may have less experience with the format, such as Latinos.

On the topic of prebunking, scholars have found evidence that such strategies increase individuals’ ability to spot disinformation techniques, discern trustworthy from untrustworthy content, and improve the quality of their sharing decisions.

Debunking and prebunking each have their own limitations. Studies find that the effects of fact-checks often decay over time, whereas existing studies show a remarkably high level of persistence for prebunking. Moreover, prebunking methods are "general" in that they target a set of common strategies or tactics that hold across misinformation campaigns. If beliefs in specific claims are deeply entrenched, however, a more targeted approach like debunking may be more effective, as may proactive, persuasive strategic communications.

As the previous posts have discussed, levels of misinformation adoption vary significantly among Latinos. Evidence suggests that more partisan, politically involved Latinos are more likely to believe misinformation. Therefore, a natural question is whether interventions that aim to address false beliefs also work among those most likely to be at risk. 

Using content from Factchequeado and the Cambridge Social Decision-Making Lab, Equis carried out a number of randomized controlled trials (RCTs) in 2022 and 2023 with the polling firm Swayable to assess the effectiveness of these methods within Latino samples.

In all studies, participants were first asked about relevant outcomes (e.g., factual accuracy, confidence in spotting disinformation techniques) before being randomly assigned to various conditions. Subsequent to randomization, outcomes were measured again.

The key outcomes were self-reported confidence in being able to identify disinformation tactics and factual accuracy in the prebunking and debunking tests, respectively. Self-reported disinformation tactic discernment was measured on a 1-5 scale, with higher scores indicating higher levels of confidence. Belief accuracy was measured on a 1-5 scale. Higher scores denote higher factual accuracy with respect to the specific claim that is debunked. 

Debunking (Generally) Improves Belief Accuracy

Five different experiments involving over 9,600 Latinos were conducted to assess the effectiveness of debunking on factual accuracy. Participants were randomly assigned to either a fact-checking message produced by Factchequeado or an unrelated control message. The five messages spanned salient pieces of misinformation at the time, including claims that (1) a new law would allow men to enter girls' restrooms, (2) only permanent residents could possess a REAL ID, (3) sharks were found swimming on the street in Florida, (4) electoral ballots of undocumented immigrants were discovered, and (5) the IRS taxes the middle class more.

For each of these tests, participants reported their belief in these claims before being randomly assigned to different conditions, then reported their beliefs once more. Overall, average treatment effects for debunking via fact-checks are positive and moderate in size.

To compare across studies, we use effect sizes. Effect sizes are a standardized method of comparing how those exposed to an intervention (the treatment group) fare relative to those who are unexposed (the control group). An effect size of .15 (or 15% of a standard deviation) is considered to be small, whereas effect sizes of .36 and .65 are considered to be medium-sized and large, respectively. Effect sizes are a common metric used in meta-analyses where data from many studies are combined to produce a general picture of how well an intervention or treatment works. 
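To make the effect-size arithmetic concrete, here is a minimal sketch of how a standardized mean difference (Cohen's d, the standard effect-size measure) is computed. The data below are simulated for illustration, not the study's actual responses:

```python
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    nt, nc = len(treatment), len(control)
    # Pool the sample variances of the two groups, weighted by df
    pooled_var = (((nt - 1) * np.var(treatment, ddof=1)
                   + (nc - 1) * np.var(control, ddof=1))
                  / (nt + nc - 2))
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

# Hypothetical 1-5 belief-accuracy scores for two randomized groups
rng = np.random.default_rng(0)
control = rng.normal(3.0, 1.0, 500)   # unexposed (control) group
treated = rng.normal(3.3, 1.0, 500)   # fact-check group, shifted upward
print(round(cohens_d(treated, control), 2))
```

Because both groups share a standard deviation of 1, the simulated difference of about .3 units translates into an effect size in the "small to medium" range described above, comparable to the fact-check effects reported below.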

Focusing first on the fact-checks, effect sizes for those who were assigned to a fact-check were .32, .28, and .26 on beliefs about bathroom policies, REAL ID, and sharks swimming in highways, respectively. These reflect small to medium-sized effects. To put these estimates into perspective, the average gap in beliefs between men and women in effect sizes is .08. Thus, the effects of the fact-check about the false claim that "a new law would allow men to enter girls' restrooms" are four times greater than the average gap in beliefs between men and women. Shifts in beliefs were positive for the claims regarding ballots and the IRS, but these estimates were much smaller (.05) and not statistically significant. Overall, the findings suggest that fact-checks improve accuracy across the board, but more politically contentious fact-checks may be less effective.

Inoculation (Generally) Improves Confidence in Detecting Misinformation

Two distinct tests of inoculation were carried out. The first was a message that bundled a set of digital literacy skills with inoculation messages. The treatment encourages participants to be discerning by evaluating the motive behind posts and the use of “emotional language” (i.e., an inoculation strategy), verifying content against credible sources, and conducting reverse image searches for attached media. 

The second harnessed research conducted by scholars at the Cambridge Social Decision-Making Lab. The inoculation interventions were provided in the form of three videos: one video stressing the use of “emotional language” and fear to manipulate people; another highlighting the apparent incoherence of disinformation messages; and finally, the use of false dichotomies (e.g., “either you are with us or the terrorists.”). The second test compared these three videos against a placebo video. 

Turning to the findings from the first test, the "digital literacy plus" message shifted self-reported confidence in being able to detect misinformation by about .10 units, relative to the placebo group. This treatment effect is statistically significant. To put this in context, the gap between college and non-college-educated Latinos in effect sizes is .15, so the effect of this intervention is equivalent to closing the education gap on disinformation detection confidence by 67%.

Focusing on the second test, all of the videos moved self-reported confidence in a positive direction. However, only the “emotional language” intervention was significant, shifting confidence by about .09 units. This is comparable to the “education gap” within this particular sample (.10). Effects of the other interventions are about half the magnitude of the “emotional language” video, and do not attain conventional levels of statistical significance. Overall, inoculation worked, but effects were generally small and certain variations (“emotional language”) were more powerful than others.

Figure 1. Average treatment effects for the fact-check and inoculation interventions. Models adjust for pre-treatment outcome measures and estimate HC2 clustered standard errors.
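The estimation strategy in the caption (regressing the post-treatment outcome on treatment assignment while adjusting for the pre-treatment measure, with HC2 heteroskedasticity-robust standard errors) can be sketched as follows. The data and the .10 "true" effect are simulated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
pre = rng.normal(3.0, 1.0, n)                  # pre-treatment outcome (1-5 scale)
treat = rng.integers(0, 2, n).astype(float)    # random assignment (0 = control, 1 = treated)
post = 0.6 * pre + 0.10 * treat + rng.normal(0, 0.8, n)  # simulated true effect of .10

# Design matrix: intercept, treatment indicator, pre-treatment score
X = np.column_stack([np.ones(n), treat, pre])
beta = np.linalg.lstsq(X, post, rcond=None)[0]  # OLS coefficients
resid = post - X @ beta

# HC2 robust covariance: (X'X)^-1 X' diag(e_i^2 / (1 - h_ii)) X (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
h = np.sum((X @ XtX_inv) * X, axis=1)          # leverage values h_ii
meat = X.T @ (X * (resid**2 / (1 - h))[:, None])
cov = XtX_inv @ meat @ XtX_inv

effect, se = beta[1], np.sqrt(cov[1, 1])
print(f"treatment effect = {effect:.2f} (SE {se:.2f})")
```

Adjusting for the pre-treatment score soaks up baseline variation in beliefs, which tightens the standard error on the treatment coefficient without biasing it under random assignment.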

Do Interventions Help Those Who Need Them Most?

Key takeaways from this section

  • Across different subgroups who may be prone to seeing and believing misinformation, popular interventions like fact-checking and inoculation work.

  • But, effects tend to be small. Don’t expect fact-checking or inoculation to have dramatic effects on behaviors.

Our existing tests tell us how people move, on average, when exposed to an intervention, but like any remedy, there are some people who will be helped more than others. An important question is whether tools like debunking and prebunking help Latinos who already believe misinformation or are at risk of adopting false claims, not just the average person.

Using subgroups that have been shown to possess higher levels of misinformation beliefs in our previous posts, we estimate what are called conditional average treatment effects (CATEs). These measures capture how different subgroups respond to interventions, so we can learn whether heavy social media users, for example, also become more factually accurate when exposed to fact-checks. 

Given the number of interventions, we use meta-analysis tools to generate an overall measure of how different subgroups respond to debunking and prebunking. For a given subgroup and study pair, CATEs are estimated separately, and meta-analysis is used to summarize the overall effects across studies. Figure 2 presents the CATEs for the fact-checking intervention.
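The pooling step can be sketched with a standard random-effects (DerSimonian-Laird) estimator: each study contributes a subgroup CATE and its standard error, and the pooled estimate weights studies by both their sampling variance and the between-study heterogeneity. The numbers below are hypothetical, not the study's estimates:

```python
import numpy as np

def random_effects_meta(effects, ses):
    """DerSimonian-Laird random-effects pooled estimate and standard error."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                              # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * effects) / np.sum(w)
    # Between-study heterogeneity (tau^2) from Cochran's Q statistic
    q = np.sum(w * (effects - fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_re = 1.0 / (ses**2 + tau2)                  # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re))

# Hypothetical per-study CATEs for one subgroup across five debunking tests
est, se = random_effects_meta([0.12, 0.08, 0.11, 0.05, 0.14],
                              [0.04, 0.05, 0.06, 0.05, 0.07])
```

The random-effects model is the natural choice here because the studies debunk different claims, so their "true" effects plausibly differ rather than sharing a single common value.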

As highlighted in our previous posts, consumption, partisanship, and ideology predict how likely a person is to adopt or believe misinformation. Therefore, the question is whether interventions such as debunking or inoculation positively move outcomes even among the groups most at risk for accepting misinformation. The CATEs reveal similarly sized positive effects of fact-checking on factual accuracy.  

Though confidence intervals hover around zero for some groups (possibly due to the smaller sample sizes), treatment effects are remarkably consistent, hovering around .10 units. The smallest effects (approximately .09) are observed among Trump voters and conservatives, whereas the largest effects are observed among Twitter and YouTube consumers (.14).

In no case do we detect significantly different CATEs. We also never observe backfire, or the process where correcting misinformation strengthens inaccurate beliefs.

Figure 2. Random effects meta-analytic conditional average treatment effect estimates. Models adjust for pre-treatment scores and use HC2 clustered standard errors. Meta-analytic estimates rely on all of the tests involving fact-checks.

Turning to inoculation, we see positive effects across subgroups who are seeing and believing misinformation most often. However, the effects of inoculation interventions differ among subgroups. Effects among Twitter consumers, Reddit consumers, and men are slightly greater than .10. In contrast, effect sizes are .01 among conservatives, .03 among Joe Biden voters, .03 among women, and .05 among Facebook consumers. Inoculation appears to be effective across the board, albeit with varying degrees of success.

Figure 3. Random effects meta-analytic conditional average treatment effect estimates. Models adjust for pre-treatment scores and use HC2 clustered standard errors. Meta-analytic estimates rely on all of the tests involving inoculation messages.

More To Be Done

The preceding analysis provides hopeful evidence that popular interventions developed in the misinformation literature can be deployed among Latinos, especially those who are most at risk for being exposed to and believing in misinformation. 

Fact-checks and inoculation messages that focus on tactics successfully move intended outcomes, such that Latinos become more factually accurate when exposed to debunking content from fact-checkers and more confident in their ability to discern misinformation when presented with messages highlighting common tactics of “disinformation agents.” Evidence suggests that these interventions are generally effective among high misinformation-adopting subgroups and do not backfire. 

Still, there is room for improvement. Outcomes such as belief accuracy and confidence are important in their own right, but stop short of outcomes that practitioners especially care about like vote choice, partisan identification, and online behaviors. As previous research suggests, fact-checks generally improve beliefs, but often have weak effects on attitudes. 

Positive shifts on self-reported measures of “disinformation detection confidence” are also promising, but they do not capture more downstream outcomes that we may be interested in. For example, do those who see the “emotional language” video become less likely to share or believe misinformation? Does the intervention have unintended consequences on believing credible information that features emotional appeals? These questions cannot be resolved using self-reported measures alone.

At a minimum, the existing tests indicate that these interventions deserve a careful look. The task ahead of us now is to identify ways of increasing their effectiveness. 

Can fact-checks from trusted sources magnify the positive effects observed here? Can more narrative formats that use storytelling generate stronger effects than the “facts first” approach? Can the use of more personalized, tailored messages outperform the generic strategies generally applied?

These questions call for more experiments and exploration to refine our approach to countering misinformation. A one-size-fits-all solution is unlikely to be the panacea for misinformation; rather, an adaptive and audience-specific approach that takes into account characteristics of subgroups and a nuanced understanding of values, identity, and media consumption habits may offer the most promising path forward.