ANALYSIS: AI Isn’t Our Election Safety Problem, Disinformation Is, by Paul Barrett and Justin Hendrix, published in Time Magazine (Available Here)

The article highlights the significant threat disinformation poses to U.S. elections, exacerbated by generative AI's ability to create convincing fake content and by social media platforms' rollback of election integrity measures. It argues that these platforms must take decisive action to combat misinformation and safeguard democracy at critical moments, such as during voting.


BOOK REVIEW: ‘Everything Is Possible’: A Worrying New Book Explores The Danger Of Disinformation, by David Smith, published in The Guardian (Available Here)

This article reviews "Attack from Within" by Barbara McQuade, a former national security prosecutor and University of Michigan Law School professor. According to the review, the book's main value lies in its comprehensive analysis of disinformation as a modern phenomenon. The book makes three critical contributions in this respect: identifying the root causes of disinformation, illuminating its perilous effects on democratically elected government and the American public, and offering practical solutions to tackle this prevalent issue.


RESEARCH: Emotional Manipulation Through Prompt Engineering Amplifies Disinformation Generation in AI Large Language Models, by Rasita Vinay, Giovanni Spitale, Nikola Biller-Andorno, and Federico Germani, published in arXiv (Available Here)

A new study from four Swiss researchers tests four versions of OpenAI's large language models for their ability to generate disinformation. It found that when users included positive emotional cues (e.g., asking politely), the success rate for generating disinformation was much higher than when using negative emotional cues. Most worryingly, the researchers achieved a 100% success rate when politely prompting for disinformation from OpenAI's most advanced model at the time (GPT-4).