NEWS: OpenAI Releases ‘Deepfake Detector’ to Disinformation Researchers, by Cade Metz and Tiffany Hsu, published in The New York Times (Available Here)

This week OpenAI announced it will release a new tool to assist in detecting images created by DALL-E, its image generation software. The company said the tool has a 99.8% detection success rate on DALL-E images, but cannot accurately detect images created by other software, such as tools from Stability AI and Midjourney. OpenAI further said it is developing methods, through its work with tech partners in the C2PA coalition, to add credentials to digitally made content to help users understand when and how something was produced.


ANALYSIS: The Promise of Health Chatbots Has Already Failed, by Derek Beres, published in Mother Jones (Available Here)

This analysis in Mother Jones discusses the dangers of health misinformation spread by chatbots and AI-generated content. Examples discussed include chatbots pointing users to unreliable sources and failing to distinguish between real medical qualifications and alternative medicine certifications, which can lead people to delay genuine medical treatment. This analysis supports the conclusions of our recently released report, “De(generating) Democracy?: A Look at the Manipulation of AI Tools to Target Latino Communities Online,” which can be found here


RESEARCH: Hoaxpedia: A Unified Wikipedia Hoax Articles Dataset, by Hsuvas Borkakoty and Luis Espinosa-Anke, published on arXiv (Available Here)

Hoaxes are a recognised form of deliberately created disinformation, with potentially serious implications for the credibility of reference knowledge resources such as Wikipedia. What makes Wikipedia hoaxes hard to detect is that they are often written according to the site’s official style guidelines. Two researchers from Cardiff University introduce HOAXPEDIA, a dataset of Wikipedia hoax articles designed to support research on detecting hoax content on the platform.