Medical Misinformation More Likely to Fool AI if Source Appears Legitimate, Study Shows
A recent study highlights that artificial intelligence (AI) tools are more likely to share incorrect medical information when it appears to come from authoritative sources, such as doctors’ notes, compared with social media content. Researchers tested multiple AI models using clinical scenarios, hospital discharge summaries with inserted errors, and common health myths.
The findings show that the AI models accepted and repeated nearly half of the fabricated recommendations embedded in realistic medical notes, but propagated only 9% of the misinformation presented as social media content. The study underscores the need for built-in safeguards that verify medical claims in AI systems, especially as AI becomes increasingly integrated into patient care and clinical workflows. Read more from Reuters.