Misinformation NewsFeed

Google's AI Summaries Spark Media Backlash Fears

Google's AI-generated summaries have faced criticism from the News Media Alliance, which warns that they will cause "catastrophic" damage to cash-strapped publishers and content creators. The feature has also been shown to produce bizarre responses, such as suggesting users eat rocks or add glue to their pizza sauce.

https://nypost.com/2025/02/25/business/chegg-accuses-google-of-using-ai-to-crush-traffic-revenue-in-antitrust-lawsuit-harmful-and-unsustainable/

Sydney Scientists Foil Bots with IllusionCAPTCHA Test

Scientists at the University of New South Wales in Sydney, Australia, have developed a new type of CAPTCHA test called "IllusionCAPTCHA" that uses AI-generated optical illusions to verify whether users are human or bots. The test, created by Yuekang Li and his team, leverages generative AI with deliberately misleading prompts to create illusion images, such as an apple blended into a cityscape. In a study of 10 human participants, people passed 83% of the time on hidden-text variants and 88% on illusory-image variants, while advanced AI models such as GPT and Gemini failed every attempt.

https://petapixel.com/2025/02/25/this-ai-generated-optical-illusion-test-could-replace-captchas/

Facebook Fails to Stop Fake AI-Generated Images

AI-generated content is becoming increasingly sophisticated, making it harder to distinguish real images from fake ones. Social media platforms like Facebook have been criticized for boosting such posts, which can fool even tech-savvy users like the article's author. Particularly puzzling is how often some older adults, the so-called Boomers, miss obvious red flags in manipulated content, such as images that defy physics or logic.

https://www.dailymail.co.uk/news/article-14414225/baby-boomers-fooled-artificial-intelligence-fake-images.html

57% of US Adults Hold AI Providers Liable for Inaccuracy

Pearl.com has released its inaugural AI Accountability & Trust Report, which reveals that 57% of U.S. adults hold AI platforms legally responsible for inaccuracy, with 39% saying they would consider suing an AI provider that gives harmful or incorrect information. The study, conducted by Censuswide, surveyed over 2,000 Americans nationwide, highlighting the growing demand for trust and accuracy in AI.

https://www.prnewswire.com/news-releases/americans-are-ready-to-sue-ai-new-pearl-study-finds-39-willing-to-sue-for-mistakes-302379524.html

Judge Fines Lawyers $5,000 for AI-Generated Cases

A federal judge in Manhattan fined two New York lawyers $5,000 for citing AI-generated cases in a personal injury lawsuit against an airline. In another case, a judge called it "embarrassing" that former lawyer Michael Cohen had submitted fake citations produced by Google's AI chatbot Bard. A Texas federal judge ordered a lawyer to pay a $2,000 penalty and attend an AI course for citing nonexistent cases in a wrongful termination lawsuit. A Minnesota federal judge found that a misinformation expert had damaged his own credibility by including fake AI-generated citations in a case involving a "deepfake" parody of Vice President Kamala Harris. Law professor Harry Surden recommends that lawyers learn the strengths and weaknesses of AI tools, arguing that the problem is not the technology itself but a lack of AI literacy among attorneys.

https://channelnewsasia.com/business/ai-hallucinations-court-papers-spell-trouble-lawyers-4945781

UK-US Study Reveals Deepfake Illiteracy Among Youth

A recent iProov study reveals widespread false confidence among younger people when it comes to distinguishing real content from AI-generated images and videos. Of 2,000 participants surveyed in the UK and US, only 0.1% — two people — correctly identified every real and deepfake stimulus, highlighting low deepfake literacy overall. While younger respondents were overconfident in their detection abilities, the study found that older adults are particularly vulnerable to AI-generated deception.

https://www.techradar.com/pro/in-a-test-2000-people-were-shown-deepfake-content-only-one-of-them-managed-to-get-a-perfect-score