Joey Malinski, director of the Maryland-based digital company ATB Productions, used artificial intelligence to create a viral video that purports to show UFOs over Maryland. The video was posted to TikTok and YouTube, where Malinski took credit for its creation. He designed the clip with an AI tool called Sora and said it was inspired by the lack of information the government has provided about drones. Malinski mimicked existing drone videos circulating online to make the clip look authentic, but deliberately included giveaways that mark it as fake, such as a "circular" UFO design and X-Files theme music. The video was analyzed with InVid and RevEye, video and photo forensics tools, which confirmed it was AI-generated.
https://www.khou.com/article/news/verify/ai/viral-video-of-ufos-in-maryland-is-not-real/536-6f7c78ad-9759-46e0-8c1c-840f23522627

A fake news alert sent to Steve Harvey fans on the NewsBreak app caused a frenzy after it reported that the television host had died. The article, which was later found to be AI-generated, stated that "the world has lost a remarkable figure in the entertainment industry as Steve Harvey passed away." However, further investigation revealed that the report was false and Steve Harvey is alive and well.
https://www.dailymail.co.uk/news/article-14220335/Steve-Harvey-fake-news-alert-death.html

The Federal Trade Commission (FTC) has sued the company behind Rytr, an AI content generator, accusing it of producing fake reviews for various businesses. The FTC claims some users employed the tool to generate hundreds or even thousands of reviews on platforms like Amazon and Yelp. AI detection companies such as Pangram Labs have found that some AI-generated reviews rank at the top of search results because their content is detailed and appears well thought out. Determining what is fake remains challenging, however, because external parties may lack access to the data signals that indicate patterns of abuse. Tech companies like Amazon and Yelp are developing policies for handling AI-generated content: some allow customers to post AI-assisted reviews, while others take a more cautious approach. The Coalition for Trusted Reviews, which includes prominent tech companies, aims to push back against fake reviews by sharing best practices and developing advanced AI detection systems.
https://thestar.com/news/world/united-states/the-internet-is-rife-with-fake-reviews-will-ai-make-it-worse/article_edc30c17-ed25-50dd-b528-3fb73eb0187d.html

Perplexity launched a publisher program this summer that has partnered with 20 publishers: Time, Der Spiegel, Fortune, Entrepreneur, and The Texas Tribune were among those that joined in July, and another 14 publishers joined in December. ProRata.ai also signed deals with several major media companies, including the Financial Times, Axel Springer, The Atlantic, and Fortune, to share subscription revenue. Additionally, Microsoft partnered with media companies such as Axel Springer, Informa, FT, Reuters, Hearst, and USA Today Network on AI-driven projects.
https://digiday.com/media/media-briefing-the-top-trends-in-the-media-industry-in-2024/

Yelp has implemented measures to detect AI-generated reviews, citing rising consumer adoption of AI tools. The Coalition for Trusted Reviews, a group that includes Amazon, Trustpilot, and TripAdvisor, views AI as an opportunity to combat fake reviews. Experts say, however, that tech companies like Yelp, Amazon, and Google are not doing enough to eliminate review fraud, despite blocking or removing suspect reviews and accounts. Consumers can watch for signs of fake AI-generated reviews, such as overly enthusiastic or overly negative language, jargon-like repetition of a product's name, and generic stock phrases, as illustrated in the sketch below. Research has shown that people cannot reliably distinguish between AI-generated and human-written reviews, and some AI detectors may be fooled by shorter texts.
https://apnews.com/article/fake-online-reviews-generative-ai-40f5000346b1894a778434ba295a0496
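As a rough illustration of those consumer heuristics, here is a minimal keyword-based check. It is a toy sketch, not the method used by Yelp, Amazon, or Pangram Labs; the cue lists, thresholds, and the fake_review_signals helper are all invented for this example.

```python
import re

# Toy cue lists for the three heuristics named in the AP summary above.
# These words, phrases, and thresholds are invented for illustration;
# real detection systems rely on far richer signals than keyword matching.
HYPE_WORDS = {"amazing", "incredible", "life-changing", "perfect",
              "worst", "terrible"}
GENERIC_PHRASES = ("highly recommend", "exceeded my expectations",
                   "game changer")

def fake_review_signals(text: str, product_name: str) -> list[str]:
    """Report which of the consumer heuristics a review trips."""
    lowered = text.lower()
    words = re.findall(r"[a-z'-]+", lowered)
    signals = []

    # 1. Overly enthusiastic or negative language.
    if sum(w in HYPE_WORDS for w in words) >= 3:
        signals.append("extreme language")

    # 2. Jargon-like repetition of the product's full name.
    if lowered.count(product_name.lower()) >= 3:
        signals.append("repeats product name")

    # 3. Generic stock phrases.
    if any(phrase in lowered for phrase in GENERIC_PHRASES):
        signals.append("generic phrasing")

    return signals

if __name__ == "__main__":
    review = ("The Acme X200 blender is amazing! The Acme X200 blender "
              "exceeded my expectations, and the Acme X200 blender is a "
              "perfect, life-changing game changer. Highly recommend!")
    print(fake_review_signals(review, "Acme X200 blender"))
    # -> ['extreme language', 'repeats product name', 'generic phrasing']
```

As the research cited above suggests, shallow text cues like these are easy to evade, which is one reason platforms lean on behavioral data signals rather than the review text alone.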
Reporters Without Borders is calling on Apple to remove its generative AI tool, which has been incorrectly summarizing news articles. The organization's head of technology and journalism, Vincent Berthier, said the tool's probabilistic nature makes it unreliable for producing information about current events. Since its launch in June, users have reported errors, including a false summary of a New York Times story claiming Israeli Prime Minister Benjamin Netanyahu had been arrested, when in fact the International Criminal Court had only issued an arrest warrant for him.
https://en.tempo.co/read/1955626/apples-ai-feature-criticized-for-misleading-headlines

News Corp's deal with OpenAI could distort the flow of information to the public because of a lack of diversity in the news sources used to train ChatGPT, experts warn. The chatbot draws on verified news sources, including News Corp publications such as The Australian and The Wall Street Journal, which have strong editorial leanings. This could exacerbate Australia's already concentrated news ecosystem, in which a small number of companies dominate the media landscape.
https://www.smh.com.au/business/companies/openai-s-deal-with-news-corp-could-distort-flow-to-public-experts-warn-20241219-p5kzqh.html