The Federal Trade Commission (FTC) has sued the company behind an AI content generator called Rytr, accusing it of enabling fake reviews for various businesses. The FTC claims some subscribers used the tool to generate hundreds or even thousands of reviews on platforms like Amazon and Yelp. AI detection companies like Pangram Labs have found that some AI-generated reviews appear at the top of search results because their content is detailed and appears well thought out. However, determining which reviews are fake can be difficult, since outside parties lack access to the data signals that reveal patterns of abuse. Tech companies like Amazon and Yelp are developing policies for handling AI-generated content, with some allowing customers to post AI-assisted reviews while others take a more cautious approach. The Coalition for Trusted Reviews, which includes prominent tech companies, aims to push back against fake reviews by sharing best practices and developing advanced AI detection systems.
https://thestar.com/news/world/united-states/the-internet-is-rife-with-fake-reviews-will-ai-make-it-worse/article_edc30c17-ed25-50dd-b528-3fb73eb0187d.html

Yelp has implemented measures to detect AI-generated reviews, citing the rise in consumer adoption of AI tools. The Coalition for Trusted Reviews, a group including Amazon, Trustpilot, and TripAdvisor, views AI as an opportunity to combat fake reviews. However, experts say tech companies like Yelp, Amazon, and Google are not doing enough to eliminate review fraud, despite blocking or removing suspect reviews and accounts. Consumers can spot fake AI-generated reviews by looking out for overly enthusiastic or negative language, jargon that repeats a product's name, and generic phrases. Research has shown that people cannot reliably distinguish between AI-generated and human-written reviews, and some AI detectors can be fooled by shorter texts.
https://apnews.com/article/fake-online-reviews-generative-ai-40f5000346b1894a778434ba295a0496

Reporters Without Borders is calling on Apple to remove its generative AI tool, which has been incorrectly summarizing news articles. The organization's head of technology and journalism, Vincent Berthier, stated that the tool's probabilistic nature makes it unreliable for producing information about current events. Since its launch in June, users have reported errors, including a false summary of a New York Times story claiming Israeli Prime Minister Benjamin Netanyahu had been arrested, when in fact an arrest warrant had been issued by the International Criminal Court.
https://en.tempo.co/read/1955626/apples-ai-feature-criticized-for-misleading-headlines