Misinformation NewsFeed

Meta's Cicero Exposes Users' Intent for Sale

A new study has raised concerns about the potential misuse of anthropomorphic AI agents, such as chatbots and digital assistants, which have access to vast amounts of intimate psychological and behavioral data. The researchers cite Meta's AI model Cicero, which can infer and predict human intent in conversation, as evidence that companies like Meta could influence users' decisions by auctioning their intent to advertisers. Dr. Yaqub Chaudhary of the Leverhulme Centre for the Future of Intelligence emphasized that these AI assistants may serve the interests of companies rather than individuals, raising concerns about data privacy and manipulation. The findings have sparked worry among internet users, who share more personal information with AI tools than they would in an ordinary Google search, leaving them vulnerable to manipulation by persuasive AI-powered advertising.

https://www.ndtv.com/science/rise-of-intention-economy-ai-tools-to-manipulate-you-into-making-decisions-study-finds-7363948

Cambridge Researchers Warn of AI's Influence

Researchers at the University of Cambridge have warned that conversational AI tools may soon influence users' decision-making in a new commercial frontier called the "intention economy". This emerging marketplace could affect many aspects of life, from buying movie tickets to voting for political candidates. The researchers argue that the trend is driven by growing familiarity with chatbots and other anthropomorphic AI agents, which are being used to develop persuasive technologies. According to co-author Yaqub Chaudhary, AI tools are being developed to elicit, infer, collect, record, understand, forecast, and manipulate human plans and purposes. These tools will rely on large language models (LLMs) to target users based on their cadence, politics, vocabulary, age, gender, online history, and susceptibility to flattery and ingratiation. Co-author Jonnie Penn warns that unless regulated, the intention economy will treat motivations as a currency, leading to a "gold rush" for those who target, steer, and sell human intentions.

https://www.hurriyetdailynews.com/uk-study-warns-of-perils-in-ai-driven-intention-economy-204158

Cambridge Researchers Warn of AI's Swaying Influence

Researchers at the University of Cambridge have warned that conversational artificial intelligence (AI) tools may soon be used to subtly influence users' decisions in a new market known as the "intention economy". This emerging marketplace, potentially lucrative but ethically fraught, would use digital signals of intent to sway people's choices on everyday matters such as buying movie tickets or voting for political candidates. The researchers attribute the trend to growing familiarity with chatbots and other AI agents, which are increasingly being used in education and other areas.

https://www.tbsnews.net/world/uk-study-warns-perils-ai-driven-intention-economy-1030171

Researchers Expose Dark Side of ChatGPT Search

Researchers have discovered ways to manipulate AI search engines, including changing writing styles to make claims more persuasive, adding keywords drawn from the search query, and replacing interpretative content with statistics. ChatGPT Search, which is built on Bing, also runs its own crawler to fetch real-time information and presumably draws on sites from Bing's search index. Perplexity, a company offering a competing AI search engine, uses a modified version of PageRank to identify trustworthy web pages for its users.

https://www.searchenginejournal.com/chatgpt-search-manipulated-with-hidden-instructions/536390/

Joey Malinski's Fake UFO Video Exposed

Joey Malinski, the director of Maryland-based digital company ATB Productions, used artificial intelligence to create a viral video purporting to show UFOs over Maryland. The video was posted to TikTok and YouTube, where Malinski took credit for its creation. He designed the clip with an AI tool called Sora and said he was motivated by the government's lack of information about the drone sightings. Malinski mimicked existing drone videos circulating online to make the clip appear authentic, but intentionally added telltale giveaways, such as a "circular" UFO design and X-Files theme music. Analysis with the video and photo forensics tools InVid and RevEye confirmed the clip was AI-generated.

https://www.khou.com/article/news/verify/ai/viral-video-of-ufos-in-maryland-is-not-real/536-6f7c78ad-9759-46e0-8c1c-840f23522627

Fake News Alert Causes Frenzy Among Steve Harvey Fans

A fake news alert sent to Steve Harvey fans on the NewsBreak app caused a frenzy after it reported that the television host had died. The article, which was later found to be AI-generated, stated that "the world has lost a remarkable figure in the entertainment industry as Steve Harvey passed away." However, further investigation revealed that the report was false and Steve Harvey is alive and well.

https://www.dailymail.co.uk/news/article-14220335/Steve-Harvey-fake-news-alert-death.html

FTC Sues Rytr Over Fake Amazon Reviews

The Federal Trade Commission (FTC) has sued the company behind an AI content generator called Rytr, accusing it of offering a service that produced fake reviews for various businesses. The FTC claims some users employed the tool to generate hundreds or even thousands of reviews on platforms like Amazon and Yelp. AI detection companies such as Pangram Labs have found that some AI-generated reviews rise to the top of search results because of their detailed, well-composed content. Determining what is fake remains challenging, however, as outside parties lack access to the data signals that indicate patterns of abuse. Tech companies like Amazon and Yelp are developing policies for handling AI-generated content, with some allowing customers to post AI-assisted reviews while others take a more cautious approach. The Coalition for Trusted Reviews, which includes prominent tech companies, aims to push back against fake reviews by sharing best practices and developing advanced AI detection systems.

https://thestar.com/news/world/united-states/the-internet-is-rife-with-fake-reviews-will-ai-make-it-worse/article_edc30c17-ed25-50dd-b528-3fb73eb0187d.html

Yelp Battles Fake Reviews with AI Detectors

Yelp has implemented measures to detect AI-generated reviews, citing the rise in consumer adoption of AI tools. The Coalition for Trusted Reviews, a group that includes Amazon, Trustpilot, and TripAdvisor, views AI as an opportunity to combat fake reviews. Experts say, however, that tech companies like Yelp, Amazon, and Google are not doing enough to eliminate review fraud, despite blocking or removing suspect reviews and accounts. Consumers can watch for warning signs of AI-generated reviews, such as overly enthusiastic or negative language, jargon that repeats a product's full name, and generic phrases, though research has shown that people cannot reliably distinguish AI-generated from human-written reviews, and even automated detectors can be fooled by shorter texts.

https://apnews.com/article/fake-online-reviews-generative-ai-40f5000346b1894a778434ba295a0496

Apple's AI Tool Misinforms on Netanyahu Arrest Warrant

Reporters Without Borders is calling on Apple to remove its generative AI tool, which has been incorrectly summarizing news articles. The organization's head of technology and journalism, Vincent Berthier, stated that the tool's probabilistic nature makes it unreliable for producing information about current events. Since its launch in June, users have reported errors, including a false summary of a New York Times story claiming Israeli Prime Minister Benjamin Netanyahu had been arrested, when in fact the International Criminal Court had only issued an arrest warrant.

https://en.tempo.co/read/1955626/apples-ai-feature-criticized-for-misleading-headlines