Becky Owen, a former head of Meta's creator team and now innovation officer at the creative agency Billion Dollar Boy, has warned that fake AI-generated social media accounts could be used to spread false information if platforms do not put adequate safeguards in place.
https://www.techspot.com/news/106138-meta-wants-fill-social-platforms-ai-generated-bots.html#commentsOffset

Researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence have warned about a new commercial frontier called the "intention economy", in which conversational AI tools may influence users' decision-making. This emerging sector, which combines knowledge of online habits with user profiling and Large Language Models (LLMs), could lead to social manipulation on an industrial scale if left unchecked. The researchers argue that companies will use this technology to target users based on their cadence, politics, vocabulary, age, gender, online history, and preferences for flattery and ingratiation. This could have significant implications for free and fair elections, a free press, and fair market competition, according to co-author Jonnie Penn, who notes that public awareness of the issue is key to preventing its unintended consequences.
https://www.brecorder.com/news/40340345/uk-study-warns-of-perils-in-ai-driven-intention-economy

The US Department of Justice alleged that a company called CGE used generative AI tools to create and disseminate disinformation through a network of fake news websites, attempting to conceal its Russian origin and create false corroboration between stories. The scheme began to unravel, however, when an individual involved came forward.
https://nypost.com/2024/12/31/us-news/biden-administration-sanctions-russian-group-that-allegedly-made-fake-tim-walz-sexual-assault-video/

A new study has raised concerns about the potential misuse of anthropomorphic AI agents, such as chatbots and digital assistants, which have access to vast amounts of intimate psychological and behavioral data. The research cited Meta's Cicero model, which can infer and predict human intent in conversation, as an example of how companies like Meta could influence users' decisions by auctioning off their intent to advertisers. Dr. Yaqub Chaudhary of the Leverhulme Centre for the Future of Intelligence emphasized that these AI assistants may serve the interests of companies rather than individuals, raising concerns about data privacy and manipulation. The findings have stoked worries that internet users, who share more personal information with AI chatbots than they would with a regular Google search, are vulnerable to manipulation by persuasive AI-powered advertising.
https://www.ndtv.com/science/rise-of-intention-economy-ai-tools-to-manipulate-you-into-making-decisions-study-finds-7363948

Researchers at the University of Cambridge have warned that conversational AI tools may soon influence users' decision-making in a new commercial frontier called the "intention economy". This emerging marketplace could impact various aspects of life, from buying movie tickets to voting for political candidates. The researchers argue that the trend is driven by growing familiarity with chatbots and other anthropomorphic AI agents, which are being used to develop persuasive technologies. According to co-author Yaqub Chaudhary, AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and manipulate human plans and purposes. These tools will rely on Large Language Models (LLMs) to target users based on their cadence, politics, vocabulary, age, gender, online history, and preferences for flattery and ingratiation. Co-author Jonnie Penn warns that unless regulated, the intention economy will treat motivations as a new currency, creating a "gold rush" for those who target, steer, and sell human intentions.
https://www.hurriyetdailynews.com/uk-study-warns-of-perils-in-ai-driven-intention-economy-204158

Researchers at the University of Cambridge have warned that conversational artificial intelligence (AI) tools may soon be used to subtly influence users' decisions in a new market known as the "intention economy". This emerging marketplace, which could become lucrative but also raise concerns, involves the use of digital signals of intent to sway people's choices on everyday activities such as buying movie tickets or voting for political candidates. The researchers attribute this trend to growing familiarity with chatbots and other AI agents that are increasingly being used in education and other areas.
https://www.tbsnews.net/world/uk-study-warns-perils-ai-driven-intention-economy-1030171

Publishers such as those affiliated with News Corp have partnered with OpenAI to license their content. According to reports, OpenAI has offered some publishers $1 million to $5 million per year for access to their data. In one notable deal, News Corp reportedly received up to $250 million in cash and credits from OpenAI over five years. The partnerships are part of a larger effort by tech companies like OpenAI to develop large language models that can provide timely answers.
https://digiday.com/media/the-pros-and-cons-of-publishers-ai-licensing-deals/

Perplexity, a company that offers an AI search engine, uses a modified version of PageRank to identify trustworthy web pages for its users. ChatGPT Search, which is based on Bing, also has its own crawler that fetches real-time information and presumably includes sites from Bing's search index. However, researchers have discovered ways to manipulate AI search engines, including changing writing styles to make claims more persuasive, adding keywords from the search query, and replacing interpretative content with statistics.
https://www.searchenginejournal.com/chatgpt-search-manipulated-with-hidden-instructions/536390/
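Perplexity has not published the details of its modified ranking scheme, so as a rough illustration only, the sketch below shows textbook PageRank computed by power iteration over a toy link graph; the domain names, damping factor, and iteration count are illustrative assumptions, not anything Perplexity has disclosed.

```python
# A minimal sketch of textbook PageRank via power iteration on a toy link graph.
# Perplexity's modified variant is not public; this illustrates only the base
# idea that pages linked from many well-ranked pages earn higher scores.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        # Every page starts each round with the "teleport" share of rank.
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly across all pages.
                share = damping * rank[page] / n
                for other in pages:
                    new_rank[other] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    # Hypothetical link graph; the domains are placeholders, not real data.
    toy_web = {
        "a.example": ["b.example", "c.example"],
        "b.example": ["c.example"],
        "c.example": ["a.example"],
        "d.example": ["c.example"],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

The intuition is that a page earns rank from the pages linking to it, in proportion to their own rank, which is the kind of link-based trust signal the manipulation research above targets.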

Meta is developing AI-powered features for its social media platforms, including tools for editing photos and creating AI assistants that can answer questions from creators' followers. The company has also introduced 28 AI-powered personas with unique personalities and interests to interact with users on its platforms. Meta's rules require AI-generated content to be clearly labeled, and the company plans to add text-to-video generation software for creators next year.
https://www.pymnts.com/meta/2024/meta-expects-ai-characters-to-generate-and-share-social-media-content/