Cleary Gottlieb attorneys Daniel Ilan, Megan Medeiros, and Melissa Faragasso note that businesses need new cybersecurity and privacy governance as they increasingly use artificial intelligence (AI). Regulators expect executives to proactively address the bespoke risks associated with AI development and deployment. This is particularly relevant for companies like Rite Aid, which the Federal Trade Commission banned from using AI facial recognition after finding the technology had been misused. To mitigate these risks, leadership must evaluate responsible and safe strategies for AI use, such as implementing effective governance structures and ensuring transparency in AI decision-making processes.
https://news.bloomberglaw.com/litigation/businesses-need-new-ai-governance-in-cybersecurity-and-privacy

The Federal Trade Commission (FTC) has sued the company behind an AI content generator called Rytr, accusing it of producing fake reviews for various businesses. The FTC claims some users used the tool to generate hundreds or thousands of reviews on platforms like Amazon and Yelp. AI detection companies like Pangram Labs have found that some AI-generated reviews appear at the top of search results because of their detailed, well-structured content. However, determining which reviews are fake can be challenging, as external parties may lack access to the data signals that reveal patterns of abuse. Tech companies like Amazon and Yelp are developing policies for handling AI-generated content, with some allowing customers to post AI-assisted reviews while others take a more cautious approach. The Coalition for Trusted Reviews, which includes prominent tech companies, aims to push back against fake reviews by sharing best practices and developing advanced AI detection systems.
https://thestar.com/news/world/united-states/the-internet-is-rife-with-fake-reviews-will-ai-make-it-worse/article_edc30c17-ed25-50dd-b528-3fb73eb0187d.html