The US Environmental Protection Agency (EPA) has been using AI technology developed by Elon Musk's DOGE team to monitor workers' communications, including Microsoft Teams chats. EPA managers were told that the AI was looking for language considered hostile to Trump or Musk, and that employees who did not align with the administration's mission would be targeted. This has raised concerns among cybersecurity experts and government ethicists about data security practices and potential abuse of power. DOGE's use of Signal, a private messaging app, has added to growing concerns over transparency in its operations. The EPA has acknowledged it is exploring AI for administrative efficiency but denied using it to surveil workers.
https://channelnewsasia.com/business/musks-doge-using-ai-snoop-us-federal-workers-sources-say-5051791

The European Union's artificial intelligence law targets high-risk uses of AI, including facial recognition and autonomous vehicles, with a multi-phase compliance process that began this year. Companies must adapt to the new regulations, which will be enforced over several years, according to Rachael Daigle, editor of Bloomberg Law.
https://news.bloomberglaw.com/ip-law/corporate-legal-teams-take-on-worlds-broadest-ai-framework

The European Union's new AI law targets high-risk uses of artificial intelligence, with compliance deadlines set to take effect in a multi-phase process over several years. Companies may be liable for non-compliance, as the law aims to regulate the use of AI in various sectors, including healthcare and finance. The EU's AI Act is part of a broader effort to address concerns around data protection and safety.
https://news.bloomberglaw.com/esg/corporate-legal-teams-take-on-worlds-broadest-ai-framework

The European Union's AI Act has taken effect, impacting companies worldwide that use or sell AI systems in the EU. The law requires corporations to assess and manage risks associated with their AI use, depending on their role and the potential harm to individuals. This regulation aims to protect consumers from AI-related harms, with general-purpose AI rules set to come into effect next.
https://news.bloomberglaw.com/esg/worlds-broadest-ai-law-pushes-legal-teams-on-managing-risk

The European Union's new AI law targets high-risk uses of artificial intelligence, with corporate legal teams facing significant compliance challenges as the first deadlines take effect in a multi-phase process over several years. The EU AI Act aims to regulate the development and deployment of AI systems, particularly those that pose a risk to society. Companies are likely to be held accountable for non-compliance, highlighting the need for careful planning and implementation of the new regulations.
https://news.bloomberglaw.com/tech-and-telecom-law/corporate-legal-teams-take-on-worlds-broadest-ai-framework

FILMPAC has announced SUPERSET, a custom AI video training dataset launching in summer 2025, designed to address challenges in generative AI by providing high-quality, human-driven video footage of authentic expressions, gestures, motion, and interactions.
https://www.prnewswire.com/news-releases/filmpac-introduces-superset--the-future-of-ai-video-training-data-302421453.html

Google is giving DeepMind staff a year-long paid break to prevent them from joining rival companies in the AI wars. The move comes as Google seeks to maintain its dominance in artificial intelligence. This decision affects employees who have been approached by other tech giants, including Microsoft and Amazon, with lucrative offers.
https://www.livemint.com/technology/tech-news/ai-wars-google-is-giving-deepmind-staff-year-long-paid-break-to-stop-them-from-joining-rivals-gemini-chatgpt-11744076994363.html

Google's billion-dollar AI experts are being paid to sit idle rather than defecting to rival OpenAI, according to a report. The practice is seen as a way for the company to maintain its competitive edge in the field of artificial intelligence. Google has been investing heavily in AI research and development, with a focus on creating more advanced and sophisticated models. However, some experts have expressed concerns that the company's approach may be stifling innovation, leading to a brain drain of top talent.
https://interestingengineering.com/culture/google-blocking-employees-joining-rivals

Jerome Dewald, representing himself in a New York appeals court, used an AI program to present his argument, a decision he attributed to his fear of stumbling over words while addressing Justice Sallie Manzanet-Daniels; the stunt drew a sharp rebuke from the court. The incident highlights the flaws that can arise when relying on AI systems for real-world problems. Former lawyer Michael Cohen had previously used a similar AI program, Google Bard, to generate fake legal citations, and experts warn that these programs can still produce false or nonsensical information.
https://www.dailymail.co.uk/news/article-14585113/judge-man-bizarre-trick-court-case-jerome-dewald.html