A group of academic experts, including researchers from the University of California, Berkeley and Harvard Law School, has released a report on artificial intelligence (AI) safety, sparking divisions among AI safety advocates. The report highlights concerns over the lack of transparency in AI decision-making processes and the need for more robust testing protocols to ensure AI systems are safe and reliable.
https://news.bloomberglaw.com/artificial-intelligence/ai-safety-advocates-split-on-governor-newsom-panels-report

The UK's new Labour government plans to introduce binding regulations on AI technology, a departure from the 'pro-innovation' approach adopted by the previous government. The EU's AI Act, by contrast, already imposes requirements on the use and development of AI. To mitigate potential threats, organizations must ensure adequate protection against sophisticated phishing attacks and the data-gathering methods used by threat actors.
https://www.techradar.com/pro/adapting-the-uks-cyber-ecosystem

The UK's public spending watchdog has released a report highlighting the barriers to AI adoption in the country, citing "out-of-date legacy IT systems" as a major contributor. The Department for Science, Innovation and Technology (DSIT) estimates that nearly a third of central government systems are end-of-life products, posing serious cybersecurity risks. The Committee of Public Accounts warns that replacing these systems will be time-consuming, with 21 of the highest-risk systems lacking remediation funding. The UK Government aims to attract £14 billion in private-sector investment to promote AI adoption, but the report raises concerns about the public sector's readiness for such a significant transformation.
https://www.techradar.com/pro/outdated-legacy-tech-is-stopping-uk-government-from-adopting-ai-mps-say

A video purporting to come from the SBU warns against sharing false information about Ukraine's government, claiming it spreads anti-Russophone propaganda and prevents peace in the war with Moscow. The video circulated on social media, including via Russian-language news outlets, but experts believe it is AI-generated, pointing to inconsistencies such as the boy's changing appearance and graphical errors like a misplaced logo and QR code. The 1 PLUS 1 Media Group and Euronews have denied the video's authenticity, blaming Russian agitators for creating and spreading it.
https://www.euronews.com/my-europe/2025/03/28/fake-ukrainian-tv-advert-urges-children-to-report-relatives-listening-to-russian-music

A coalition of US robotics companies, including Tesla, Boston Dynamics, and Agility Robotics, is warning that the country risks losing the AI-robotics race to China, with potentially significant impacts across multiple sectors. The companies are urging the US government to adopt a national robotics strategy to compete with China's rapidly expanding capabilities in this field.
https://interestingengineering.com/culture/us-robotics-giants-counter-china

The US government has introduced two new rules to tighten control over the flow of advanced AI chips and model weights, aiming to close loopholes exploited by China. The Interim Final Rule on Advanced Artificial Intelligence Technology (the AI Diffusion Rule) establishes a global export control regime for advanced AI chips and closed-frontier AI model weights, partitioning countries into three tiers based on their relationship with the US; Tier 2, which covers most countries in the world, is allocated 50,000 H100-equivalent GPUs. The Due Diligence Rule captures not only advanced AI chips but also chips with other functions manufactured at 16/14 nm or below, and requires exporters to obtain reliable attestations of an IC's performance capacity. Together, the rules close most of the loopholes exploited by Chinese companies, including the use of third-party intermediaries to purchase export-controlled chips or to rent overseas data centers with legitimate access to controlled chips. However, the rules impose significant compliance burdens on exporters and data center end-users, and raise concerns about BIS's ability to enforce them effectively given its limited monitoring and tracking capacity.
https://www.csis.org/analysis/ai-diffusion-framework-and-foundry-due-diligence-rule-compliance-perspective

Viral images created with ChatGPT's upgraded image generator, following OpenAI's recent update, have spread rapidly online, mimicking the distinctive art style of Studio Ghibli. The trend involves prompting the AI to produce images resembling scenes and characters from beloved animated films, such as those of Hayao Miyazaki. As a result, the internet has been flooded with Ghibli-style memes featuring pets, babies, friends, celebrities, and everyday objects, showcasing both the creative potential and the limitations of this technology.
https://www.techradar.com/computing/artificial-intelligence/i-refuse-to-jump-on-chatgpts-studio-ghibli-image-generator-bandwagon-because-it-goes-against-everything-i-love-about-those-movies

Google's latest generative AI model, Gemini 2.5, is billed as its most intelligent yet and is said to outperform top models from competitors including OpenAI, Anthropic, xAI (Grok), and DeepSeek by a significant margin. Google Cloud users can now tap into the model to build custom apps. According to Anders Indset, founder of Njordis, Gemini 2.5 is a masterpiece of reasoning and computational might, putting Google at the forefront of an intense AI competition.
https://www.pymnts.com/artificial-intelligence-2/2025/this-week-in-ai-openai-state-laws-and-google-gemini/

Gemini 2.5 Pro, the latest version of Google's Gemini model, brings significant improvements: it can create visually appealing web applications and perform code transformations and editing. The new model is already available in Google AI Studio and in the Gemini application for advanced users. Google also plans to launch Gemini 2.5 Pro through Vertex AI in a few weeks, with pricing for larger-scale users to be announced.
https://en.tempo.co/read/1991466/google-claims-gemini-2-5-as-its-most-intelligent-ai-yet