Threats NewsFeed

NATO-Backed Startup Claims AI Will Soon Crack Advanced Security in Seconds

A NATO-backed security startup has claimed that AI will soon be able to crack even the most advanced security systems in a matter of seconds. The company, which remains unnamed, is developing an AI-powered tool that can allegedly bypass complex security measures with ease. The claim underscores the growing scale of cyber threats and the need for stronger cybersecurity measures to protect sensitive information.

https://www.firstpost.com/tech/ai-will-soon-be-able-to-crack-even-the-most-advanced-security-in-seconds-claims-nato-backed-security-startup-13851214.html

Execs Fear AI-Driven Cyber Threats Most

A Chubb report based on a Harris Poll of 500 risk decision-makers found that cybersecurity threats, particularly those driven by malicious AI manipulation, are executives' top concern for business growth. Cyber breaches and data leaks were cited as a major concern by 40% of respondents, ahead of risks such as accidents and regulation. Cybersecurity also ranked as the leading geopolitical risk, cited by 60% of executives. Chubb's "Risk Decisions 360°" report suggests businesses are taking steps to mitigate these risks: 86% have adopted or plan to adopt business interruption coverage for events like cyberattacks and natural disasters. Even so, over a third of executives believe their companies need to improve at managing emerging and evolving risks.

https://www.prnewswire.com/news-releases/chubb-report-reveals-cybersecurity-as-leading-risk-threatening-business-growth-with-technology-disruption-following-closely-behind-302342438.html

Unit 42's "Bad Likert Judge" Jailbreak Boosts LLM Attack Success Rate

Unit 42, Palo Alto Networks' threat research team, has developed a technique called "Bad Likert Judge" to test how easily large language models (LLMs) can be induced to generate harmful content. The technique prompts an LLM to act as a judge, scoring the harmfulness of responses on a Likert scale, then asks it to generate example responses aligned with each score, with the highest-rated examples potentially containing harmful content. In research posted on December 31, Unit 42 found that the technique can increase the attack success rate by over 60% compared with plain attack prompts. The goal is to help defenders prepare for potential attacks; the technique targets edge cases and does not reflect typical LLM use. The findings come as hackers have begun selling "jailbreak-as-a-service", using prompts to trick commercial AI chatbots into generating prohibited content.

https://www.pymnts.com/artificial-intelligence-2/2025/unit-42-warns-developers-of-technique-that-bypasses-llm-guardrails/
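
The headline metric here is attack success rate (ASR): the fraction of probe prompts that draw a non-refusal from the model. Below is a minimal, defensive-side sketch of how a red team might measure ASR for two prompt sets. It is an illustration, not Unit 42's tooling; `query_model` and the substring-based refusal check are placeholder assumptions (production evaluations typically use a trained judge model instead).

```python
# Minimal red-team harness sketch: attack success rate (ASR) is the
# fraction of probe prompts that elicit a non-refusal from the model.
# `query_model` is a placeholder (assumption) for a real LLM API call.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "unable to help")

def query_model(prompt: str) -> str:
    """Stand-in for a chat-completion call; always refuses here."""
    return "I'm sorry, I can't help with that request."

def is_refusal(response: str) -> bool:
    # Naive substring check; real evaluations use a trained judge model.
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def attack_success_rate(prompts: list[str]) -> float:
    successes = sum(not is_refusal(query_model(p)) for p in prompts)
    return successes / len(prompts)

if __name__ == "__main__":
    plain_prompts = ["<plain probe 1>", "<plain probe 2>"]
    wrapped_prompts = ["<same probes wrapped in Likert-judge framing>"] * 2
    print(f"plain ASR:   {attack_success_rate(plain_prompts):.0%}")
    print(f"wrapped ASR: {attack_success_rate(wrapped_prompts):.0%}")
```

Comparing the two numbers is the kind of baseline-versus-technique measurement behind the reported 60%-plus uplift over plain prompts.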

SlashNext's AI-Powered Phishing Detector Unleashed

SlashNext's AI-powered cybersecurity tool analyzes URLs, emails and messages in real time to detect and block phishing attempts and social engineering attacks. According to J Stephen Kowski, field CTO at SlashNext, the approach uses advanced machine learning models that understand the context and intent of communications, moving beyond traditional pattern matching to identify threats that may evade other security tools. This represents a shift from reactive detection to predictive threat prevention that adapts to new attack variations in real time.

https://www.pymnts.com/cybersecurity/2025/55-of-companies-have-implemented-ai-powered-cybersecurity/
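
SlashNext has not published its model internals, so the following is only a toy sketch of the general idea behind ML-based URL scoring (character n-grams fed to a logistic regression), not anything resembling the company's system, which per Kowski also models message context and intent. The sample URLs and labels are invented for illustration.

```python
# Toy phishing-URL scorer: character n-gram features + logistic
# regression. Illustrative only; real systems train on huge corpora
# and combine URL, email, and message signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled sample (invented): 0 = benign, 1 = phishing.
urls = [
    "https://accounts.google.com/signin",
    "https://www.paypal.com/myaccount",
    "http://paypa1-secure-login.example.ru/verify",
    "http://appleid.apple.com.confirm-id.example.tk/login",
]
labels = [0, 0, 1, 1]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(urls, labels)

# Score an unseen URL; higher means more phishing-like.
score = model.predict_proba(["http://secure-paypal.example.tk/update"])[0][1]
print(f"phishing probability: {score:.2f}")
```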

Google and Atlas Computing Extract AI Model Architectures via Electromagnetic Signals

Researchers at Google and Atlas Computing have developed a method to extract the architecture of AI models running on a given chip by analyzing electromagnetic emissions. The technique, which achieved 99.91% accuracy in testing, compares the electromagnetic signature of an unknown model against signatures captured while known AI models ran on the same chip. This could allow attackers to reverse-engineer AI models used in smartphones and other edge devices, highlighting the need for physical security measures to protect against such attacks.

https://gizmodo.com/how-to-steal-an-ai-model-without-actually-hacking-anything-2000542423
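
Stripped down, the matching step described above is a nearest-neighbor search: normalize each electromagnetic trace, then pick the reference with the highest correlation. The sketch below uses synthetic traces and omits the signal acquisition and preprocessing the actual research depends on; the model names and trace lengths are arbitrary assumptions.

```python
# Simplified trace-matching sketch: identify an unknown model by
# correlating its EM trace against references captured on the same chip.
# All traces here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def normalize(trace: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling so correlations are comparable."""
    return (trace - trace.mean()) / trace.std()

# Reference traces recorded while known architectures ran on the chip
# (synthetic; names and lengths are arbitrary).
references = {
    "mobilenet_v2": rng.normal(size=2048),
    "resnet18": rng.normal(size=2048),
    "efficientnet_b0": rng.normal(size=2048),
}

# "Unknown" capture: one of the references plus measurement noise.
unknown = references["resnet18"] + 0.3 * rng.normal(size=2048)

# Pick the reference most correlated with the unknown trace.
best = max(
    references,
    key=lambda name: float(np.dot(normalize(references[name]), normalize(unknown))),
)
print(f"best match: {best}")  # expected: resnet18
```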