Researchers at Monash University have raised concerns that artificial intelligence is being used in Australian fertility clinics without adequate ethical oversight, potentially eroding public trust in these facilities. The technology is being used to select embryos during IVF treatment, but patients may not be aware whether AI has been involved or how the algorithms were trained to make their selections. This raises bioethical concerns about the potential for unintended bias and a dehumanising effect on parents and babies.
https://www.smh.com.au/national/artificial-intelligence-beginning-to-make-decisions-about-who-is-brought-into-the-world-20250105-p5l256.html

Experts are warning that social engineering tactics, which use human interaction to deceive individuals, are becoming increasingly sophisticated with the integration of artificial intelligence. According to Jake Moore, a cybersecurity advisor at ESET, this trend is making it more challenging for people to protect themselves from phishing emails that can evade security measures and trick users into divulging sensitive information.
https://nypost.com/2025/01/04/tech/gmail-outlook-and-apple-users-urged-to-watch-out-for-this-new-email-scam-cybersecurity-experts-sound-alarm/

Safeguarding minister Jess Phillips has vowed to ban the creation of sexually explicit deepfake images this year, citing her own experience as a target of the technology. A private member's bill proposed by Tory peer Baroness Owen in the Lords failed to gain government support. The issue is being raised due to concerns over AI-generated videos being used for sexual exploitation, child pornography, fraud and political disinformation. Last year, explicit deepfakes of celebrities such as Taylor Swift went viral on social media, while actress Emma Watson was targeted by a fake ad depicting her engaging in a sexual act. In 2023, deepfaked videos received 34 million views, with women making up 99% of those targeted.
https://www.dailymail.co.uk/news/article-14248775/Labour-minister-pledges-ban-creation-deepfake-porn-images-despite-Government-failing-Tory-bid-outlaw-vile-practice-MONTH.html

The Cyberspace Administration of China (CAC) has taken steps to address the growing threat of internet trolls using AI-powered software tools to manipulate accounts and fabricate trending topics. The CAC has urged website platforms to cooperate with authorities to investigate these groups, enhance their technical measures to detect and neutralize group-control software and bot accounts, and implement a long-term governance framework to tackle the issue. The effort aims to strengthen coordination between administrative penalties and criminal prosecutions, ultimately improving the online environment in China.
https://www.globaltimes.cn/page/202501/1326207.shtml

Cybercriminals are exploiting advanced AI-powered tools, such as deepfakes and emulators, to bypass biometric security defenses in critical sectors like finance, healthcare, e-commerce, and government. As a result, organizations are facing unprecedented identity verification vulnerabilities. To combat this threat, companies must invest in adaptive, AI-enhanced biometric systems that include real-time monitoring and features such as behavioral and liveness detection.
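In practice, such an "adaptive" system typically fuses several independent signals rather than trusting a single face match. The sketch below is a minimal illustration of that layered idea, not any vendor's implementation; the score sources, weights, and thresholds are assumed placeholders.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Scores in [0, 1] from independent subsystems (all hypothetical)."""
    face_match: float   # similarity between probe image and enrolled template
    liveness: float     # e.g. challenge-response or texture analysis
    behavioral: float   # e.g. typing cadence, device-handling patterns

def verify(signals: VerificationSignals,
           face_threshold: float = 0.80,
           liveness_threshold: float = 0.90,
           fused_threshold: float = 0.85) -> bool:
    """Layered check: a strong face match alone is never sufficient.

    A deepfake replayed through an emulator can score well on face
    similarity, so liveness acts as a hard gate, and the behavioral
    signal contributes to a fused score. Thresholds are illustrative.
    """
    if signals.liveness < liveness_threshold:
        return False  # presentation attack suspected: reject outright
    if signals.face_match < face_threshold:
        return False
    fused = (0.5 * signals.face_match
             + 0.3 * signals.liveness
             + 0.2 * signals.behavioral)
    return fused >= fused_threshold

# Example: convincing deepfake (high face match) but failed liveness -> rejected
print(verify(VerificationSignals(face_match=0.95, liveness=0.40, behavioral=0.70)))
```

This is crucial for safeguarding sensitive data and transactions, as well as building public and partner trust.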
https://www.businesswire.com/news/home/20250103090167/en/Biometric-Solutions-to-Combat-AI-based-Threats-Global-Research-Report-2024-Opportunities-in-Decentralized-Identity-Usage-Preventing-Increasing-AI-powered-Attacks-and-Regulatory-Compliance---ResearchAndMarkets.com

The Indonesian Constitutional Court has ruled that a provision in the country's election law regarding "self-image" is unconstitutional. The court found that the phrase, which allows candidates to use photographs or images of themselves, can be misinterpreted and exploited by using artificial intelligence (AI) to create manipulated photos. This could lead to false information being spread to voters, undermining their ability to make informed decisions. As a result, the court has struck down the provision, citing concerns that excessive image manipulation through AI can damage democracy.
https://en.tempo.co/read/1959473/constitutional-court-rules-against-excessive-use-of-ai-manipulated-images-for-election-campaign

Cybersecurity firm Unit 42 has developed a technique called "Bad Likert Judge" to test the vulnerability of large language models (LLMs) to generating harmful content. The technique asks the target LLM to act as a judge, scoring the harmfulness of responses on a Likert scale, and then to generate example responses that align with those scores, including potentially harmful ones. In research posted on December 31, Unit 42 found that this technique can increase the attack success rate by over 60% compared with plain attack prompts. The goal is to help defenders prepare for potential attacks using this technique, which targets edge cases and does not reflect typical LLM use cases. This comes as hackers have begun offering "jailbreak-as-a-service", using prompts to trick commercial AI chatbots into generating prohibited content.
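For defenders, the practical takeaway is to measure attack success rate against their own guardrails, comparing plain prompts with multi-turn jailbreak variants. The sketch below shows one way such a red-team harness might be structured; `query_model`, the judge callable, and the empty prompt corpus are hypothetical placeholders, not Unit 42's tooling.

```python
from typing import Callable

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the LLM under test."""
    raise NotImplementedError("wire this to your model endpoint")

def is_harmful(response: str, judge: Callable[[str], bool]) -> bool:
    """Defer to a separate judge: human review or a vetted classifier."""
    return judge(response)

def attack_success_rate(prompts: list[str],
                        judge: Callable[[str], bool]) -> float:
    """Fraction of adversarial prompts that elicit a harmful response."""
    hits = 0
    for prompt in prompts:
        response = query_model(prompt)
        if is_harmful(response, judge):
            hits += 1
    return hits / len(prompts) if prompts else 0.0

# Placeholder corpus: populate with your red team's probes, e.g.
# transcripts applying the Likert-judge framing described above,
# alongside their plain-prompt baselines for comparison.
adversarial_prompts: list[str] = []
```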
https://www.pymnts.com/artificial-intelligence-2/2025/unit-42-warns-developers-of-technique-that-bypasses-llm-guardrails/

Companies such as eBay and British insurer Beazley have warned about an uptick in fraudulent emails containing personal details obtained via artificial intelligence (AI) analysis of online profiles, according to the Financial Times. Cybersecurity experts say these attacks are increasing as AI grows in sophistication, allowing hackers to create targeted phishing scams by scraping victims' online presence and recreating their style and tone. The use of generative AI tools has lowered the entry threshold for advanced cybercrime, with eBay's Nadezda Demidova stating that there has been a growth in polished and closely targeted phishing scams. To combat this, companies are employing AI-powered cybersecurity measures, with 55% of companies using such measures according to a PYMNTS Intelligence report.
https://www.pymnts.com/fraud-attack/2025/ai-fuels-reported-rise-in-polished-phishing-scams/

SlashNext's AI-powered cybersecurity tool analyzes URLs, emails and messages in real time to detect and block phishing attempts and social engineering attacks. According to J Stephen Kowski, field CTO at SlashNext, this approach uses advanced machine learning models that can understand the context and intent of communications, moving beyond traditional pattern matching to identify threats that may evade other security tools. This proactive method represents a shift from reactive detection to predictive threat prevention that adapts to new attack variations in real time.
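SlashNext's models are proprietary, but the general shift from pattern matching to intent analysis can be illustrated with an off-the-shelf zero-shot classifier, which scores a message against intent labels rather than matching known bad strings. A minimal sketch using Hugging Face's transformers library as a stand-in; the candidate labels and threshold are illustrative assumptions.

```python
from transformers import pipeline

# Zero-shot classification scores a text against arbitrary candidate
# labels, approximating "intent" detection without hand-written rules.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

message = (
    "Your account has been locked. Verify your password within 24 hours "
    "at the link below or lose access permanently."
)

labels = ["credential phishing", "payment fraud", "legitimate notification"]
result = classifier(message, candidate_labels=labels)

# Flag the message if a malicious intent label dominates (threshold assumed).
top_label, top_score = result["labels"][0], result["scores"][0]
if top_label != "legitimate notification" and top_score > 0.5:
    print(f"blocked: {top_label} ({top_score:.2f})")
else:
    print("delivered")
```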
https://www.pymnts.com/cybersecurity/2025/55-of-companies-have-implemented-ai-powered-cybersecurity/