AI company OpenAI has recently released a new reasoning model called o3, and it is causing concern among some of its users. The model, designed to reason through problems and make decisions in a way that resembles human thinking, is being tested by software engineers and developers, some of whom are finding it difficult to understand how it arrives at certain conclusions. Some have expressed concern that the model's reasoning may be flawed or biased, leading to potentially incorrect outcomes.
https://www.firstpost.com/tech/openais-new-o3-reasoning-model-is-freaking-out-software-engineers-developers-heres-why-13851007.html

Researchers at Monash University have raised concerns that artificial intelligence is being used in Australian fertility clinics without adequate ethical oversight, potentially eroding public trust in these facilities. The technology is used to select embryos during IVF treatment, but patients may not be aware whether AI was involved or how the algorithms were trained to make the selection. This raises bioethical concerns about unintended bias and a dehumanising effect on parents and babies.
https://www.smh.com.au/national/artificial-intelligence-beginning-to-make-decisions-about-who-is-brought-into-the-world-20250105-p5l256.html

A recent study at Stanford found that two-thirds of doctors there use a platform that records and transcribes patient meetings with the help of artificial intelligence (AI). However, an analysis of OpenAI's Whisper technology revealed that it sometimes inserts false information into transcripts. In one instance, a transcript stated that a patient had attributed their cough to exposure to their child, a detail the patient never actually mentioned in the recorded conversation. In another, an AI transcription tool assumed a Chinese patient was a computer programmer without any basis in the conversation. Experts warn that while AI has potential benefits for healthcare, its outputs must be thoroughly checked and verified by doctors to ensure accuracy and trustworthiness. Dr. Adam Rodman, an internal medicine doctor and AI researcher at Beth Israel Deaconess Medical Center, expressed concern that relying on AI could lead to complacency and a degradation of patient care.
https://gizmodo.com/doctors-say-ai-is-introducing-slop-into-patient-care-2000543805