Doctors Shocked as Google's AI Invents a Non-Existent Human Body Part


The Growing Concern Over AI Errors in Healthcare

Healthcare professionals are increasingly concerned about the use of generative AI tools in medical settings. These tools, while promising, have been plagued by significant issues that raise serious questions about their reliability and safety.

One of the most alarming problems is what experts call "hallucination": instances where an AI system generates false or fabricated information and presents it as fact. These errors are particularly dangerous in critical areas like healthcare, where accuracy is essential.

A recent example involved Google's Med-Gemini, a healthcare AI model introduced in a May 2024 research paper. In the paper, the model analyzed brain scans for various conditions and identified an "old left basilar ganglia infarct." No such brain structure exists. Bryan Moore, a board-certified neurologist, flagged the error to The Verge, noting that while Google quietly updated its blog post, the research paper itself remained unchanged.

The AI likely conflated the "basal ganglia," a real brain region involved in motor control and habit formation, with the "basilar artery," a major blood vessel at the base of the brainstem. Google attributed the mistake to a simple misspelling of "basal ganglia."

This incident highlights a broader issue: even the most advanced AI models from companies like Google and OpenAI often produce inaccurate or fabricated information. While these errors may seem minor, they can have serious consequences in a medical setting.

Risks of AI Errors in Medical Practice

In a hospital environment, such mistakes could lead to misdiagnoses, incorrect treatments, and potentially life-threatening situations. Although the specific error in Google's research did not directly harm patients, it sets a troubling precedent.

Maulin Shah, chief medical information officer at Providence, emphasized the severity of the issue. "Two letters, but it’s a big deal," he said. He stressed that even small errors can have significant implications in high-stakes environments like healthcare.

Google had previously promoted Med-Gemini as having "substantial potential in medicine," claiming it could identify conditions in X-rays and CT scans. After the error came to light, Google employees told Moore it was a typo. In an updated blog post, the company acknowledged that "basilar" might be a common mis-transcription of "basal," but the research paper still refers to the nonexistent "basilar ganglia."

This kind of confusion can lead to serious problems in medical practice. Judy Gichoya, an associate professor of radiology and informatics at Emory University, noted that the tendency of AI systems to fabricate information without acknowledging uncertainty is a major concern. "They tend to make up things, and it doesn’t say 'I don’t know,' which is a big problem for high-stakes domains like medicine," she said.

The Broader Implications of AI in Healthcare

The issue is not limited to Med-Gemini. Google's more advanced healthcare model, MedGemma, has also shown inconsistencies depending on how questions are phrased. This variability can lead to errors and further complicate the use of AI in clinical settings.

Experts warn that the rapid adoption of AI in healthcare, spanning AI therapists, radiology tools, and patient-facing services, calls for a more cautious approach. While AI has the potential to improve efficiency and accuracy, it also introduces new risks that must be carefully managed.

In the meantime, human oversight remains crucial. However, continuously monitoring AI outputs can lead to inefficiencies and increased workload for healthcare professionals.

Despite these concerns, Google continues to push forward with its AI initiatives. In March, the company announced that its error-prone AI Overviews search feature would start providing health advice. It also introduced an "AI co-scientist" to assist in drug discovery and other scientific endeavors.

However, if these AI systems are not properly monitored and verified, they could put lives at risk. Shah emphasized that AI must be held to a higher standard of accuracy than humans. "Maybe other people are like, 'If we can get as high as a human, we’re good enough.' I don’t buy that for a second," he said.

As the use of AI in healthcare continues to expand, the need for rigorous testing, transparency, and human oversight becomes even more critical. The goal should be to ensure that AI enhances, rather than undermines, the quality and safety of medical care.
