Washington: Artificial intelligence is becoming a common presence in operating rooms around the world, promising greater precision and better outcomes. But a new investigation and recent reports suggest that the rapid spread of AI in surgery is also raising serious safety and accountability concerns.
According to a Reuters investigation published this week, reports submitted to US regulators show a rise in problems linked to AI-enhanced medical devices. These include cases of botched surgeries, incorrect guidance during procedures and even misidentification of body parts by software systems used to assist doctors.
One example cited involves an AI-supported navigation system used in sinus surgery. After artificial intelligence features were added, reports of malfunctions and injuries rose sharply. Some patients reportedly suffered severe complications, including leaks of fluid around the brain and damage to surrounding tissue. Lawsuits filed in the United States allege that the technology did not always perform as expected under real operating-room conditions.
The investigation also points to concerns beyond surgery. In other cases, AI-powered imaging tools were reported to have misidentified fetal body parts during prenatal scans, while AI-based heart monitoring systems were flagged for missing abnormal heart rhythms. Regulators note that such reports do not automatically prove the technology caused harm, but they do signal potential risks that warrant closer scrutiny.
The US Food and Drug Administration has now cleared more than 1,300 AI-enabled medical devices, a number that has grown rapidly in recent years. Experts say current approval systems were designed for traditional medical equipment and may not fully account for software that learns, adapts or behaves differently once deployed in hospitals.
At the same time, supporters of medical AI argue that the technology can save lives when properly tested and used. Studies in recent years have shown that AI systems can help predict complications, assist surgeons with complex procedures and reduce human error. Many doctors say the tools are helpful when they are clearly positioned as support systems rather than as replacements for clinical judgment.
Regulators and hospitals are now under pressure to strike a balance between innovation and patient safety. Calls are growing for stronger testing requirements, clearer rules on responsibility when errors occur and better training for medical staff using AI tools.
As artificial intelligence continues to move from research labs into operating rooms, the debate is no longer about whether AI will be used in medicine, but how safely and transparently it can be integrated into patient care.