New York: Artificial Intelligence (AI), once hailed as a transformative force across industries, is increasingly making headlines not for its innovation, but for its missteps and ethical controversies. From consulting blunders to discriminatory algorithms, the technology’s risks, biases, and systemic impacts are becoming impossible to ignore. Recent high-profile cases highlight the pressing need for oversight, accountability, and transparency in AI deployment.
In one of the most striking examples, Deloitte Australia faced public scrutiny and financial consequences after an AU$440,000 report for the Department of Employment and Workplace Relations (DEWR) was found to contain fabricated references and other AI-generated “hallucinations.” Dr. Christopher Rudge of the University of Sydney, who flagged the errors, noted that the report appeared to fill evidentiary gaps with invented academic citations. Deloitte later issued a partial refund and added a note to the report’s appendix acknowledging the use of a generative AI tool chain based on Azure OpenAI GPT-4. Despite the corrections, the incident underscores the risks of over-reliance on AI for critical policy documents.
The International Monetary Fund (IMF) and the Bank of England have both warned of AI-driven economic risks. IMF Managing Director Kristalina Georgieva cautioned investors to “buckle up: uncertainty is the new normal,” noting that AI could disrupt up to 40% of global jobs, exacerbate economic inequality, and contribute to volatile market valuations. These concerns point to the broader systemic impact AI could have on financial stability if left unchecked.
AI’s capacity for bias was vividly illustrated in the Apple Card case, where customers observed gender-based disparities in credit limits. Tech entrepreneur David Heinemeier Hansson went public in 2019 after he was granted a far higher credit limit than his wife despite their shared finances, a disparity Apple co-founder Steve Wozniak said he and his wife had also experienced. Hansson criticized blind faith in algorithms, pointing out that “it does not matter what the intent of individual Apple reps are; it matters what THE ALGORITHM…does. And what it does is discriminate.”
In healthcare, AI has also faced scrutiny. A lawsuit against the U.S.-based insurer Cigna alleges that its AI-driven PXDX system denied over 300,000 claims in just two months in 2022, spending an average of 1.2 seconds on each and issuing rejections without any review of individual patient files. The case raises urgent ethical and operational questions about replacing human judgment with automated systems in critical services such as healthcare.
Even in customer service, AI missteps carry consequences. British Columbia’s Civil Resolution Tribunal ordered Air Canada to pay CA$812 in damages after its website chatbot gave a passenger incorrect advice on bereavement fares. Tribunal member Christopher Rivers clarified, “While a chatbot has an interactive component, it is still just a part of Air Canada’s website…It makes no difference whether the information comes from a static page or a chatbot.” The ruling makes clear that companies remain accountable for what their AI-driven interfaces tell customers.
These incidents collectively underscore AI’s dual nature: a powerful engine for innovation and a potential source of error, bias, and legal liability. Experts warn that without rigorous oversight, transparent governance, and ethical safeguards, AI could generate systemic risks that extend far beyond individual companies, affecting economies, social equity, and public trust worldwide.