Deloitte’s AI Misstep Highlights Need for Responsible AI Use

Deloitte recently faced scrutiny after using generative AI to produce a report for a government agency, issuing a partial refund when multiple references in the report were found to be fabricated. The incident underscores the risks of uncritical reliance on AI technologies, echoing concerns voiced by experts about the use of artificial intelligence in professional settings.
Valence Howden, an advisory fellow at Info-Tech Research Group, humorously reinterpreted Isaac Asimov’s famous three laws of robotics for today’s generative AI landscape. Asimov’s first law, introduced in his 1942 story “Runaround” and collected in I, Robot in 1950, states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Howden suggests a modern adaptation: “AI may not injure a hyperscaler’s profit margin.” The shift highlights the evolving priorities of the tech industry, where profitability often outweighs ethical considerations.
Howden proposed updates to Asimov’s remaining laws, with the second law becoming: “GenAI must obey the orders given it by human beings, except where its training data doesn’t have an answer, in which case it can fabricate responses — a phenomenon now termed ‘Botsplaining.’” The third law was adapted to state, “GenAI must protect its own existence as long as doing so does not harm the Almighty Hyperscaler.”
The rewrite was prompted by the recent incident involving Deloitte Australia, which published a report based on generative AI output without adequately verifying the information. Authorities discovered multiple “nonexistent references and citations,” prompting the company to refund part of its fee. The irony is hard to miss: Deloitte, a firm that advises enterprises on how to use AI effectively, demonstrated exactly the practices it warns clients against.
The need for responsible AI usage is urgent, particularly in enterprise settings. Howden suggests a new set of laws governing the use of generative AI by IT departments. The first law emphasizes the importance of verification: “IT Directors may not injure their enterprise employers by not verifying GenAI or agentic output before using it.” The second law highlights the necessity for models to acknowledge their limitations: “A model must obey the orders given it by human beings, except when it lacks reliable data. In those instances, it must admit, ‘I don’t know.’”
The third proposed law warns that “IT Directors must protect their own existence by not blindly using whatever GenAI outputs.” Ignoring this guideline could lead to negative consequences, including job loss and potential legal repercussions.
As organizations increasingly adopt AI tools, strict verification processes become essential. Many companies, including Deloitte, may find that the rigorous validation required to confirm AI-generated information is accurate erodes the anticipated return on investment (ROI). Generative AI should be treated as a tool that enhances human effort, not one that replaces it.
A seasoned journalist’s habits illustrate the right posture toward AI-generated information. Just as a reporter uses off-the-record tips to guide inquiries rather than printing them outright, professionals should treat AI output as a prompt for questions and further investigation, not as an infallible source.
Relying on generative AI carries well-known challenges, including the risk of “hallucinations,” in which a model produces incorrect information when it lacks relevant training data. The credibility of the sources used in training also matters: data drawn from prestigious medical journals, such as the New England Journal of Medicine, differs sharply in reliability from data scraped from personal websites.
Moreover, outdated data or translation errors can impact the accuracy of AI-generated responses. As the information landscape becomes increasingly complex, differentiating between informational and action-based AI functions is essential. Requests for informational output require a different level of scrutiny than action-oriented tasks, such as coding or content creation.
In conclusion, the recent incident involving Deloitte serves as a cautionary tale for organizations embracing generative AI. While these tools offer potential efficiencies, they must be approached with a critical eye and a commitment to verification. Recognizing the limitations of AI and ensuring responsible usage will ultimately safeguard both enterprises and their employees in the evolving technological landscape.