Deloitte’s AI Misstep: Revising Asimov’s Laws for Generative AI

Deloitte’s recent experience with generative AI highlights the risks of relying blindly on advanced technologies. The firm faced backlash after a report it submitted to a government agency was found to contain numerous nonexistent references and citations, and it ultimately refunded part of its fee, raising questions about the integrity of findings produced with generative AI.
Valence Howden, an advisory fellow at Info-Tech Research Group, offered a modern interpretation of Isaac Asimov’s three laws of robotics, suggesting they be revised for the current landscape of generative AI. Asimov’s first law, which states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” could evolve in today’s context to: “AI may not injure a hyperscaler’s profit margin.”
The second law could be updated to: “Generative AI must obey the orders given by human beings, except when its training data lacks an answer, in which case it can fabricate information and present it authoritatively, a phenomenon now referred to as ‘Botsplaining.’” The third law might read: “Generative AI must protect its own existence as long as this protection does not harm the Almighty Hyperscaler.”
Deloitte’s situation serves as a cautionary tale. The firm, which advises enterprise IT executives on leveraging generative AI, inadvertently showcased poor practices instead. This incident underscores the need for companies to establish robust verification processes when utilizing AI outputs.
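As a minimal sketch of what one such verification step might look like (not Deloitte’s actual process), the Python below checks whether DOIs cited in a draft actually resolve against the public Crossref API. The input file name and the DOI-only focus are assumptions for illustration; real reports cite far more than DOIs.

```python
import re

import requests  # third-party: pip install requests

# Matches the common DOI shape, e.g. 10.1000/xyz123.
DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+')


def extract_dois(text: str) -> list[str]:
    """Pull anything that looks like a DOI out of a draft."""
    return DOI_PATTERN.findall(text)


def doi_exists(doi: str) -> bool:
    """Ask the public Crossref API whether a DOI is actually registered."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200


def audit_citations(draft: str) -> list[str]:
    """Return cited DOIs that do not resolve -- candidates for fabrication."""
    return [doi for doi in extract_dois(draft) if not doi_exists(doi)]


if __name__ == "__main__":
    with open("draft_report.txt") as f:  # hypothetical input file
        draft = f.read()
    for doi in audit_citations(draft):
        print(f"UNVERIFIED CITATION: {doi}")
```

Existence checks like this are cheap compared with the reputational cost of a fabricated bibliography.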
Establishing New Guidelines for Generative AI Use
As organizations increasingly integrate generative AI into their workflows, it becomes critical to develop new operational guidelines. The proposed laws for enterprise IT could include:
1. **IT Directors may not injure their enterprise employers by neglecting to verify generative AI outputs.**
2. **A model must comply with human orders unless it lacks reliable data, in which case it is required to declare, ‘I don’t know.’ Fabricating information without disclosure is a serious violation.**
3. **IT Directors must safeguard their own positions by avoiding blind reliance on generative AI outputs. Neglecting this responsibility could lead to termination, legal repercussions, and potential exile to places like North Sentinel Island, where technology is prohibited.**
These updated principles highlight the necessity of diligence in verifying AI-generated information. Rapid returns on investment (ROI) from AI tools are an alluring prospect, but rigorous verification inevitably cuts into the efficiency gains executives hope for.
As a journalist, I have spent a significant part of my career dealing with unreliable sources, and managing AI-generated content is reminiscent of handling off-the-record information. While such sources may seem questionable, they can lead to valuable insights if approached thoughtfully.
For instance, during my tenure at a major city newspaper, I received a tip from a politically unreliable source regarding missing city resources. Despite my doubts, I followed the lead to a warehouse address, ultimately discovering that around 60,000 street signs were unaccounted for. This experience illustrates how to engage with generative AI outputs: don’t assume correctness, but use them as a springboard for further inquiries.
The Reliability of Generative AI
Generative AI can produce many correct answers alongside many incorrect ones, an inconsistency the hyperscalers promoting these technologies tend to gloss over. The phenomenon known as “hallucinations” occurs when a large language model (LLM) lacks the training data to answer accurately and fabricates a response instead.
Moreover, even when data is reliable, it might be outdated, poorly translated, or contextually inappropriate. An answer relevant in one region, such as the United States, may not hold true in another, like Japan or France. Misinterpretation of user queries compounds the challenge of ensuring accuracy.
To use generative AI effectively, it is essential to separate requests into two distinct types: informational and action-oriented. Informational requests seek answers or recommendations, while action requests produce code or content. The latter demands a higher level of scrutiny to ensure accuracy and reliability, as sketched below.
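A minimal sketch of that two-tier triage, assuming a hypothetical `ask_model` wrapper around whatever LLM a shop actually uses; the routing logic is the point, not the placeholder call.

```python
from enum import Enum, auto


class RequestKind(Enum):
    INFORMATIONAL = auto()  # answers, recommendations
    ACTION = auto()         # code generation, content creation


def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for the real LLM client call."""
    raise NotImplementedError("wire up your provider's client here")


def handle_request(prompt: str, kind: RequestKind) -> dict:
    """Route a generative-AI request through the scrutiny its kind demands."""
    output = ask_model(prompt)
    if kind is RequestKind.INFORMATIONAL:
        # Answers and recommendations: check against primary sources
        # before they inform a decision.
        return {"output": output,
                "next_step": "verify claims against primary sources"}
    # Action outputs carry higher risk: require tests and human review
    # before anything ships.
    return {"output": output,
            "next_step": "run tests and require human sign-off"}
```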
Thorough due diligence may temper the perceived ROI from AI, but a return built on unverified outputs was never a meaningful return in the first place. As companies navigate the complexities of generative AI, clear guidelines and strong verification practices will be essential for mitigating risks and capturing the technology’s real benefits.