Deloitte’s AI Misstep Prompts Call for New Guidelines on GenAI Use

Deloitte recently faced significant backlash after using generative AI to produce a report for a government agency. The firm had to partially refund its fee after multiple references and citations in the report were found to be fabricated. The episode highlights the risks of uncritical reliance on AI and has prompted calls to rethink how organizations govern its use.
Revisiting Asimov’s Laws in the Age of AI
The irony of Deloitte’s situation is particularly pointed, given that the firm is expected to advise enterprise IT executives on best practices for leveraging generative AI. Instead, it demonstrated some of the most questionable practices in the industry. The episode led analysts to draw parallels between the current state of AI and the original three laws of robotics that Isaac Asimov collected in his 1950 short-story collection, “I, Robot.”
Valence Howden, an advisory fellow at Info-Tech Research Group, humorously suggested that if Asimov’s laws were revised for 2025, the first law would be: “AI may not injure a hyperscaler’s profit margin.” The second law might read: “GenAI must obey the orders given to it by human beings, except where it lacks sufficient training data and can create answers in an authoritative manner.” The third law could be phrased as: “GenAI must protect its own existence as long as such protection does not compromise the interests of the hyperscaler.”
Implications for Enterprise IT
This playful update underscores a serious issue: the need for rigorous verification whenever generative AI is used in business settings. Deloitte’s experience, in which published information was not thoroughly checked for accuracy, only reinforces the point.
To address this, a new set of guidelines for enterprise IT could be beneficial. The first guideline could state: “IT Directors may not harm their organizations by failing to verify generative AI outputs prior to use.” The second might assert: “A model must obey orders from human beings, but is required to indicate when it lacks reliable data.” The third guideline could emphasize: “IT Directors must ensure their own job security by critically assessing the outputs generated by AI technologies.”
Strict verification matters all the more as organizations chase high returns on investment (ROI) from AI initiatives. Many executives envision substantial benefits from these systems, but without proper oversight those expectations may be misguided. AI should serve as an assistive tool, not a replacement for human workers.
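To make the “verify before use” guideline concrete, here is a minimal sketch of a pre-publication gate. It assumes a hypothetical workflow; the Citation type and publish_report function are illustrative names, not any firm’s actual process.

```python
# Hedged sketch of a pre-publication gate for GenAI-drafted reports.
# All names here (Citation, publish_report) are hypothetical.
from dataclasses import dataclass

@dataclass
class Citation:
    source: str           # e.g. journal, agency, or publisher name
    reference: str        # e.g. DOI, URL, or report number
    human_verified: bool  # set True only after a person has checked it

def publish_report(body: str, citations: list[Citation]) -> str:
    """Refuse to release a draft whose citations have not been checked by a human."""
    unverified = [c for c in citations if not c.human_verified]
    if unverified:
        refs = ", ".join(c.reference for c in unverified)
        raise ValueError(f"Blocked: unverified citations ({refs})")
    return body  # in a real pipeline this would hand off to publication

# Usage: the draft stays blocked until every citation has been confirmed.
draft = "Summary of findings..."
cites = [Citation("Example Journal", "doi:10.0000/example", human_verified=False)]
try:
    publish_report(draft, cites)
except ValueError as err:
    print(err)
```

The point of the sketch is simply that the block is the default: nothing generated by a model reaches a client or the public until a person has signed off on its sources.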
Learning from Experience
As a journalist, I have often dealt with sources of varying reliability, and information from generative AI deserves the same treatment. When an AI tool offers an insight, interrogate it rather than accepting it at face value. Early in my reporting career, for instance, a tip from an unreliable source led me to investigate missing city resources; following the lead ultimately yielded valuable information, illustrating the importance of inquiry.
With generative AI, it is vital to recognize that while it will often return accurate responses, it will also produce plenty of inaccurate ones. The problem is not confined to “hallucinations,” where a model fabricates information because of inadequate training. Reliability also depends on the quality of the underlying data: in fields like healthcare, the difference between sourcing from esteemed journals and from less credible sites can significantly affect outcomes.
Additionally, the data may be outdated, incorrectly localized, or misinterpreted. Such discrepancies highlight the need for a nuanced understanding of AI capabilities and limitations. It is essential to differentiate between informational and action-driven requests, as the latter require more thorough verification to mitigate potential risks.
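As a rough illustration of that distinction, the sketch below routes anything that looks action-driven to human review before execution. The categories, keyword list, and function names are assumptions made for the example, not an established standard or any vendor’s API.

```python
# Hedged sketch: classify GenAI requests so that action-driven ones
# (anything that changes systems, records, or spending) always require
# human sign-off, while informational ones need only spot checks.
from enum import Enum

class RequestKind(Enum):
    INFORMATIONAL = "informational"
    ACTION_DRIVEN = "action_driven"

# Illustrative trigger words; a real policy would be far more careful.
ACTION_KEYWORDS = ("deploy", "delete", "purchase", "send", "update record")

def classify(request: str) -> RequestKind:
    lowered = request.lower()
    if any(keyword in lowered for keyword in ACTION_KEYWORDS):
        return RequestKind.ACTION_DRIVEN
    return RequestKind.INFORMATIONAL

def requires_human_review(request: str) -> bool:
    # Action-driven requests always go to a person before anything runs.
    return classify(request) is RequestKind.ACTION_DRIVEN

print(requires_human_review("Summarize last quarter's incident reports"))  # False
print(requires_human_review("Delete stale records flagged by the model"))  # True
```

However the classification is done, the design choice is the same one the guidelines above imply: the riskier the request, the less the organization should rely on the model’s own confidence.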
Ultimately, while the promise of generative AI is appealing, organizations must approach its implementation with caution. Acknowledging the limitations and uncertainties associated with these technologies is crucial for steering clear of pitfalls and ensuring responsible use. The ROI may not match initial expectations, but a methodical approach could yield valuable insights and efficiencies in the long run.