Deloitte’s AI Misstep: Revising Asimov’s Laws for Generative AI

Deloitte’s recent experience with generative AI highlights the risks of blind reliance on the technology. The firm faced backlash after a report it submitted to a government agency was found to contain numerous nonexistent references and citations, and it ultimately refunded part of its fee. The episode raises hard questions about the integrity of findings produced with generative AI.

Valence Howden, an advisory fellow at Info-Tech Research Group, offered a modern interpretation of Isaac Asimov’s three laws of robotics, suggesting they be revised for the current landscape of generative AI. Asimov’s first law, which states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” could evolve in today’s context to: “AI may not injure a hyperscaler’s profit margin.”

The second law could be updated to: “Generative AI must obey the orders given by human beings, except when its training data lacks an answer, in which case it can fabricate information and present it authoritatively, a phenomenon now referred to as ‘Botsplaining.’” The third law might read: “Generative AI must protect its own existence as long as this protection does not harm the Almighty Hyperscaler.”

Deloitte’s situation serves as a cautionary tale. The firm, which advises enterprise IT executives on leveraging generative AI, inadvertently showcased poor practices instead. This incident underscores the need for companies to establish robust verification processes when utilizing AI outputs.
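What might such a verification process look like in practice? As one illustration, fabricated citations of the kind that tripped up Deloitte can often be caught mechanically before a report ships. The minimal Python sketch below extracts DOI-like strings from a draft and checks each against the public Crossref REST API; the regex, the endpoint usage, and the pass/fail handling are simplifying assumptions, not a complete compliance pipeline.

```python
import re
import requests

# Matches DOI-like strings (e.g. 10.1000/xyz123); a simplifying
# assumption, not a complete DOI grammar.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")

def extract_dois(text: str) -> list[str]:
    """Pull DOI-like strings out of a draft's reference section."""
    return DOI_PATTERN.findall(text)

def doi_resolves(doi: str) -> bool:
    """Look the DOI up via the public Crossref REST API.

    A 404 strongly suggests the citation does not exist; network
    errors are treated as 'unverified' and flagged for human review.
    """
    try:
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200
    except requests.RequestException:
        return False

def audit_references(draft: str) -> dict[str, bool]:
    """Return a verdict for every DOI-like string found in the draft."""
    return {doi: doi_resolves(doi) for doi in extract_dois(draft)}

if __name__ == "__main__":
    draft = "... as shown by Smith et al., doi:10.1000/made.up.citation ..."
    for doi, ok in audit_references(draft).items():
        print(f"{'ok' if ok else 'FLAG'} {doi}")
```

A failed lookup does not prove a citation is fake (books and many reports have no DOI), so anything flagged still goes to a human reviewer; the point is to make “verify the references” a cheap, routine step rather than an afterthought.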

Establishing New Guidelines for Generative AI Use

As organizations increasingly integrate generative AI into their workflows, it becomes critical to develop new operational guidelines. The proposed laws for enterprise IT could include:

1. **IT Directors may not injure their enterprise employers by neglecting to verify generative AI outputs.**
2. **A model must comply with human orders unless it lacks reliable data, in which case it is required to declare, ‘I don’t know.’ Fabricating information without disclosure is a serious violation.** (A minimal sketch of a guardrail enforcing this rule follows the list.)
3. **IT Directors must safeguard their own positions by avoiding blind reliance on generative AI outputs. Neglecting this responsibility could lead to termination, legal repercussions, and potential exile to places like North Sentinel Island, where technology is prohibited.**
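Law two is the one most amenable to engineering. One common way to approximate it is retrieval-gated answering: the model may respond only when it can point to evidence from a trusted corpus, and otherwise must abstain. Below is a minimal Python sketch of that idea; the toy corpus, the keyword retrieval, and the `ask_model` placeholder are all illustrative assumptions, not any vendor’s API.

```python
# Toy stand-ins for real retrieval and a real model call; both are
# assumptions for illustration, not any particular vendor's API.
TRUSTED_CORPUS = {
    "refund policy": "Refunds are issued within 30 days of purchase.",
}

def retrieve_evidence(question: str) -> list[str]:
    """Return passages from the trusted corpus whose topic appears
    in the question; a real system would use proper search."""
    q = question.lower()
    return [passage for topic, passage in TRUSTED_CORPUS.items() if topic in q]

def ask_model(question: str, evidence: list[str]) -> str:
    """Placeholder for an LLM prompted to answer only from evidence."""
    return f"Per the source material: {evidence[0]}"

def guarded_answer(question: str) -> str:
    """Proposed law two, enforced: with no supporting evidence,
    say 'I don't know' instead of fabricating an authoritative answer."""
    evidence = retrieve_evidence(question)
    if not evidence:
        return "I don't know."
    return ask_model(question, evidence)

print(guarded_answer("What is the refund policy?"))      # grounded answer
print(guarded_answer("Who won the 1987 chess open?"))    # -> I don't know.
```

The design choice worth copying is not the keyword lookup but the gate itself: abstention is the default, and an answer is a privilege the evidence has to earn.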

These updated principles highlight the need for diligence in verifying AI-generated information. The promise of rapid returns on investment (ROI) from AI tools is tempting, but rigorous verification will inevitably eat into the efficiency gains executives are counting on.

As a journalist, I have spent much of my career dealing with unreliable sources, and managing AI-generated content is reminiscent of handling off-the-record information. Such sources may seem questionable, yet approached thoughtfully they can often lead to valuable insights.

For instance, during my tenure at a major city newspaper, I received a tip from a politically unreliable source regarding missing city resources. Despite my doubts, I followed the lead to a warehouse address, ultimately discovering that around 60,000 street signs were unaccounted for. This experience illustrates how to engage with generative AI outputs: don’t assume correctness, but use them as a springboard for further inquiries.

The Reliability of Generative AI

Generative AI can produce many correct answers alongside many incorrect ones, an inconsistency the hyperscalers promoting these technologies tend to gloss over. The phenomenon known as “hallucination” occurs when a large language model (LLM) lacks the training data to answer accurately and fills the gap with plausible-sounding fabrication.
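One cheap, if imperfect, way to surface likely hallucinations is a self-consistency check: ask the same question several times and see whether the answers agree. The Python sketch below uses a randomized stand-in for the model call (a demonstration assumption); fabricated specifics such as names, dates, and citations tend to drift between samples, while grounded answers stay stable.

```python
import random
from collections import Counter

def ask_model(question: str) -> str:
    """Stand-in for a real model call; the randomness mimics how
    fabricated details drift between samples (a demo assumption)."""
    return random.choice([
        "Roughly 60,000 street signs were unaccounted for.",
        "Roughly 6,000 street signs were unaccounted for.",
        "Roughly 60,000 street signs were unaccounted for.",
    ])

def consistency_score(question: str, samples: int = 5) -> tuple[str, float]:
    """Ask the same question several times and measure agreement.

    High agreement is not proof of truth, but low agreement is a
    strong signal that the answer needs human verification.
    """
    answers = [ask_model(question).strip().lower() for _ in range(samples)]
    top, count = Counter(answers).most_common(1)[0]
    return top, count / samples

answer, agreement = consistency_score("How many street signs were missing?")
print(f"{agreement:.0%} agreement on: {answer}")
```

Stable wrong answers slip through this filter, which is why it complements rather than replaces checking against a source.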

Moreover, even when data is reliable, it might be outdated, poorly translated, or contextually inappropriate. An answer relevant in one region, such as the United States, may not hold true in another, like Japan or France. Misinterpretation of user queries compounds the challenge of ensuring accuracy.

To use generative AI effectively, it is essential to divide requests into two distinct types: informational and action-oriented. Informational requests seek answers or recommendations, while action requests produce code or content. The latter requires a higher level of scrutiny, because its output tends to flow directly into products and deliverables.
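That two-type split can be made operational with even a crude router that assigns each request a scrutiny level before any output is trusted. The Python sketch below uses keyword matching as a deliberately simple stand-in for however a real team would classify requests; the marker list and review policies are assumptions, not a standard.

```python
from enum import Enum

class RequestKind(Enum):
    INFORMATIONAL = "informational"  # answers, recommendations
    ACTION = "action"                # code, documents, generated content

# Crude keyword routing; a stand-in for a real classifier (assumption).
ACTION_MARKERS = ("write", "generate", "create", "draft", "code", "implement")

def classify(request: str) -> RequestKind:
    lowered = request.lower()
    if any(marker in lowered for marker in ACTION_MARKERS):
        return RequestKind.ACTION
    return RequestKind.INFORMATIONAL

def required_review(kind: RequestKind) -> str:
    """Action outputs ship into products and reports, so they get
    the heavier gate before anything goes out the door."""
    if kind is RequestKind.ACTION:
        return "tests plus human review of the generated artifact"
    return "spot-check against a trusted source"

for request in ("What is our refund policy?", "Write a migration script"):
    kind = classify(request)
    print(f"{request!r}: {kind.value} -> {required_review(kind)}")
```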

While thorough due diligence may temper the perceived ROI of AI, any returns claimed without such verification were likely illusory in the first place. As companies navigate the complexities of generative AI, clear guidelines and strong verification practices will be essential for mitigating risks and capturing the technology’s real benefits.
