Key Insights
Key takeaways from the "Proactive risk management in Generative AI" report published by KPMG
CFTE has summarised KPMG's report "Proactive risk management in Generative AI", which examines why the rapidly evolving field of Generative AI demands a proactive approach to risk management.
Key Aspects
- The report underscores Generative AI’s distinct nature compared to other AI technologies, particularly in its ability to create data that mimics real human artifacts.
Table of Contents
- Managing Hallucinations and Misinformation
- The Matter of Attribution
- Real Transparency and Broad User Explainability
- Accountability on the Road Ahead
Key Findings and Insights
This report will give you an insight into:
- Managing Hallucinations and Misinformation: Generative AI, particularly when built on large language models, can produce outputs that are coherent and grammatically convincing yet factually inaccurate or entirely false. This phenomenon, known as "hallucination," presents a significant risk, as the AI can confidently generate outputs with invented references or sources. This gap between coherence and factual validity raises critical concerns about the reliability of Generative AI outputs and underscores the need for thorough validation processes.
- Attribution and Intellectual Property Challenges: The report highlights the complications surrounding attribution and copyright when using Generative AI. Since these AI models generate outputs based on vast datasets sourced from real-world information, they can inadvertently create content that breaches copyright or attribution norms. This poses legal and ethical challenges for businesses utilizing AI-generated content, as they might inadvertently engage in plagiarism or violate copyright laws, leading to potential legal disputes.
- Transparency and Explainability for End-Users: Given that many end-users of Generative AI may lack a technical understanding of its workings, the report emphasizes the importance of transparency and explainability. It argues that non-technical explanations of Generative AI’s limits, capabilities, and associated risks are crucial. This approach can help foster an enterprise culture of continuous learning and informed decision-making, enabling users to better understand and trust the tools they are using.
- Accountability in Generative AI Usage: The report cautions that as Generative AI becomes increasingly capable of mimicking human creativity, the responsibility for its outputs rests with humans, not the AI models themselves. It notes the potential societal impacts of Generative AI, including workforce displacement and legal issues. The report stresses the importance of maintaining human oversight ("keeping the human in the loop") and developing methods to tie Generative AI products and their outcomes to ethical and accountable practices.