Key takeaways from "The flip side of generative AI: Challenges and risks around responsible use" report published by KPMG
CFTE summarised “The flip side of generative AI: Challenges and risks around responsible use” report by KPMG. The report looks at the risk management challenges raised by generative AI and underscores the importance for organisations of identifying these risks early, forming governance constructs to adopt generative AI responsibly, and understanding how these risks may affect trust in the use of generative AI.
Key Aspects
- The report examines the potential and challenges of generative artificial intelligence (AI) in business contexts, particularly highlighting the risks and ethical considerations associated with its use.
- It also discusses how generative AI enables quick and cost-effective content creation but brings significant risks such as IP theft, fraud, and reputational damage.
Table of Contents
- Introduction: Potential of Generative AI in Business
- Risks from Within: Intellectual Property and Employee Misuse
- Inaccurate Results and External Risks
- Steps for Building Responsible AI Governance
Key Findings and Insights
This report will give you an insight into:
- Intellectual Property Risks:
Generative AI technologies use neural networks trained on large data sets, which can inadvertently expose private or proprietary information. The report warns that business data entered into these AI systems could be accessed by others, increasing the risk of IP theft or fraud. An example cited is Amazon’s caution to its employees about sharing code with ChatGPT, reflecting concerns over proprietary information being used in AI training and output (a simple illustration of this kind of safeguard appears after this list).
- Employee Misuse:
The technology's efficiency could tempt employees to misuse it, posing ethical and compliance challenges. Examples include using AI to automate tasks that require ethical and compliance judgement, such as legal reviews, or passing off AI-generated work as their own.
- Inaccurate Results and Reputational Risks:
Generative AI may produce inaccurate outcomes or perpetuate societal biases, leading to potential business failures, legal issues, and reputational damage. This highlights the importance of continuously monitoring and training AI systems to align them with expectations and to mitigate risks of misinformation and bias.
- External Risks:
Generative AI can be used maliciously to create deepfakes or for cybercrimes such as sophisticated phishing scams. These risks underscore the challenge of detecting and mitigating the impacts of such misuse, which can be harder to trace given the advanced capabilities of generative AI.
- Responsible AI Governance:
The report suggests steps for developing AI governance to address risks and ethical implications, improve digital literacy, and enforce AI standards. It recommends evaluating critical questions across multiple functions within an organisation, involving risk management, compliance, legal, public affairs, and regulatory affairs.
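To make the intellectual property point above more concrete, here is a minimal sketch, not taken from the KPMG report, of a pre-submission guardrail that screens prompts for markers of proprietary data before they are sent to an external generative AI service. The pattern names, regular expressions, and the `screen_prompt` function are illustrative assumptions; a real deployment would rely on an organisation's own data classification rules.

```python
import re

# Hypothetical markers an organisation might treat as proprietary.
# These patterns are illustrative assumptions, not guidance from the KPMG report.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_codename": re.compile(r"\bPROJECT-[A-Z]{3,}\b"),
    "customer_id": re.compile(r"\bCUST-\d{6,}\b"),
}


def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Check a prompt before it is sent to an external generative AI service.

    Returns (allowed, findings); any finding blocks the prompt.
    """
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)


if __name__ == "__main__":
    allowed, findings = screen_prompt(
        "Summarise the roadmap for PROJECT-ATLAS and customer CUST-0012345."
    )
    print("allowed:", allowed)    # allowed: False
    print("flagged:", findings)   # flagged: ['internal_codename', 'customer_id']
```

In practice, a filter like this would sit alongside the broader governance measures the report describes, such as logging, employee training, and human review, rather than acting as the sole control.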