AI Implementation Risk Chapter
The AI Implementation Risk Chapter focuses on the practical risks organisations face when deploying artificial intelligence systems into real-world business, operational, and human workflows.
The chapter aims to help risk professionals, business leaders, and practitioners understand how to evaluate AI use cases, implement appropriate governance controls, manage human-AI interaction risks, and ensure that AI systems are adopted responsibly, safely, and effectively.
As organisations increasingly explore AI tools, large language models, automation systems, and AI-enabled decision support, they must consider not only the technical capabilities of these systems, but also their operational, governance, legal, reputational, and human risks.
Poorly implemented AI systems may create risks such as overreliance on automated outputs, unclear accountability, inaccurate or biased recommendations, data exposure, weak documentation, inadequate human oversight, and insufficient internal controls.
This chapter therefore centres on practical AI risk management for organisational adoption. Key areas include AI use-case selection, implementation and deployment risk, workflow integration, human oversight, accountability structures, model performance risk, vendor and data risks, workforce adoption, documentation, auditability, and long-term monitoring after deployment.
Through this chapter, RIMAS can provide members with a practical platform to discuss how AI can be implemented responsibly across organisations while balancing innovation, governance, risk management, and business resilience.

Chapter Leaders

Chairman, AI Implementation Risk Committee
Mr. Jackson Chai
Founder, IntegratedAI Pte. Ltd.
Jackson Chai is a professional with experience across artificial intelligence, software development, analytics, corporate tax, risk management, and policy research. He is the founder of IntegratedAI Pte. Ltd., a Singapore-based company focused on practical AI implementation, human-computer interaction, and policy research.
Jackson is currently pursuing doctoral research in Computer Science, with a focus on human-AI systems, large language model tooling, and structured document generation. His work examines how AI systems can be designed, implemented, and governed in ways that are practical, reliable, and usable for organisations.
He is also part of a national policy research team working on the employment and employability pillar, with particular attention to the impact of AI on jobs, workforce readiness, and organisational adoption. This work complements his broader interest in how AI can be adopted responsibly across industry, policy, and society.
Prior to his current work in AI, software systems, and policy research, Jackson gained professional experience at KPMG Singapore, where he focused on corporate tax compliance, planning, and advisory. This background provides him with a strong foundation in regulatory interpretation, governance, professional services, and organisational risk management.
His technical expertise includes Python, R, JavaScript, TypeScript, React, Next.js, database design, cloud infrastructure, and the implementation of AI-enabled applications using large language models. He has hands-on experience building software systems, integrating APIs, designing data workflows, and developing AI solutions for practical business use cases.
Jackson’s current focus is on helping organisations understand and manage the risks involved in adopting AI systems, including implementation risk, governance risk, human-AI interaction risk, operational risk, workforce risk, and responsible deployment.
