Managing AI Risks amid Evolving Regulation: Build on what you already have

Abstract
The article discusses the prevalence of risk-based approaches in the AI regulation being formed by different nations. At an organisational level, it highlights the importance of aligning AI usage with an organisation's risk appetite statements (RAS). It underscores how RAS, covering financial, operational, reputational, environmental, health and safety, and social responsibility risks, provide an existing foundation for managing AI-related risks. While global AI regulation continues to evolve, organisations can proactively implement tailored risk controls based on their RAS. By adopting a flexible risk-based approach, organisations can navigate the uncertainties surrounding AI usage effectively, ensuring sound corporate governance. The article concludes with a Chinese proverb highlighting the urgency of implementing AI operating controls.

In global AI regulation, a prevalent feature is the adoption of a risk-based framework. This methodology evaluates AI applications based on their societal risk, applying stringent controls to high-risk uses while allowing greater flexibility for low-risk endeavours. While ‘risk’ often suggests a focus on avoiding negative outcomes, potentially limiting innovation, it fundamentally represents the impact of uncertainty on goals, encompassing both favourable and unfavourable effects. 

The practice of risk management across all organisational levels, from the boardroom through to daily operations, is well established worldwide. The introduction in 2009 of ISO 31000, the first international standard for risk management, underscores this. Today, it is estimated that over 100,000 organisations in more than 190 countries have integrated ISO 31000 into their corporate governance frameworks.

Given this, your organisation's board and executive should have well-formed risk appetite statements (RAS) that define the amount and type of risk the organisation is willing to pursue. Below are the typical categories, each with an example RAS of the kind set by a board:

Financial Risk  

This relates to an organisation's willingness to accept financial exposure. It encompasses aspects like investment decisions, debt management and liquidity risk.

‘We are willing to accept a moderate level of financial risk to achieve our growth targets, provided that the potential return on investment exceeds the cost of capital by at least 15%. We do not accept risks that could harm our liquidity position or result in a downgrade of our credit rating.’ 

Operational Risk 

Focuses on risks related to day-to-day operations. Examples include supply chain disruptions, process failures and technology risks. 

‘Our organisation prioritises operational continuity and customer satisfaction above rapid expansion. We accept moderate operational risks that can be effectively mitigated through our existing controls. We will avoid risks that could lead to significant disruptions to our services or compromise customer data privacy.’

Reputational Risk 

Deals with protecting an organisation’s reputation. It involves managing risks that could harm public perception, trust or brand image. 

‘Our reputation directly impacts client trust. We maintain a moderate risk appetite, balancing innovation with stability. We actively manage risks related to data security, compliance, and customer satisfaction.’ 

Environmental Risk 

Addresses the impact of an organisation’s activities on the environment. This category includes sustainability, pollution and resource management. 

‘As a responsible organisation, we have a low appetite for environmental risk. Any adverse effects on the environment from our or our suppliers’ activities are carefully managed. We monitor our impact through regular reporting, environmental management plans, and sustainable practices.’ 

Health and Safety Risk 

Pertains to the safety and well-being of individuals within the organisation. It includes workplace safety, occupational health and public health considerations. 

‘At our organisation, we prioritise the well-being of our employees, visitors, and the community. We have a low tolerance for incidents that could cause harm, and our goal is zero accidents. We have zero appetite for non-compliance with safety regulations.’

Social Responsibility Risk 

Involves considerations related to ethical behaviour, social responsibility and community impact. 

‘We have a high risk appetite for community engagement. We actively seek opportunities to collaborate with local communities, support education, and contribute to social well-being. Our goal is to foster meaningful relationships and address community needs.’

One thing you will notice from these risk categories and RAS is that AI is not specifically mentioned. They do, however, provide themes for how AI can be used: it can unlock opportunities in areas of moderate-to-high risk appetite, while in areas of low-to-zero risk appetite its use needs to be carefully tested, controlled and monitored. This existing foundation can be built upon to implement a host of targeted risk controls for AI, as well as to identify focus areas where AI can unlock new opportunities.

That means you don't need to wait for all of the different nations to agree on an idealised global AI regulation: you can take the lead yourself and align your AI operating controls to your organisation's RAS. This is the essence of a risk-based approach. As regulation evolves, you can adjust your approach to fit.
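
As a loose illustration of that alignment, the sketch below compares a hypothetical AI use case's assessed risk against the board's stated appetite for the relevant RAS category, and returns a control posture. All category names, appetite levels and use cases here are invented for illustration; they are not drawn from any particular standard, regulation or framework.

```python
# Hypothetical sketch: checking an AI use case against board-approved
# risk appetite per RAS category. Names and levels are illustrative only.

# Appetite per RAS category, echoing the example statements above
RISK_APPETITE = {
    "financial": "moderate",
    "operational": "moderate",
    "reputational": "moderate",
    "environmental": "low",
    "health_safety": "low",
    "social_responsibility": "high",
}

# Ordinal ranking so appetite levels can be compared
APPETITE_RANK = {"zero": 0, "low": 1, "moderate": 2, "high": 3}


def required_controls(use_case: str, category: str, assessed_risk: str) -> str:
    """Compare the use case's assessed risk with the appetite for its
    RAS category and return the resulting control posture."""
    appetite = RISK_APPETITE[category]
    if APPETITE_RANK[assessed_risk] <= APPETITE_RANK[appetite]:
        return f"{use_case}: within appetite ({category}) - standard controls"
    return (f"{use_case}: exceeds appetite ({category}) - "
            "enhanced testing, monitoring and sign-off")


print(required_controls("customer FAQ chatbot", "operational", "low"))
print(required_controls("AI-assisted safety inspections", "health_safety", "moderate"))
```

The point of the sketch is only that the comparison logic already exists in your RAS: once appetite levels are stated per category, an AI use case can be triaged against them without waiting for external regulation.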

To help you unpack this, the diagram below shows the relationship between the corporate governance themes your board will seek assurance on and the areas your AI usage management procedures and guidelines can target. It also shows hot-spots that different countries consistently flag in their draft AI regulations, which warrant closer attention.

It’s never too late to start putting in place your operating controls for AI based on your organisation’s risk appetite statements.

As the famous Chinese Proverb says: 

‘The best time to plant a tree was 20 years ago. The second-best time is now.’ 

Cameron Towt, MAICD, is the founder of ioethic. He brings almost 25 years of experience in data management and advanced analytics, specialising in the governance of data and AI to drive societal benefits.

See Cameron’s profile here.
