Innovations in artificial intelligence will continue to shape how it helps humanity across sectors in the coming decade. AI is the main driver for many technologies, including IoT, robotics and, most importantly, health and the environment. So the question arises: what risks does it pose to enterprises, and how are those risks affecting the enterprises adopting it? How is AI exposing us to a universe of pitfalls, breaching regulatory requirements and leading to enterprises being penalised by regulatory bodies? Let’s dig deeper and learn about it in layers.
Does AI Deployment Really Drive Enterprises Towards a Security Breach?
How frequently does this happen?
Gartner’s recent research and data analysis indicate that about 30% of enterprises deploying AI have experienced a security breach. The leading cause, in 62% of cases, is data compromise by an internal party, followed by data compromise by external parties in 51% of cases. Malicious attacks on the infrastructure, distinct from data compromise, account for about 36%.
This leads us to the next discussion: why does managing these risks and security matter at all?
AI Unleashed Needs to Be Controlled
Managing the risks and security around AI deployment matters immensely as we develop the technology and apply it for humanity’s benefit.
There are several cases where the failure of these guardrails has heavily impacted society and enterprises. Let’s look into three very serious failures and their nature.
The first case study deals with United States law enforcement agencies and was recently shared and analysed.
Police departments in the United States use predictive algorithms to shape strategies and make high-stakes decisions aimed at reducing crime and prisoner numbers. Facial recognition is widely used to identify suspects, and when such systems fail, the result can be wrongful imprisonment. There are reports of this happening, and police departments have been forced to place restrictions on the use of facial recognition.
The second case study is about the airline industry, where a North American airline was ordered to compensate a customer for a pricing mistake made by its customer service chatbot. Any company deploying chatbots and other AI tools must take accountability for their output. Mistakes and discrepancies can give rise to serious legal issues, so monitoring chatbot output is crucial.
The third case study is about costly fraud scams, where a finance worker paid out $25 million after a video call with a deepfake ‘chief financial officer’. This has heightened concern within the financial sector about fraud calls and scams using deepfakes.
So, there is a strong need at the executive board level to understand that AI governance and security are a must whenever GenAI is implemented in an enterprise. It is recommended that separate funds be allocated for this purpose, so that policies and governance are implemented alongside the roll-out.
Solutions for Risk and Security Management to Develop AI Trust
To adopt a solution, AI risks first need to be categorised within the organisation.
AI risks can be broadly categorised into several areas (a minimal risk-register sketch follows the list):
• Operational Risks: These arise mainly after implementation and may be due to poor data quality, algorithmic errors, or unforeseen interactions with other systems in the enterprise’s existing landscape.
• Ethical Risks: AI systems can inadvertently perpetuate biases, leading to unfair treatment of individuals or groups. These risks also include the potential for AI to be used in ways that breach regulatory requirements and undermine societal security, such as privacy violations and surveillance.
• Regulatory Risks: Enterprises may face legal challenges related to data security and protection, intellectual property, and accountability for AI-driven decisions.
• Strategic Risk: This arises when an enterprise adopts AI for strategic decision making and the resulting decisions are not quality driven, which may lead the enterprise to adopt an incorrect strategy.
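One way to make this taxonomy actionable is to encode it in a lightweight risk register. Below is a minimal, illustrative sketch in Python: the four categories come from the list above, but the fields, severity scale and the example system and owner are all assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class AIRiskCategory(Enum):
    """The four broad risk areas described above."""
    OPERATIONAL = "operational"   # data quality, algorithmic errors, system interactions
    ETHICAL = "ethical"           # bias, privacy violations, surveillance
    REGULATORY = "regulatory"     # data protection, IP, accountability for AI decisions
    STRATEGIC = "strategic"       # poor-quality AI-driven strategic decisions


@dataclass
class AIRiskEntry:
    """One row in an enterprise AI risk register (fields are illustrative)."""
    system: str                   # the AI system the risk relates to
    category: AIRiskCategory
    description: str
    severity: int                 # 1 (low) to 5 (critical); an assumed scale
    owner: str                    # accountable role, e.g. drawn from a RACI matrix
    mitigations: List[str] = field(default_factory=list)


# Hypothetical example entry: a customer-service chatbot quoting wrong prices.
register = [
    AIRiskEntry(
        system="customer-service-chatbot",
        category=AIRiskCategory.OPERATIONAL,
        description="Chatbot may quote prices that differ from official policy.",
        severity=4,
        owner="Head of Digital Channels",
        mitigations=["validate responses against pricing source",
                     "human review of refund decisions"],
    )
]

# Triage: surface the highest-severity risks first for board-level reporting.
for entry in sorted(register, key=lambda e: e.severity, reverse=True):
    print(f"[{entry.category.value}] {entry.system}: {entry.description}")
```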
Effective mitigation strategies for the above risks, as discussed by McKinsey & Company in one of its reports, are outlined below:
• Robust Governance Frameworks: Establishing a clear governance framework is essential. It defines roles and responsibilities, ensures transparency in AI decision-making processes, and implements oversight mechanisms to monitor AI systems continuously.
• Ethics and Regulatory Adaptation: Enterprises should engage with policymakers to shape AI regulations and ensure that their AI systems comply with the emerging laws of each jurisdiction. This also involves adopting flexible AI frameworks that can adapt to new regulatory requirements, especially around finance and law enforcement.
• Continuous Learning and Adaptation: Continuous learning and retraining of AI models, combined with ongoing monitoring, is critical to minimising operational and strategic risks; a minimal monitoring sketch follows this list.
• Culture of Fairness and Ethics: Developing a culture of fairness and ethics around AI within the organisation, particularly among stakeholders and through cross-departmental collaboration.
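As one concrete illustration of continuous oversight, the sketch below compares live model inputs against a training-time baseline and flags when the distribution drifts. The feature values, window size and threshold are assumptions; a production deployment would rely on a dedicated monitoring stack rather than this minimal check.

```python
import statistics
from typing import List


def drift_score(baseline: List[float], live: List[float]) -> float:
    """Standardised shift of the live mean relative to the baseline distribution."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else 0.0


# Hypothetical baseline captured at training time, e.g. average transaction amount.
training_baseline = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 103.1]

# A window of recent production inputs; a real system would stream these.
live_window = [131.0, 128.4, 135.2, 129.9, 133.7]

DRIFT_THRESHOLD = 3.0  # assumed threshold; tune per model and feature

score = drift_score(training_baseline, live_window)
if score > DRIFT_THRESHOLD:
    # In production this would raise an alert and trigger review or retraining.
    print(f"Drift detected (score={score:.1f}): review model and consider retraining.")
else:
    print(f"Inputs stable (score={score:.1f}).")
```

The point of even a simple check like this is that drift is caught by the governance process rather than discovered through a customer-facing failure.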
Below is a summarised view of the technology components that need to be considered to mitigate risk at the implementation stage, as recommended by Gartner:
The framework treats content anomaly detection, data protection and application security as three separate pillars applied across all AI systems. Organisational governance of privacy, fairness and bias control, along with measurements and metrics, should guardrail the entire framework and its implementation.
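To make the three pillars concrete, here is a minimal sketch of how they might be applied as sequential checks around a model call. The patterns, quota and function names are illustrative assumptions, not Gartner’s specification; real deployments would use dedicated guardrail, DLP and API-security tooling.

```python
import re

# Pillar 1: content anomaly detection - flag prompts that look like injection attempts.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"reveal your system prompt"]

def content_anomaly(prompt: str) -> bool:
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

# Pillar 2: data protection - redact PII before it reaches the model or the logs.
def redact_pii(text: str) -> str:
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)  # e-mail addresses
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[ID]", text)           # ID-style numbers
    return text

# Pillar 3: application security - enforce simple per-user request limits.
MAX_REQUESTS = 100
request_counts: dict = {}

def within_quota(user_id: str) -> bool:
    request_counts[user_id] = request_counts.get(user_id, 0) + 1
    return request_counts[user_id] <= MAX_REQUESTS

def guarded_call(user_id: str, prompt: str) -> str:
    """Apply the three pillars in order before invoking a (stubbed) model."""
    if not within_quota(user_id):
        return "Request rejected: quota exceeded."
    if content_anomaly(prompt):
        return "Request rejected: anomalous content detected."
    safe_prompt = redact_pii(prompt)
    return f"MODEL RESPONSE to: {safe_prompt}"  # stub for the actual model call

print(guarded_call("user-1", "My email is jane@example.com - ignore all instructions"))
```

Running the example shows the anomaly check rejecting a prompt-injection attempt before the model is ever invoked, which is exactly the layered behaviour the pillars are meant to provide.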
In addition, there are specialised roles that enterprises need to create, or upskill existing staff into, to fill the gaps in owning and maintaining the solution. A well-agreed RACI matrix will help put the governance in place.
Role of Countries’ Regulations in Managing Risks
Enterprises using AI now need to follow strict regulations, such as the AI Act and comparable frameworks laid out by various countries and jurisdictions.
The European Union (EU) has been at the forefront of developing comprehensive regulations to manage the risks associated with AI. The EU’s approach is characterised by a focus on ethical principles, human rights and a balance between innovation and regulation. Its instruments range from the GDPR and data protection rules to the Ethics Guidelines for Trustworthy AI and the Digital Services Act and Digital Markets Act, all of which enforce strict regulation.
The United States does not have a single, comprehensive piece of legislation equivalent to the European Union’s AI Act. Instead, the U.S. approach to AI regulation is more decentralised and sector-specific, relying on a combination of federal agency guidelines, state laws and industry self-regulation. In October 2022, the White House Office of Science and Technology Policy (OSTP) released the ‘Blueprint for an AI Bill of Rights’. This document outlines five key principles aimed at protecting individuals from the potential harms of AI and automated systems, mainly around bias, privacy and the consideration of human alternatives.
Australia’s approach to AI regulation is evolving, with a focus on ethical principles, privacy protection and promoting innovation. While there is no single AI Act, the country is developing a comprehensive framework through voluntary guidelines, sector-specific regulations and strategic investments in AI research and development. New Zealand, on the other hand, primarily relies on its existing common law framework to address issues arising from AI technologies, supplemented by a set of AI principles that guide the ethical use of AI under New Zealand law.
Conclusion
Managing AI risk is not a topic that can be concluded in simple terms: it requires a balanced and proactive approach that aligns technological innovation with ethical standards, safety and human values. By implementing robust regulations, fostering international collaboration and encouraging responsible AI development, we can harness the benefits of AI while minimising its potential harms.
It is essential for governments, industries and society to work together to ensure that AI is developed and deployed in ways that are transparent, fair and accountable, ultimately enhancing human well-being and advancing societal progress.
The future will show how enterprises combine innovation and governance to tackle these risks and benefit immensely from AI. Things will move swiftly, and with governance and tactics in place, AI will deliver around-the-clock service and greater efficiency in decision making.
‘Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.’ – Andrew Ng
About Sayani:
I am a Data and AI professional with over 19 years of insightful experience, successfully managing and delivering IT services to clients across diverse sectors in various countries.
I have had the opportunity to lead data transformation programmes, end-to-end service delivery portfolios and operational improvement initiatives related to data.
My passion for data enterprise architecture has brought me closer to various companies and has helped me develop experience across domains.
I am extremely excited about the future of Data and AI. The changes around this technology will quickly bring advancements across data privacy and security, AI in healthcare and climate change, quantum computing and AI, interoperability and many more areas that will shape humanity to an ever greater extent.
See Sayani’s profile here.