
What to do when AI behaves badly

Artificial Intelligence (AI) is fast developing into a ubiquitous technology, with applications across all aspects of business and society.

Just over a year ago we wrote these words as the opening paragraph for this article. As we write now, in the summer of 2023, they seem even more prescient, given the rapid advances in generative AI that have sparked the imagination of so many.

Fundamentally, AI is so powerful because it is a general-purpose technology, not bound to any given context or industry. As its use becomes ever more prevalent, there is the potential for an accompanying rise in cases where its application is implicated in ethical scandals.

The research we published below in 2022 on AI ethical failures, and on a governance tool to safeguard against the misuse of AI, is as relevant now in the age of the ‘prompt’ as it was when we began this work.

The upcoming EU regulation for AI systems, the 'AI Act', is about to pass into law. One of its key aims is to prevent the kinds of ethical failures we have identified. While most AI systems will be considered low-risk, and thus face no mandatory requirements, high-risk systems will have to undergo a conformity assessment to demonstrate they are safe to operate.

Businesses will generally see regulation as a burden, since it increases transaction costs and slows development. But AI is a powerful technology, and failing to ensure that it is trustworthy carries strategic risks, both in terms of reputational damage and regulatory fines.


Our original article from April 2022 continues below:  

A prominent and well-known example of the ethical misuse of AI is the 2018 Cambridge Analytica scandal that plunged Facebook into crisis. But behind the Facebook scandal lies an increasingly prominent phenomenon: as more companies adopt AI to increase the efficiency and effectiveness of their products and services, they expose themselves to new and potentially damaging controversy associated with its use. When AI systems violate social norms and values, organisations are at great risk, as single events have the potential to cause lasting damage to their reputation.

To fully achieve the promise of AI, it is essential to better understand the ethical problems of AI and to prevent such problems from happening. This is where our research comes in. We aimed to answer two questions: first, what are the ethical problems associated with AI? And second, how can we prevent or mitigate them?

Identifying AI ethical failure: privacy, bias, explainability 

In our research, we collected and analysed 106 cases involving AI controversy, identifying the root causes of stakeholder concerns and the reputational issues that arose. We then reviewed the organisational response strategies with a view to setting out three steps organisations should take in response to an AI failure in order to safeguard their reputation.

1) The most common reputational impact from AI ethical failure derives from intrusion of privacy, which accounts for half of our cases. There are two related, yet distinct, failures embedded here: consent to use the data at all, and consent to use the data for the intended purpose. For example, DeepMind, the AI company acquired by Google, accessed data from 1.6 million patients in a London hospital trust to develop its healthcare app, Streams. However, neither the trust nor DeepMind had explicitly told those patients that their information would be used to develop the app.

2) The second most common reputational impact of AI ethical failure is algorithmic bias, which accounts for 30% of our cases. It refers to reaching predictions that systematically disadvantage (or even exclude) one group, for example on the basis of personal identifiers such as race, gender, sexual orientation, age or socio-economic background. Biased AI predictions can become a significant threat to fairness in society, especially when attached to institutional decision-making. For example, the Apple Card, launched in 2019, was found to offer larger credit lines to men than to women, with, in one reported case, a male tech entrepreneur being given a credit limit twenty times that of his wife despite her having the higher credit score. (A simple illustration of how such a disparity can be checked follows this list.)

3) The third reputational impact of AI failure arises from the problem of explainability, which accounts for 14% of our cases. Here AI is often described as a ‘black box’: people cannot explain how the algorithm arrived at its decision. The criticisms, or concerns, stem from the fact that people are usually only informed of the final decisions made by AI, whether that be loan grants, university admissions or insurance prices, and have no idea how or why those decisions were made. Key examples include embedding AI in medical image analysis, as well as using AI to guide autonomous vehicles. The ability to understand the decisions that these AI systems make is under increasing scrutiny, especially when ethical trade-offs are involved.
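
For readers who want a concrete sense of what a ‘systematic disadvantage’ looks like in the data, the short Python sketch below applies one simple fairness check, the disparate-impact ratio, to hypothetical credit decisions. The data, the function names and the 0.8 threshold are illustrative assumptions of ours; they are not drawn from the cases or methods described in this research.

```python
# Minimal illustration of a disparate-impact check on model decisions.
# All data, names and thresholds here are hypothetical.

def disparate_impact(decisions: list[tuple[str, bool]], group_a: str, group_b: str) -> float:
    """Ratio of approval rates: group_a relative to group_b."""
    def approval_rate(group: str) -> float:
        outcomes = [approved for g, approved in decisions if g == group]
        return sum(outcomes) / len(outcomes)
    return approval_rate(group_a) / approval_rate(group_b)

# Hypothetical credit decisions: (applicant group, approved?)
decisions = [
    ("women", True), ("women", False), ("women", False), ("women", False),
    ("men", True), ("men", True), ("men", True), ("men", False),
]

ratio = disparate_impact(decisions, "women", "men")
print(f"Disparate-impact ratio (women vs men): {ratio:.2f}")

# A common rule of thumb (the 'four-fifths rule') flags ratios below 0.8
# as potential evidence of adverse impact.
if ratio < 0.8:
    print("Approval rates differ enough to warrant investigation.")
```

On these made-up numbers the ratio is roughly 0.33, well below the threshold, signalling the kind of disparity that warrants investigation.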

Looking across all 106 AI failure cases, the common theme is the integrity of the data used by the AI system. AI systems work best when they have access to lots of data. Organisations face significant temptations to acquire and use all the data they have access to, irrespective of whether users have consented (‘data creep’), or to neglect the fact that customers have not given explicit consent for their data to be used for the purpose at hand (‘scope creep’). In both cases, the firm violates the privacy rights of the customer, either by using data it had not been given consent to use in the first place, or by using it for a purpose that consent did not cover.
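
As a rough illustration of the distinction between the two (and not something drawn from the research itself), the sketch below checks a hypothetical consent register before data is used: no record at all corresponds to ‘data creep’, while a record that does not cover the purpose at hand corresponds to ‘scope creep’.

```python
# Illustrative only: distinguishing 'data creep' from 'scope creep'
# using a hypothetical consent register. All names are assumptions.

# Maps a user ID to the purposes they have explicitly consented to.
consent_register: dict[str, set[str]] = {
    "user-001": {"direct care"},              # consented, but only for care
    "user-002": {"direct care", "research"},  # consented to both purposes
}


def check_use(user_id: str, purpose: str) -> str:
    consented_purposes = consent_register.get(user_id)
    if consented_purposes is None:
        return "data creep: no consent to use this user's data at all"
    if purpose not in consented_purposes:
        return "scope creep: consent given, but not for this purpose"
    return "ok: use is covered by explicit consent"


print(check_use("user-003", "research"))   # data creep
print(check_use("user-001", "research"))   # scope creep
print(check_use("user-002", "research"))   # ok
```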

""

Defining the solution for AI ethical failure: Introducing capAI 

Having identified and quantified AI ethical failure, we then developed an ethics-based audit procedure intended to help avoid these failures in the future. We called it capAI (conformity assessment procedure for AI systems), in response to the draft EU Artificial Intelligence Act (AIA), which explicitly sets out a conformity assessment mandate for AI systems.

AI in its many varieties is meant to benefit humanity and the environment. It is an extremely powerful technology, but it can be risky.

CapAI is a governance tool that ensures and demonstrates that the development and operation of an AI system are trustworthy. We define trustworthy as being legally compliant, technically robust and ethically sound.

Specifically, capAI adopts a process view of AI systems by defining and reviewing current practices across the five stages of the AI life cycle: design, development, evaluation, operation and retirement. CapAI enables technology providers and users to carry out an ethical assessment at each stage of the life cycle and to check adherence to the core requirements for AI systems set out in the AIA.
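
Purely as an illustrative sketch of what such a process view could look like if encoded in software, the Python fragment below models the five life-cycle stages and a per-stage review record. The class and field names are our own assumptions for illustration and do not reproduce the published capAI checklists.

```python
# Illustrative sketch only: one possible way to represent a per-stage
# review across the five life-cycle stages named in the article.
# Names and fields are assumptions, not the published capAI schema.
from dataclasses import dataclass, field
from enum import Enum


class LifecycleStage(Enum):
    DESIGN = "design"
    DEVELOPMENT = "development"
    EVALUATION = "evaluation"
    OPERATION = "operation"
    RETIREMENT = "retirement"


@dataclass
class StageReview:
    stage: LifecycleStage
    practices_reviewed: list[str] = field(default_factory=list)
    issues_found: list[str] = field(default_factory=list)

    @property
    def passed(self) -> bool:
        # A stage passes only if the review recorded no open issues.
        return not self.issues_found


# Example: recording a review of the evaluation stage.
review = StageReview(
    stage=LifecycleStage.EVALUATION,
    practices_reviewed=["bias testing across user groups", "documented test data provenance"],
    issues_found=["no documented consent for one training data source"],
)
print(review.stage.value, "passed:", review.passed)
```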


At the end of the procedure, capAI can produce three types of assessment output.

1) An internal review protocol (IRP), which provides organisations with a tool for quality assurance and risk management. The IRP fulfils the compliance requirements for a conformity assessment and technical documentation under the AIA. It follows the development stages of the AI system's life cycle and assesses the organisation's awareness, performance and resources in place to prevent, respond to and rectify potential failures. The IRP is designed as a document with restricted access. However, like accounting data, it may be disclosed in a legal context to support business-to-business contractual arrangements or as evidence when responding to legal challenges related to the audited AI system.

2) A summary datasheet (SDS) to be submitted to the EU’s future public database on high-risk AI systems in operation. The SDS is a high-level summary of the AI system’s purpose, functionality and performance that fulfils the public registration requirements, as stated in the AIA.  

3) An external scorecard (ESC), which can optionally be made available to customers and other stakeholders of the AI system. The ESC is generated through the IRP and summarises relevant information about the AI system along four key dimensions: purpose, values, data and governance (one possible structure is sketched below). It is a public reference document that should be made available to all counterparties concerned.
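
To make the shape of this output more tangible, the sketch below shows one hypothetical way an external scorecard record might be structured around the four dimensions named above. The field names and example entries are illustrative assumptions only; the AIA and the capAI procedure define the actual required content.

```python
# Hypothetical structure for an external scorecard (ESC) record, built
# around the four dimensions named in the article: purpose, values,
# data and governance. Field names and examples are assumptions only.
from dataclasses import dataclass


@dataclass
class ExternalScorecard:
    system_name: str
    purpose: str      # what the AI system is for and who it affects
    values: str       # the ethical values and constraints it commits to
    data: str         # what data it uses and on what consent basis
    governance: str   # who is accountable and how failures are handled


scorecard = ExternalScorecard(
    system_name="Example credit-scoring model",
    purpose="Recommend credit limits for retail applicants",
    values="Non-discrimination across protected characteristics",
    data="Applicant-supplied financial data, used with explicit consent",
    governance="Model owner reviews quarterly; complaints route to an appeals team",
)
print(scorecard)
```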

We hope that capAI will become a standard process for all AI systems and help prevent the many ethical problems they have caused.

Together, the internal review protocol and external scorecard provide a comprehensive audit that allows organisations to demonstrate to all stakeholders the conformity of their AI system with the EU's Artificial Intelligence Act.

Matthias Holweg is Director of Saïd Business School’s Oxford Artificial Intelligence Programme

Research

This article is based on two pieces of research from Saïd Business School: 

The Reputational Risks of AI by Matthias Holweg, Rupert Younger and Yuni Wen. This research project was supported by Oxford University’s Centre for Corporate Reputation.

capAI - A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act by Luciano Floridi, Matthias Holweg, Mariarosaria Taddeo, Javier Amaya Silva, Jakob Mökander and Yuni Wen. This research project was conducted jointly by Oxford University's Saïd Business School and the Oxford Internet Institute.