How can businesses use AI responsibly?

The explosive growth in the use of artificial intelligence (AI) in business has brought with it a host of challenges. How do large organisations control how AI is applied and take responsibility for the decisions that AI makes? Who ensures that algorithms and machine learning are used ethically – and, indeed, how is the ethical use of AI defined? 

In the Leadership in Extraordinary Times broadcast on 12 May 2021, Andrew Stephen, Associate Dean of Research and L’Oréal Professor of Marketing at Saïd Business School, discussed these questions with Felipe Thomaz, Associate Professor of Marketing, Natalia Efremova, Teradata Research Fellow in Marketing and AI, and Yasmeen Ahmad, Vice President of Strategy, Teradata.

In a preview of a new policy report from Oxford Saïd and the International Chamber of Commerce, Felipe Thomaz introduced the key concept of ‘human-centricity’. The ethics of AI should be considered in light of the impact it has on humans, and with the awareness that a human will ultimately bear responsibility for automated decision-making. Businesses that care about using AI responsibly should:

Involve everyone 

It should not be just the analysts or the data scientists who are concerned with the ethics of AI or the fairness of algorithms: it is vital that leaders and representatives from the business side are part of the discussion, because they have a better understanding of how the application is going to be used in practice and how it will affect people.

Test tolerance of errors 

The consequences of error vary widely by context. An error in AI applied to supply chains, for example, may result in inefficiencies but will not be regarded as an ethical problem. In contrast, in a healthcare setting, the accuracy of algorithmic predictions could be a matter of life and death.

Embed AI ethics in the culture of the organisation 

Although a poll taken during the broadcast suggested that governments should take the lead in regulating AI, in practice they do not have the tools to investigate every last algorithm. It is therefore down to businesses to ensure that ‘the fair and the equal and unbiased use of algorithms [is] embedded in company culture ... embedded in how people build these tools, how they apply the algorithms, how they analyse data.’