Hi and welcome to today's episode of Leadership in Extraordinary Times. I'm Professor Andrew Stephen, the L'Oréal Professor of Marketing and Associate Dean of Research here at the Saïd Business School, University of Oxford. Today we are talking about how businesses can use AI responsibly, which is obviously a really big topic, but we're going to try and unpack that and make it as practical as we possibly can throughout today's discussion. I want to welcome all of you, wherever you happen to be watching us from today; thank you for joining us. And I want to extend a special welcome to our Oxford Executive Diploma in AI for Business students, some of whom are here in the building at the Saïd Business School and others who are around the world. They're joining us because today is the first day of one of the modules of that executive diploma programme, and what better topic to talk about for an AI programme than responsible AI in business?

So what we're going to do over the course of the next 45 minutes or so is break down this topic, and we're going to hear from Dr Yasmeen Ahmad, who is the Vice President for Strategy at Teradata, as well as Dr Natalia Efremova, who is the Teradata Research Fellow in AI and Marketing here at the Saïd Business School, working in the Oxford Future of Marketing Initiative. We'll hear from them in a little while to really talk about this from both a technical and a practical standpoint. But first I want to bring in my colleague Dr Felipe Thomaz, who's an Associate Professor of Marketing here at the Saïd Business School. Felipe's been heading up a project that a number of us have been working on over the last few months with the International Chamber of Commerce, drafting some guidance for the private sector around responsible AI. So I thought we would kick things off by having a chat with Felipe about this project, which is about to be released next week; in fact we'll be releasing the report to come out of this research project. So Felipe, welcome, thank you for joining us. For those of you watching at home, Felipe is literally on the other side of the wall from me, but we're distancing ourselves. So Felipe, why don't you start by telling us a little bit about this project that we've been working on over the last few months with the ICC?

Thank you Andrew, it's a very exciting project. As you've described, there's a lot to talk about in responsible business and the responsible application of AI, and the ethical application of AI is something particularly thorny to discuss.
So we wanted to see how we could bring a lot of companies up to speed. The world is a nice big place with a lot of variability in the adoption of technology, so there's a lot of opportunity for people to really get their businesses up and running to the next level of AI aptitude, but to do so responsibly and ethically. The challenge was how do we synthesise what we know about responsible AI applications, ethical considerations and ethical risks in AI, and organise it in a way that is business-sensible and structured so that a manager can implement it and actually put it to use, so that they can just go ahead and take those initial steps.

All right, so what are some of those steps that need to be taken? Because part of this is very much about coming up with things that businesses can do to try and ensure that the work they're doing around AI and machine learning is responsible, is ethical, is appropriate. What are some of the elements in your framework?

Right, so to start out, one of the things that we had to do in order to push it towards a business-oriented and actually managerially useful approach was to force a hierarchy onto some of these complex ethical ideas: basically organising the questions around what is right and what is correct, and organising the trade-offs that managers are going to face in a way that allows them to actually analyse their environment. I know we're still very much in ethereal territory here, but just to say that to get to a framework we are actually forcing a way for managers to make trade-offs and decisions, and assisting in decision-making. To do that we start by going from the most complex ideas to the simpler ideas that make up those more esoteric concepts. So ethics is essentially a combination of responsibility and accountability, and then responsibility becomes human-centricity, fairness, and the ability to be harmless in your execution. The report is going to have the full set of components available for people to dive into. But the idea is to say how do the pieces fit together, how do we organise them in a way that says what has priority, what is more important than another, to allow for these trade-offs.
Now when I take it to reality and I say right, let's go, how do I do this as a manager, how do I organise my business to start leveraging these things, we start to get to those practical points. What we're trying to do is say: you're going to start from the technical aspect of the AI, you're going to go to your workflow, and onto that workflow you're going to overlay an ethical design component, which is essentially saying here are the steps that I'm going to take. So very broadly, let's think of here are my data concerns, here are my algorithmic choices and here is my business case, the business use, what I'm going to use my outputs for. You map all of that out, and that overlay of ethics comes in and starts asking questions of the manager: do I have specific threats or risks or concerns arising from my data sourcing? Do I have ethics concerns or risks associated with my data cleaning and pre-processing, how I use it, how I absorb it into my company? All of those things start getting mapped so that you have a good understanding of these potential threats to your ethics stance. So you're saying I want to do things well, and then there are aspects within the design of your workflow for your AI application that can give rise to some complications. The first step is to map those, identify what they are, and then we fall into some fairly robust managerial aspects where we say, okay, what are the mitigation strategies that I can bring to bear to account for this risk exposure, how do I minimise my risk exposure? These are ethical risks, potential breaches, where you do incorrect things and then institutionalise them via AI, embedding them into code as it were and just perpetuating those errors. So it's an early mapping, understanding how it fits within what you're trying to apply, then minimising and mitigating, and living with the notion that you're going to revisit and keep coming back to this process, asking do I have new threats, do I recognise new things, am I changing anything in my workflow? Then you're able to actually start delivering against that goal, which is the most responsible, properly ethical application of AI for your business.

Okay. There's lots that I want to talk about there, but I do want to just remind everyone in the audience to please feel free to put some questions to us; we'll come back to your questions later. Keep them brief and to the point as much as you can, please.
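To make the workflow overlay Felipe describes a little more concrete, here is a minimal, hypothetical sketch (not taken from the ICC report) of how a team might record the stages of an AI workflow, the ethical questions asked at each stage, the risks identified and the mitigations attached, and then check what remains open on each revisit. All names and example entries are illustrative assumptions.

```python
# Illustrative sketch only: map workflow stages, overlay ethical questions,
# record identified risks and mitigations, and review what remains open.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkflowStage:
    name: str                     # e.g. "data sourcing"
    guiding_questions: List[str]  # the ethics overlay prompts for this stage
    identified_risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

def review(stages: List[WorkflowStage]) -> List[str]:
    """Return stages that still carry more identified risks than mitigations."""
    open_items = []
    for stage in stages:
        gap = len(stage.identified_risks) - len(stage.mitigations)
        if gap > 0:
            open_items.append(f"{stage.name}: {gap} risk(s) without a mitigation")
    return open_items

workflow = [
    WorkflowStage("data sourcing",
                  ["Do risks arise from where and how the data was obtained?"]),
    WorkflowStage("data cleaning / pre-processing",
                  ["Does cleaning or sampling introduce or hide bias?"]),
    WorkflowStage("algorithm choice",
                  ["Is the model intelligible enough for its intended use?"]),
    WorkflowStage("business use of outputs",
                  ["Who bears the consequences if the output is wrong?"]),
]

# Hypothetical example of one mapped risk and its mitigation.
workflow[0].identified_risks.append("third-party data collected without clear consent")
workflow[0].mitigations.append("audit provenance and consent terms before ingestion")

print(review(workflow))  # stages still carrying unmitigated risks on this revisit
```

The point of the sketch is simply that the overlay is a living document: each revisit re-runs the same questions against whatever has changed in the workflow.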
And so Felipe, this research that we're talking about is going to be released next week, and if anyone wants to find out more they can visit 'oxfordfutureofmarketing.com'. What I'm hearing from you is, in some sense, that it's not about changing the way you do everything; it's about bringing responsible, ethical, accountable notions around AI, data and analytics usage into existing workflows, which I think is quite appealing. It's not about overhauling everything we do for this new way of doing things but rather finding some kind of happy medium - am I getting that characterisation right, the way that you're thinking?

Yeah, I mean I think it's important for managers everywhere to have that sense of terra firma, essentially saying this is familiar ground, this is something that we do in business day in, day out. There's a lot of news about AI, there's a lot of hype around AI, there's a lot of uncertainty around AI, even getting into the whole definition and all the variations of it. But from the business sense, and for the business case use of it, it's very useful to go back to some of those basic ideas and say: I might have a new business model, I might have new capabilities, but the art and science of management behind it is still relatively stable, and I can have appropriate controls over these different components. It's a new machine, but how I manage that machinery doesn't have a whole lot of new moving parts to it. We have some concerns that are specific to AI and some uniqueness that we described, and we were able to leverage a lot of pre-existing work. One of the most exciting parts of this project for me was the way that we arrived at these recommendations, which was by actually going to an existing body of knowledge. Everything that companies have published on their stances and their guidance - governments, NGOs, intergovernmental agencies - we took all of that body of knowledge, as well as the academic literature on responsibility and ethics in AI, put it all together and used our own AI machinery on it to organise it and give it shape, to put things in perspective.

So I think what you're talking about here is quite an appealing process for businesses of all shapes and sizes to use, and I like that stabiliser: businesses need that terra firma, which is business, and they want to come back to that. But it's also suggesting that businesses themselves need to be taking responsibility for responsible AI.
So I actually want to hear what all of you watching think about this, so we're going to do a poll. I'm going to invite you to take part in an online poll and we'll come back to the results a little bit later on in the programme. The question is: who should be most responsible for ensuring that applications of AI in the private and public sectors are appropriate, ethical and responsible? Is it government, is it the tech companies, is it industry bodies or the businesses themselves, maybe intergovernmental organisations like the United Nations, or individuals? So tell us what you think by going to the link that's on the screen now, and we'll come back a little bit later to hear what you collectively think and discuss that with our panel.

But Felipe, one other thing I wanted to talk about with you: you mentioned these principles or pillars for thinking about what a set of guidelines or policies for responsible AI in the private sector might look like, and there's a term you mentioned that I really want to hear a little bit more on, which is human-centricity. What do you mean by that?

So that one is one of the most important layers of the ethics considerations, which probably shouldn't surprise anybody in the room; we're talking about ethics and human considerations, the consequences and impacts on human individuals. What we're talking about here is a combination of actually achievable means and goals within the AI to deliver on human benefits. One core thing inside that is this idea of beneficence, which is that you're going to generate something good out of this process for people; it can be very broad, it can be a good business outcome that comes out of it. Then there's transparency, which is often discussed in terms of deploying trustworthy AI, the idea of having an intelligible, understandable system. All of these are components that make it centred around the human, essentially. It's almost like using the word to define itself: the more that a human can interact, and the more that the system appreciates that there's a human who is going to bear a consequence of our automated decision-making, the more stable you are in being a responsible business, and the more grounded you are in the fact that there is going to be a human cost associated with some of the decisions that we make inside our businesses.

Thank you. So Felipe, sit tight because we'll come back to you in the Q&A a little bit later. So thanks, Dr Felipe Thomaz, Associate Professor of Marketing.
I want to now bring in our two panellists, as I introduced before: Dr Yasmeen Ahmad, who is VP of Strategy at Teradata, which is an enterprise technology company, and Dr Natalia Efremova, who is a computer scientist and the Teradata Research Fellow in Marketing and AI here at the Saïd Business School - and Natalia was also heavily involved in the research work that Felipe was just telling us about - so welcome to both of you. What I want to do - and we'll come to the poll results a little bit later as well - is first come to a question that has popped up from some of our Executive AI Diploma students, which is around how well businesses are doing at the moment. Andrew, Claire and a few others have asked this question from that group. If we think about responsible AI, where is the starting point at the moment with businesses? Are businesses already pretty responsible with this, or is there room for improvement? Yasmeen, let me go to you first.

Thanks Andrew, that is a great question: how are businesses thinking about this? I think it's worth reflecting on how businesses think about AI. If I just rewind back in my career, I was looking after data science and analytics teams who were working with our clients, some of the largest companies in the world, on new AI and data science techniques a few years back. At that point AI - some of the newer machine learning techniques and so on - was isolated to these data science functions, these centres of excellence that had been set up, whether that was in the bank or the retailer or the telco. So it was self-constrained in some ways, because it was one group that was developing these algorithms; it was a one-stop shop for looking at how those algorithms were being developed, what kind of use cases they were being applied to, and where in the business we were leveraging them. Fast forward to today and we really see AI being much more pervasive across the organisation. Those use cases are no longer limited to one group doing the development work or to one business function. Whether it's supply chain, customer experience, operations or your fraud departments, they're all leveraging AI and analytics techniques to improve their operations and to create differentiation. That makes it really challenging for really large organisations to control how that AI is applied and to control the decisions that AI is making. And coming back to what Felipe said there, the human-centricity piece: even when the decision is automated, is there a human who is ultimately responsible for the decision that's being actioned?
So I think many organisations have had to move to looking at general guidelines, putting in frameworks to support the application of AI, some of them because they've seen the backlash when they get it wrong and consumers don't like AI being applied in certain circumstances. And so we're seeing that AI is becoming the responsibility of roles like the chief data officer, to think about not just enabling the organisation with the tools but also how we start to govern how AI is used and applied across the business, really pervasively across the business.

So Natalia, I want to bring you in here and take the question in a slightly different way: do we actually need this? I know you and others have worked hard on thinking about these guidelines, but why is there a need for it? It sort of seems like we have to keep on convincing businesses that they need to actually do these things, because they're perhaps not innate, so why is this a problem that we need to solve?

Thank you for the question Andrew, it's a really good and important one, and this is something not all businesses ask themselves: are we using the data correctly, are we implementing AI correctly? Because the AI function is in many cases not central to the business; it plays a support role in the organisation, and in that case businesses don't pay that much attention to what is happening there - how they should curate their data, how they should take care of their data, how they use it in operations. This is all very important, and AI is something new; it's not something we have used in practice for very long, and best practices are simply not there. So every business, when it comes to the question 'is my AI ethical?', has to decide for itself, and it's not always the case that they have the resources to do that, or the education to do that, or they simply cannot find proper guidelines. So in practice, AI is only as ethical as the business decides it should be, for now.

And do you think it varies, Natalia, around the world? Are some parts of the world thinking more about this than others, or thinking differently about it? I'll go to you first, Natalia, but I also want Yasmeen's perspective on this, given that you both have perspectives on different parts of the world. So Natalia, what are your thoughts?

Well of course it's not the same; it differs a lot across geographies. From my personal experience, AI ethics is mostly developed in more developed countries with more technical businesses, and it's far less developed in smaller economies, which historically have had fewer resources and fewer opportunities for businesses to look at these problems.
So in many cases AI has to comply with regulations, but in some countries it's completely unregulated, and so it's more difficult for businesses to regulate themselves, of course.

Yasmeen, from your experience working with clients and customers in lots of different parts of the world, what's your take on this?

I would have to agree with Natalia, I don't think it's consistent across the globe. In fact I think it has a lot to do with societies and cultures and what's acceptable and what's not, and often what a society and culture finds acceptable, or what consumers, individuals and citizens are willing to accept, is reflected in law. There's typically more regulation in Europe, and a lot of regulation will often drive companies to take those steps even before the regulation is written, because there's that feeling of responsibility and an expectation that it will be regulated. Whereas having lived in Europe and now living in the US, I see the difference between those geographies: in the US there's typically less regulation; in some areas there's more innovation pushing the boundaries, and then at some point some regulation will come in, but it's not consistent across the US. And then in Asia Pacific and in Australia it again varies by country, varies by geography, and it is linked to that legal and regulatory system, to what's accepted, how business is governed and, crucially, what citizens and consumers are expecting.

And what about - I mean, we talk about geography, but what about different industries? Are there differences there? Maybe some industries are either more on top of this than others, or more in need of thinking about responsible AI than others. What's your experience across all those different industries?

That's a great lens to look at it through, Andrew, and I would even take it a step deeper than the industry level: it's the use case and the application level. Because even within an industry there are some business use cases, some business functions and departments where AI is more freely leveraged versus other areas of the business, and I think it comes down to the business outcome: how much risk are we willing to accept with that business outcome if the AI gets it wrong or if the AI is biased? So say we have AI applied to our supply chains; if the supply chain is not quite efficient it might have a negative impact on the business, but nobody's going to splash that across the front pages as a biased algorithm or unethical.
However, any time you're dealing with citizens or customer experience or consumer applications of AI, it takes on another lens. I think naturally some industries - say healthcare, or banks, or retailers - have a lot of consumer and citizen data, and they're leveraging AI on that data, and so there is more scrutiny there. Whereas if you go to the manufacturing industries, maybe less so, because again with the application of the AI and its outcome there are maybe fewer ethical considerations.

So it's a good point about how close you are to the humans, I guess, which is essentially back to Felipe's principle of human-centricity. Natalia, I want to come back to you with a different question though, around how we get businesses to really pay attention and embed this. It sort of comes to a question that's come up already from some of our executive diploma students: if I'm a manager in some organisation, how do you convince me that - beyond it just being kind of the right thing to do - I really need to think deeply about this and embed it in the way that we're doing things, when it's yet another thing that I need to in some sense comply with? So practically speaking, how do we get the right people in organisations to be thinking about this in an actionable way as opposed to a token-gesture kind of way?

It's a great question, thank you Andrew. If we look even deeper, we need to ask ourselves how any individual working with the AI can impact the outcome and be ethical, because when it comes to AI implementation it's not only about managerial decisions; there are a lot of steps involved, for example data collection, data cleaning, or just disseminating information about how the AI is operating. So I would say it is important for every business role to think about what it is that we're doing, and it's important to understand, at least at a very high level, what the AI you're working with is doing. In many cases managers and other roles in industry don't really know very well what's happening, and it is up to higher management to educate people to the level where they understand the consequences - not only the short-term, tomorrow consequences, but the more longitudinal consequences: if I use this data, what will the impact be on society or on my clients next year, in two years, in ten years? Unfortunately there are not many use cases now, and in general management is not aware of what can go wrong, but it's probably our role as educators and institutions to build more of these cases, to say that there are so many things that can go wrong and we need to think about them now. The role of education here is super important.
So I guess if I'm to synthesise what both you and Yasmeen have said, it's the proximity to potential consequences for human beings - your customers, society, citizens - that suggests how seriously this needs to be thought through; I guess that's Yasmeen's point. And then Natalia, your point is that it's everyone's responsibility. This is not something that just has to exist in the technical parts of an organisation with the computer scientists or data scientists and engineers, who certainly have to think about it from certain dimensions and govern it in certain ways; it's not just for middle and upper management to say, hey, we need to impose these rules, these regulations, these frameworks. It's for everyone - everyone who's thinking about how data may be used, how algorithms may be used to inform or make decisions or make recommendations and so on. I think these are very important practical perspectives: it's a top-down and bottom-up, technical and non-technical set of things to think about, I guess. The other point I'd add, Natalia, is to never forget about what I call the law of unintended consequences. We may see that through the lens of uncertainty, but we may also see it as a proposition or a challenge to people and to an organisation: it's not just about what could obviously go wrong here and how we prevent that, but what are the less obvious things, and maybe we should think a little bit more out of the box about those unintended consequences. Obviously you can't have everything on your radar, but at least expanding that set of possibilities may be another way to practically help in thinking about implementing responsible, ethical AI.

I've got more questions for you, but I actually want to come to a question that was posed by one of our diploma students, Joanna, who's in the US at the moment, and it's to this point, I think, about everyone being responsible. Her question is: who needs to be in the room? How do you get the right people, or the right personas essentially, in the room to express their concerns and to think about perhaps these unintended consequences within an organisation? If it's not just the technical people, and if it's not just, say, middle and upper management, then who should have a voice in governing this within organisations? I think it's a really important question in terms of how we implement it. Yasmeen, what do you think?
That's a good question, and I think it's back to what you were saying, Andrew: it's really important that it's not just the analysts or the data scientists who are thinking about the implications of the ethics or the fairness of the algorithms being applied. In fact it's more important than ever to have business leaders and business people in the room to discuss the implications of the analytics, algorithms and AI being applied, because typically it's those business leaders and business people who really understand the application and how it will go out into the world. What we've found it helpful to do is focus your key stakeholders on the specifics of the use case you're trying to drive and the outcome. When thinking about the responsible use of algorithms, it's useful to think about what happens if the algorithm gets it wrong. In analytical terms, when we're using algorithms we often talk about false positives and false negatives: what's the chance that we predict something to be true when it's actually not, a false positive, or that we say it's not true but it actually does happen to be true, a false negative? With this application of the algorithm, what's acceptable to us as a business? The use case often drives that acceptability. So think about healthcare: we've seen use cases where AI and algorithms are applied to mammograms or CT scans to look at whether there are abnormalities in that image. To get a false negative, where the algorithm said there's nothing wrong but there actually is, might not be acceptable in a healthcare situation; you need a high level of accuracy for the algorithm. At that point, if you're able as a business to express that the algorithm can be run but there's a level of accuracy that's acceptable for the business problem, that gives your data science and analytics teams a direction: we can go into testing algorithms in this area, but until we get to a level of specificity - and the business teams can help to define what that level of specificity is - that algorithm is not acceptable for production or real-world use. I describe that scenario because I think it's a useful way... I liked how Felipe - I got a preview of the paper that the team had been working on at Oxford with the ICC - described it earlier: you need a way of making these vague conversations about ethics and fairness more tangible, more real-world, about how you can apply them in business. Getting into a bit more detail around the algorithm, on what level of accuracy is acceptable, helps to frame the conversation in terms of implementation: how can we move forward, what's the level of acceptability?
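As a rough illustration of the conversation Yasmeen describes, here is a minimal, hypothetical sketch of how a team might compute a candidate model's false positive and false negative rates on validation data and compare them against tolerances agreed with the business side before the model is considered for production. The data and thresholds are made up for illustration.

```python
# Illustrative sketch only: check a model's error rates against
# business-agreed tolerances before considering production use.

def error_rates(y_true, y_pred):
    """False positive rate and false negative rate for binary labels (1 = 'abnormal')."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical validation results for a screening model.
y_true = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]

fpr, fnr = error_rates(y_true, y_pred)

# Business-defined tolerances: in a screening setting, missed cases
# (false negatives) are far less acceptable than false alarms.
MAX_FALSE_NEGATIVE_RATE = 0.05
MAX_FALSE_POSITIVE_RATE = 0.20

if fnr <= MAX_FALSE_NEGATIVE_RATE and fpr <= MAX_FALSE_POSITIVE_RATE:
    print(f"Within agreed tolerances (FPR={fpr:.2f}, FNR={fnr:.2f}) - candidate for further review")
else:
    print(f"Outside agreed tolerances (FPR={fpr:.2f}, FNR={fnr:.2f}) - not ready for production use")
```

The design point is simply that the thresholds come out of the business conversation, not from the data science team alone.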
Thanks Yasmeen, I think that's a really helpful way to think about these errors - in statistics we would call them type I and type II errors - so the false positives and the false negatives, and what the consequences of those might be if the algorithm is wrong for whatever reason, whether it's the wrong algorithm, or the data is not quite right, or there's some bias one way or another. In practical terms, if it gets it wrong, what does that mean, and what is our tolerance or required level of precision, if you want to think about it differently, in those situations? I think that brings it down to the sort of conversations that you can actually have, and back to Joanna's question, I think the people in the room therefore need to be the people who can really talk about what those real-life consequences would actually be: what would those errors mean to the customers, to other people who might be affected, maybe to your employees, to regulators, to whoever else might be a relevant stakeholder. I guess the point is to have that diversity of opinion and that multi-stakeholder perspective 'in the room' to think about these things; but the question then is how you frame the questions and how you get them to think about these issues, and I think that false positives and false negatives framing is a really useful way of thinking.

So we're about at the midpoint of our broadcast. I just wanted to welcome anyone who's joined us since the beginning: you're watching Leadership in Extraordinary Times here at the Saïd Business School at the University of Oxford. I'm Professor Andrew Stephen, the L'Oréal Professor of Marketing and the Research Dean, and I've been talking with Dr Yasmeen Ahmad from Teradata and Dr Natalia Efremova from here at the Business School about how businesses can use AI responsibly. I want to now have a look at our poll and see what you all thought about who should be responsible for responsible AI, so if we can take a look at the results, and then I'm going to see what both Natalia and Yasmeen think about this. So there's no clear front runner, I suppose, but government is on top, followed by the global intergovernmental organisations, the tech companies, industry bodies, and individuals last - but quite a spread here. To be honest, that's not exactly what I expected to see; I'm kind of intrigued by this. What do you think, Natalia, what's your reaction to our poll results?

Wow, that's very interesting, I also didn't expect this. I would say it would be amazing if government were able to control it and provide us all with guidelines on what we should be doing.
Unfortunately, for now that's not the case. We do have some recommendations from policymakers, and I think it's great that we're moving in this direction, but currently probably the only hard regulations that exist sit inside the companies themselves, inside the big technology organisations. Intergovernmental and international bodies are catching up slowly, and they do publish their own guidelines and sets of recommendations. What I believe is that inputs from all levels are very important. From a technical perspective, the reason I believe this is difficult for governments to do is that they really don't have a tool at the moment to check what an AI is doing: no one outside the organisation can check what the algorithm is doing, and even if someone wants to check the code, the problem may not be in the code but in the data, or elsewhere in the operations or the production line. So it's a very complex problem, and until we develop a centralised understanding and centralised guidelines for how AI should be developed, the responsibility will rest largely with the companies themselves, because they know their businesses best.

Yasmeen, what do you think?

I have to agree with Natalia there; it's such a complex problem to try and find a regulation or a framework that would cover the amount of innovation that is happening in this space at the moment, that it's very hard for a government to regulate. And having said that, regulation often happens after the fact: regulation looks at how technologies and digital are being leveraged and then looks to regulate their uses. So I think companies do have a real responsibility to make sure that they are responsible and accountable for the algorithms they're developing and how they're applying them. As I think about organisations, I also don't think it's just about strict frameworks or guidelines. I think all large organisations will have an ethics framework - it was mentioned earlier that often the foundations are already there in the organisation - but they need to be evolved; the ethics frameworks need to evolve to take into account new risks or new factors that have come up because of AI and digital. So yes, there are existing frameworks and they need to evolve, but we also need to look at company cultures and how a company leads through this evolution and this change, because AI, digital and analytics are now impacting all aspects of our lives and all aspects of organisations. Some of the ethics - the fair, equal and unbiased use of algorithms - needs to be embedded in company culture; it needs to be embedded in how people build these tools, how they apply the algorithms, how they analyse data.
To Natalia's point, it's so complex, there are so many touch points and so many people involved in the process that it needs to be part of the DNA of the organisation to want to make ethical use of these new technologies and tools that are at their disposal.

So there's a question that's come up from Richard - in my home country of Australia, in fact - who's asking about this point about innovation that both of you talked about: if we regulate too much, the worry is that it will box in the opportunities and might therefore stifle innovation. I think an answer to the point that Richard has made, which I think you've both already spoken to, is that it's a balance: we need to have some guidance from government and intergovernmental organisations, but obviously they can't cover everything, and nor probably should they. And because of the need for further innovation, and the fact that it is happening anyway, we need these to be living regulations in some sense, or at least living guidelines. My take on the poll results is that it's sort of everyone's responsibility in one way or another, so I think, just as we were saying, we need multiple voices in the room to think about the consequences, the unintended ones, and the false positives and false negatives. I guess the point is that at that higher level we also need this to not only be something that governments do, not only something that intergovernmental organisations do, but indeed something that companies do - groups of companies or organisations, the tech companies and so on - so I think there's a need for this to be governed, with a small 'g', in a very collective way, which of course is going to be fraught with difficulties given all those different stakeholders. But what I'm hearing is that that seems to be a way forward.

I want to bring Felipe back into the conversation now, because we've got quite a few questions that have been popping up - thank you to those of you who have asked questions - that I think we can start to go through. And don't forget that you can listen to all of these episodes as podcasts: just search for Leadership in Extraordinary Times on whichever podcast platform you like the most, and you will find all of our previous episodes as well as this one, which will be out soon. So Felipe, welcome back. Just quickly, before we go on to some more questions, what was your take on those poll results?

Very similar to everybody else's; I have a very similar read to you, in that it does seem very much like a joint responsibility, like everybody has to carry some of the burden.
The other bit, going to that first point we have - government being seen as the most responsible, even if by a narrow margin - I guess a bit of a word of caution or concern here from somebody who hasn't lived forever in the developed world: not everybody's government is fantastic. Not everybody's government has your best interest in mind. So whose government is going to decide what is correct to do? I think that's something to worry about, something I'd put out there. Yes, regulation matters, very much to your point that there is a concern about the tension with innovation, but some companies are going to be restricted exclusively to their regional domains and will have just one set of legislation to worry about, while a number of companies are going to exist across national boundaries, and then you run into issues where you're picking and choosing down to the bare minimum of legal requirement rather than doing the ethical thing, or the appropriate thing for yourself or your company. You're just asking, all right, what's the bare minimum I have to do, and that's what you do, and you exploit everybody in the end. That's moving away from ethics and just saying let me make as much money as I can, as quickly as I can, until somebody catches on.

A good point, though: not all governments are necessarily the right governments, I suppose you could say. And then just to reinforce your point about large organisations, I think, as we've seen with other areas of responsibility such as environmental and social responsibility - I'm thinking about ESG and the UN SDGs - we see large multinationals actually having a big impact on, in some sense, the norms of business behaviour in different countries just because of their global reach. So I think there is an important role for business, particularly businesses with an international footing, to lead by example in a lot of this and of course to collaborate with those other stakeholders that we talked about. But speaking about government, there's a question from Andrew, who's one of our executive students here and who does work for a government, asking whether there should be a hierarchy of government concern, starting with regulation for physical risk to the individual. In other words, I guess more broadly, are there different dimensions of this that different stakeholders or different entities need to be thinking about - maybe governments thinking about that type of harm, and organisations thinking about other types of harm, perhaps?
I don't know, let's start with Yasmeen, and then I want to hear what the others also think about this, because I think it's a really important practical question.

That's a great question, because the implications of how algorithms are used are a multi-dimensional challenge: there are various types of risk that can be created. As I think about even our own organisation and our risk framework, there is a whole diverse set of categories of risk that we consider for our business, and for a government I don't see why it wouldn't be the same with AI. As I think about different types of risk, the one that popped into my mind: we did a use case with a retailer around wastage, and we know how important sustainability is right now, and in this use case we were actually supporting the retailer through AI to reduce waste, to reduce how many grocery products were thrown away at the end of the day. But equally, or vice versa, AI has also almost accelerated or amplified fast fashion and other businesses that are creating a ton of waste. So you can begin to think about AI algorithms and the different types of risk or implications they have for societies: there is sustainability in terms of green and environmental impacts, there are impacts on humans and people's jobs and careers, there are impacts on diversity, equality and inclusion. So if I were to think about how to put that framework together for a government, I would be thinking about those different categories, and in terms of prioritisation there are definitely some categories that you might prioritise over and above others - those that have an impact on human life, for example - but equally I think all of those categories are important, and they potentially require specialists or experts, or again business experts, who understand those areas and are able to fully think through the implications of AI, which may not be apparent initially to you as a developer or a builder of the tools or the technology platforms. When you begin to speak to these different entities, you realise implications which were unknown, because this is just such a new application of analytics in new areas.

So another question, and this one I'm going to direct at you, Felipe. It comes from Dennis, who's here in the UK, and he's asking, or suggesting: shouldn't all innovation be governed by the purpose of the company, hopefully being mindful of human beings, then the planet, and then finally profit, in that order? So what do you think - all innovation, I guess, including the way that we think about AI and develop it and use it?
I mean, I do like the word 'hopefully' there; that's the crux of the problem that we're getting at here. One of my favourite things when we talk about even teaching ethics and discussing ethics is that all discussions about ethics are trivial - none of it matters - until there's a trade-off being made. Everybody agrees that we should protect privacy until it's 10 percent of your sales that goes away if you take that action; then suddenly everybody in the boardroom goes, maybe we think about this some more. So it's never a problem until it touches that money component, and that's a large part of this initiative, this project, the research and the guidance. Ultimately we want individuals to self-determine and say your business is going to decide what is the best innovation and it will make good choices. Frameworks like these are to help people make the right choice when the time comes. If you're making a choice when there's no consequence for your business, if you're not giving something up, then you're not really facing a choice; you're just doing the right thing with no cost. I'm always more curious when it's somebody's mortgage payment, when you can't pay rent or you lose your job because you did the right thing: that's your ethical quandary, that's the thorn in the question, and that's when it matters to have this grounding of I know what my company stands for, I know I'm going to be backed for doing the right thing even if it's going to cost us revenue; doing the right thing is just what we're going to do, and not just because it was legally required, because this is what keeps me out of jail, but because it is the right thing to do in the terms you said - for people, for the environment, and so on.

So again it comes back to the point that it can't just be one set of rules imposed from on high, whoever that happens to be, whether it's government or the organisation or some mix: you need that bottom-up feeling that everyone has to think about this. But that's the tension as well, because once you make it personal, people are different, and that's why this is messy and complex - but that's why we do need to be talking about these issues and finding ways to take action on them. I'm going to go to another question, and this one's for you, Natalia: is there a sense that algorithmic tools or models are held to a higher standard than what would be applied to humans in similar situations? A bit of a human-versus-machine type of consideration here, but is that true?
It's an interesting question, and I would say it depends on what kind of algorithm we're talking about. There are definitely some algorithms that perform much better than humans; of course they have to be scrutinised more, because they work to a very demanding level of accuracy. For some algorithms it's not really that important: think about an automatic irrigation system - is it that important that it gives the precise amount of water, plus two or minus three millilitres? Probably not. So it really matters what kind of application we're talking about. Going back to Yasmeen's point about medical applications, if we think about human health and X-ray analysis results, is there even a single chance that it can go wrong? If something can go wrong there, if the algorithm can potentially be incorrect, even when that chance is tiny, should we go for this algorithm? No. It really depends on this large hierarchy of risks, at the top of which is the human being and their physical, financial and other assets, and then going down to the danger or risk that the technology poses. So I would say it's again a complex problem - all the problems in this discussion seem to be very complex - and it's really impossible to give a single answer. But we also need to think beyond the problem itself, because what people often don't consider is the bigger risks. For example, when we talk about large data centres that submerge their servers to save electricity on cooling, do they always think about the ocean, about warming the water and how that impacts wildlife over the long term? So this is an incredibly complex system of questions, I would say, that we need to ask ourselves.

And I think the undertone there is to think about the fallibility of humans versus the algorithms, and where potential bias might come in. I think we've got time for one more question, and I want to pose this one to you, Yasmeen, because I think it's related to that. It comes from Walid, who's in Saudi Arabia, and he's talking about the use of AI in judicial decisions or in implementing laws, for example in the context of getting rid of corruption, in an environment where maybe there is corruption, by letting the presumably non-corrupt algorithm make these decisions. How do you feel about that and those sorts of approaches of replacing the fallible humans with a less fallible machine?

That's a very interesting question, and it does link on from the previous question.
In fact there have been 0:55:58.160,0:56:06.160 algorithms, specifically in the US judicial system, that have been used for example to predict the 0:56:06.160,0:56:14.560 likelihood of reoffending for people who have been convicted of a crime, and the predictions of some of 0:56:14.560,0:56:21.440 those algorithms have been used as part of the judicial decision-making process: 0:56:21.440,0:56:28.320 do you give somebody bail, or various other decisions that might be linked to that person, their life 0:56:28.320,0:56:36.160 and their circumstances. And actually, I can't quote the exact paper here but I'm happy to share it offline, some 0:56:36.160,0:56:41.120 of those algorithms have been shown to be biased, because again, when you look at the judicial system, 0:56:42.640,0:56:48.480 as you train algorithms they're learning 0:56:48.480,0:56:55.440 biases from the world that we live in today, and so they are reinforcing those biases or potentially 0:56:55.440,0:57:02.320 amplifying them, so that a person of a certain race or colour or background or level 0:57:02.320,0:57:10.000 of education is now deemed more likely to reoffend, and that gets very dangerous. It's actually a great 0:57:10.000,0:57:16.800 example of this whole ethical question: we want to give every person a fair chance in life, 0:57:17.360,0:57:24.000 but if the algorithm has learned from real-world data that if you're of a certain background, you 0:57:24.000,0:57:29.760 live in a specific zip code and you have a certain level of education you're likely to 0:57:30.320,0:57:36.400 commit an offence, then that's dangerous and we're no longer giving an individual a fair 0:57:36.400,0:57:43.120 chance at life; we're stereotyping them based on the circumstances of their upbringing or their life 0:57:43.120,0:57:49.040 or where they've lived. And so this is where it gets very grey, as Natalia was mentioning, and 0:57:50.000,0:57:57.280 how do we think about these applications: even though you can do it, should you do it? 0:57:59.760,0:58:06.400 So, can you do it? Absolutely, you can train the algorithms. Whether you want to apply them, whether 0:58:06.400,0:58:11.440 we think it's fair, whether we think it's ethical is another question, and it's important to consider 0:58:11.440,0:58:17.680 that question before just taking an algorithm based on its output and applying that to society, 0:58:17.680,0:58:23.360 because in the worst-case scenario you create a flywheel, a perpetuating situation where 0:58:23.360,0:58:28.480 a certain part of society is disadvantaged, the algorithm is reinforcing that, 0:58:28.480,0:58:31.920 and people can't escape that cycle. So I think, 0:58:32.720,0:58:37.840 just looking at that history of where algorithms have been applied in the judicial 0:58:37.840,0:58:43.360 system, we have to be careful about how they are applied and what consequences that has for people.
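[Editor's note: to make the mechanism Yasmeen describes concrete, the following is a minimal, hypothetical sketch, not part of the discussion and not any real judicial system's model. It assumes synthetic data in which "historical" outcomes were recorded more harshly for one group; a simple classifier trained on those labels then flags that group more often, even though the underlying risk is identically distributed. All variable names and numbers are invented for illustration.]

```python
# Minimal sketch: a model trained on biased historical labels reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "group" stands in for a protected attribute or a proxy such as zip code; it does
# not drive true risk, but the historical labels below were harsher for group == 1.
group = rng.integers(0, 2, size=n)
true_risk = rng.normal(0, 1, size=n)  # identically distributed in both groups
historical_label = (true_risk + 0.8 * group + rng.normal(0, 1, size=n) > 0.5).astype(int)

# Train on the biased historical labels, with the group variable available as a feature.
X = np.column_stack([true_risk, group])
model = LogisticRegression().fit(X, historical_label)
pred = model.predict(X)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: flagged as 'likely to reoffend' at rate {rate:.2f}")
# The model flags group 1 far more often than group 0, reproducing the skew in the
# historical decisions rather than the (identical) underlying risk distribution.
```

[The point is not the specific model: any learner fitted to skewed historical outcomes will tend to carry that skew forward, which is the flywheel effect described above.]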
0:58:44.320,0:58:48.240 I was going to say it's back to that law of unintended consequences, but we know that 0:58:48.240,0:58:52.880 those shouldn't be unintended consequences now. And I guess, to me, 0:58:53.680,0:58:59.680 this reminds us that humans might be flawed in making certain types of decisions, and maybe 0:58:59.680,0:59:05.520 we might put some hope in an AI system or a set of algorithms, but they're not going to be 0:59:05.520,0:59:11.600 perfect either. Really, I think it's the bringing together of what I always like 0:59:11.600,0:59:16.640 to call augmented intelligence, of the humans and the machines, where we probably have a better 0:59:16.640,0:59:22.080 chance of actually doing things well and reducing those types of errors that Yasmeen introduced 0:59:23.440,0:59:28.400 earlier in the programme. So unfortunately we're out of time. We could keep talking about this, but 0:59:29.120,0:59:35.440 the clock is unforgiving, so I just want to thank Yasmeen, Natalia and Felipe 0:59:35.440,0:59:40.160 for spending some time with all of us today to talk about these very complex issues, 0:59:40.720,0:59:44.480 and I'm sure this is not the first time we've talked about these in Leadership in Extraordinary 0:59:44.480,0:59:50.400 Times, and it certainly won't be the last. So thank you to my panellists for joining, thank you to you 0:59:50.400,0:59:57.760 for joining. Next week on Tuesday at two o'clock UK time we have our Dean, Peter Tufano, 0:59:58.800,1:00:03.760 joined by Gillian Tett, who's the US Editor-at-Large for the Financial Times, and they're going 1:00:03.760,1:00:11.440 to be talking about how anthropology, interestingly, can explain business and indeed life. That 1:00:11.440,1:00:15.600 sounds like it's going to be a pretty fascinating conversation with Peter and Gillian; I really hope 1:00:15.600,1:00:22.720 you'll be able to join us for that. But for now, thank you for watching, and once again thanks to 1:00:22.720,1:00:41.840 my guests. This is Andrew Stephen here at the Saïd Business School, take care, we'll see you soon.