- A few years ago, we used to think about tech in general, but AI in particular, as the cutting edge and luxury products. And more and more, no, the luxury product is whatever is made for and by human beings. I'm Carissa Véliz, I'm an Associate Professor at the Institute for Ethics in AI at the University of Oxford. I am the author of "Privacy is Power", and I specialise in privacy and AI ethics. I'm Dr. Alex Connock. I'm a senior fellow at the Saïd Business School, University of Oxford, and my specialism is AI and media, and I wrote this book called the "Media Business and Artificial Intelligence". And so, I'm gonna kick off, Carissa, with the idea that AI is going to allow people to create video content of unimaginable complexity that defies the laws of physics and is gonna democratically empower the average citizen to be able to make "Star Wars" or "2001: A Space Odyssey" in a way that was never possible before, when the media was controlled by an oligarchical elite. - I think that is the optimistic view, but of course there are so many concerns that underlie that. And one concern is that these systems have been trained on lots of data on the internet, a lot of YouTube videos and a lot of content by artists. And these artists haven't been paid, they haven't been compensated in any way. And it seems like these systems are mimicking their work. And so, one question is, okay, yeah, we might be able to make these really fun videos, but we're not paying for the labour of the people who enabled it. And also, what happens to all those jobs? The film industry, of course, employs hundreds of thousands of people. And what happens to all those people who used to create these amazing images before we could do it with AI? - Yeah. Well, let's take this one by one. So, first of all, on the issue of ownership, there are some who say this might be the use of blockchain that we've been looking for, and that we've been scrambling around for half a decade trying to find something useful to do with blockchain. And actually, pinning each piece of content to some kind of blockchain identity might allow the training data sets to be, in effect, almost owned by the individual creators, and then the outputs of those training data sets to be monetised by the creators in infinitesimally small amounts, in a digital rights management way, in the way that a piece of music can be claimed on YouTube now. So, we could, again, end up with a more democratic form of content ownership where someone strumming their guitar in Seville in Spain, who gets sampled in a Jay-Z track in New York, gets 0.2 of a cent every time that's done. How do you feel about that? - I think it's an open question whether that's empirically feasible. So, some people think that because these systems mix so many things together, it would be impossible to be fair in that allocation of resources. And a second worry is that even if we manage to exactly track what is being copied or used, it might be the case that you would only get a few cents for your work. So, we get that in personal data, which is something I write about in "Privacy is Power". And some people think that it might be a good idea if people could sell their data directly. But actually, this data is only very valuable to companies who have billions and billions of data points. And to the individual, it's not very valuable, because you can't do much with it. And even if you sold it, you'd get very little money. So, whereas an artist could make a living before AI,
it might be that after AI, they get a few cents or a few dollars, and that doesn't really add up. - Yeah, no, I take the point. I thought it was fascinating that when Meta were apparently considering buying Simon & Schuster, the publisher in America, they were thinking about the value of a book for training data purposes as about $10, in fact, not even $10. And that's quite galling for people who write books all day in a university like Oxford, isn't it? - Yeah, exactly. If you wanna make a living as an author, that business model doesn't seem very attractive. - Yeah. Well, let's talk about the other point you made at the opening, which was about jobs. And you said that it might be that entertainment industry jobs were put in peril. What about the alternative argument about the ability of AI to simultaneously translate at scale? So, it could be managing content in a "Squid Game", for the sake of argument, in 150 languages, such that the original voice was synthesised into different languages and the lips moved in sync and so forth. What about the idea that that might actually create a new vernacular, a new global understanding, and that if we're able to see native culture from each country around the world, as if in our own language, that would, at some level, amplify human understanding? And is that at all reasonable, do you think? - I think it's interesting and we've already seen it a bit. I mean, I remember travelling before AI translation was available and it was a lot harder. Now, you can find yourself in a very remote place and communicate the most basic things like, where is food? And of course, this is an escalation of that and it does have attractive features, but there are a few concerns. So, one concern is the dominance of English. So, it's not exactly that we are gonna be able to access all these languages, more like these languages get translated into English. And in that translation, there is a loss. - Mm-hmm. - There are words that you cannot translate. And of course, these systems have been mostly trained in English, and so they're much less accurate in other languages. - Mm. - So, there's that concern. And then, there's also the concern of like, even if it's true that we have all these good things, what's the price to pay? How dangerous are they? So, one of the things that I'm becoming increasingly concerned about is how much it invites scams of all kinds, from using AI in ways that make elections worse and that put into question the integrity of elections. And we've already seen this in some countries, so it's not even theoretical. Through to everyday citizens being much more exposed to not only fake news, but also scammers of all types. So, people using AI to try to break into bank accounts. And so, is the good more than the bad? - Yeah, but I suppose one could say we're in a short-term period where deepfake technology is better than the cops, the robbers are outrunning the cops at the moment. But ultimately, AI itself could be used to spot deepfakes or the outputs of deep learning that are fake, whether that's voice or video or text. And that, therefore, over time, we might be able to mitigate that risk. Do you think that's likely, or do you think the kind of bad actors will always outrun our ability to track them and recognise them? - I'm not very optimistic in this regard, for a few reasons. One is that you have to have in place really good laws, and you might have countries that don't have an interest in respecting those laws.
So, troll farms in Russia might not have an interest in respecting the watermarking techniques of AI. And then, another reason is that it seems like, up until now, we haven't figured out how to do that well enough. So, there have been some tests of watermarking AI content and it turns out it's quite easy to get rid of that watermark. So, if you're an ordinary citizen, you might not know exactly how to do it, but if you're tech-savvy, you will be able to do it. And then, there's the concern that it would have to be regulated very well, and we haven't done that yet. I think we will do it and we will get better at it, but it might not be in the interest of companies to have these watermarks. They have an interest in these systems being used as much as possible, among other reasons because that's the way they get more and more data. - Yeah. And certainly, if you talk to people in the security field, they will tell you that they're not worried about the large language models they know about, which might be capable of being watermarked. They're worried that other large language models might be around, especially now that they've been downloaded onto laptops, that are not recognised and not capable of being recognised. Okay, second big question. I was listening to an interview with Ilya Sutskever today, who, as you probably know, is the tech genius behind OpenAI and one of the guys who created the large language model known as GPT-4, which lay behind ChatGPT. And he was asked, "Do you envision a future where we're all lazily sitting around in a kind of woolly way, because AI is doing all the jobs and AGI, artificial general intelligence, has been discovered?" And he said, "Well, kind of, yes, but actually, I envision that as a future where AI could assist us in making the really big philosophical decisions about our lives, and in having a better world model, so that we're capable of better understanding the true human condition." And he was then asked, "Well, would you even incorporate some AI into your own brain?" And he said, "Yes, I might consider doing that." How do you feel about this idea of a future where we have an AI-enhanced enlightenment, like an 18th-century moment of new understanding? How do you feel about that? - So, I think that is a very typical answer from an engineer. So, I teach many - Okay. - different kinds of students, and if I were to get that answer from a student, it would be from an engineer. So, I think part of what it means to be human, and part of what it was for the Enlightenment to be the Enlightenment, is an understanding of the human experience, which I think is fundamentally inaccessible to AI. So, when a large language model says that apples are very sweet, it's not that it understands what it means for something to be sweet, or has a sense of the texture of an apple. It's just ingested thousands and thousands of accounts in which human beings report that apples are sweet. So, in the same way, you might get AI to say things that are quite poetic and that might make sense, but there isn't really experience behind it. There's nothing it is like to be an AI, up until now. And I think with that piece of the puzzle missing, you cannot have an AI that is a moral agent, and therefore, we shouldn't allow it to make important decisions, and definitely not decisions about, like, what it is to have a good life and how we structure society to get there. - I think that's a really interesting point.
So, do you think, Oxford is one of the spiritual homes of libraries. If I was to walk into the Bodleian Library, I would be walking into 700 years' worth of Western society summed up in its endless bookshelves. If you had a machine on your laptop that had the summation of that knowledge, and had it all vectorised and capable of being re-outputted to answer any question, do you not feel that might, at some level, represent some kind of enlightenment or some kind of new facility? - New facility, yes. Enlightenment, I think, is a stretch. It might be like a good map to find things that otherwise would be very hard to find. But already, we're seeing evidence of how AI is creating more and more content and, in turn, it's ingesting that content as part of its new training. And as it does that, it deteriorates and deteriorates in quality. And I think it's because every time it trains on data that is not created by human beings, it's just one step further removed from the world of things and from experience. So, one analogy that Ted Chiang used is the idea of photocopying. So, AI is photocopying all our reports of the world, but every time it photocopies them again, the quality deteriorates, until you get a very blurry image. - And that's dogfooding, isn't it? And it's a really persuasive idea, I have to say. Okay, let's bring it back to the mundane. Let's think about corporate life. So, how do we think AI is going to improve corporate life? Do you think there are ways in which the mundane nature of office life, or most jobs that people do, or most rote things that companies have to do, or chat, customer response or what have you, do you think any of that is actually going to represent a positive development for humanity, and that perhaps jobs that haven't been that fulfilling in the past, whether in the media or in fact any walk of corporate life, are actually going to become more sentient in a way, because the AI will be taking over all the mechanistic roles? - I think that's possible and we should strive to do that, but let's not take it for granted that it's gonna happen, because it depends on us. And as long as companies have profit as their main and only motive, it's not gonna happen automatically; we need to have a richer sense of what it is to have a good life and how work interacts with that. And so, I'll give you some examples. So, if AI could take care of my email and marketing, that would make my life much better. But in fact, what I've seen with automation is that it's not a blanket thing. So, for instance, every time I use my washing machine, it saves me two hours of manually washing my clothes, and that's a fantastic thing. And my grandmother would've been amazed to have that, or my great-grandmother. However, I spend three hours a day doing email that my great-grandmother didn't have to do. - Yeah. - And so, sometimes, automation in some areas creates other tasks that are menial in other areas. And up until now, we have seen a trend of automation creating jobs that are harder for human beings, because the job is adapted for the machine and not for the human being. And so, we have to be very mindful of that. - That's really interesting. Yeah.
Okay, final question, I think. Thinking about the structure of the way the new media landscape is going to be organised, do you think that this is an opportunity for a new generation of democratic empowerment to come through, and a hundred startups will bloom, and we will end up with a much more competitive landscape that frees us from the oligopoly that essentially has owned the internet for the last 20 years? Or, on the other hand, and I think I know the answer, do you feel that perhaps a new kind of perpetuation of that oligopoly is happening because of the sheer expense required to build these large language models, and that we might end up with a world where there are just a few God models, in quotes, which are then powering the rest of society, and actually power will then be held in even fewer hands than it has been thus far? - I worry about the latter, in keeping with our routine of good cop and bad cop. - Yeah. - And how many large language models can you name? Just a few, just a handful. - Mm-hmm. - And probably a couple of those are more dominant. And part of it has to do with the sheer amount of data that these companies have, but also the amount of wealth. It's very expensive to run these systems. - Yeah. - So, one estimate is that a query for ChatGPT is about a thousand times more expensive than a search engine query. And so, there are also questions about the ecology of it. Is it worth it? So, obviously, the internet and these systems run on natural minerals and energy and this real stuff. The cloud is not a cloud. - Yeah. - And so, there are also concerns about that. So, yeah, I think we have to strengthen our democracies and also our business ethics to make sure that AI can be positive in the long run. - Yeah, I saw an interview the other day with Mark Zuckerberg from Meta saying that he estimated they would spend 100 billion dollars on vectorising their content, on mapping the statistical relationships between words across a vast scale of humanity, which is an extraordinary amount of money. That's double the UK's defence budget, for example. And I think that's probably the world we're heading towards. And I think it probably does, unfortunately, tend towards ownership in the hands of very few. There are probably maybe 10 large language models in the West at the moment, and most other products are derived in some way from them. And most of them, in fact, almost all of them, are owned in some way by the previous generation of media companies. Look, it's been really fascinating discussing these meta issues and we could probably carry on all day with that. And I personally love discussing the way sound or music or film or TV are going to change. But perhaps what we should do is look at some questions that have been sent in to us by people who are interested in this conversation. Should we each pick up a card, and then pose the question to the other? - Yes, let's do that. Okay, let me start. "Are there going to be increasingly low levels of trust in news and factual content due to fabricated news and AI inception? What measures will be put in place to ensure news is relevant and real?" - It's such a good question and such a big question. So, I think we've already touched on deepfakes, and of course the magic of a deepfake is that a deepfake doesn't necessarily have to be believed by everybody in order to be effective.
And in fact, all the deepfake has to do is introduce enough doubt into society about the veracity of news at large for people not to believe the true news, and therefore for the whole idea of news to become undervalued. I think we see that. Now, of course, there have been some very useful deepfakes. For example, Imran Khan in Pakistan sending out synthetic versions of his voice when he was in jail, which one could argue was a democratic plus, but then there've been many, many negative examples of deepfakes. There are interesting dimensions to it, though. So, for example, there was a study by the Reuters Institute in Oxford of news companies that had disallowed themselves from being used as training data, and this was celebrated as a victory for real news, because real news was not gonna get ingested into the systems. But the problem was that right-wing news sources are not disallowing themselves from being trained upon, which means that the large language models over time may tend to go right-wing, because they're gonna be fed right-wing content, but not middle-of-the-road content. And so, all the mitigations that you can think about putting in place against deepfakes have quite subtle and quite serious problems down the line. And this is the world we're in now. - Interesting world. - Yeah. Let me try one for you. "Is it ethical for brands to use AI for media creation rather than humans?" Good question. - It's a very good question. It depends. It depends partly on whether there are people losing jobs, whether the content being created comes from systems trained on content that was copyrighted or licensed in some way, and whether those artists haven't been compensated. If the answer is yes, then I think, arguably, it might be unethical. And there might be ways to mitigate that, of course. - Mm. - There are some models that are compensating people, there are some models that are trained on data that hasn't been copyrighted. And of course, there are ways to keep the people, your creators, and use AI to make their jobs easier. And that's the most ethical way a company can use AI, - Yeah. - not to get rid of the people, but to actually enhance their capacities with the same resources they have, pretty much. - There is an interesting situation now, particularly in programmatic advertising, which is the kind of digital advertising that's sent at scale around the internet through dynamic content optimisation, being personalised to the individual user. And one of the parameters on which they're personalising is ethnicity. But brands are not necessarily going out and hiring 200 different ethnicities to make a commercial. They're using synthetic humans to do that. And so, they actually end up with a faux ethnicity, which may in fact be doing real actors out of jobs. And there's another issue there as well, which is, do we necessarily want people to be capitalising on the homophily effect, where people are more likely to respond positively to likenesses of themselves? And is that manipulative, if someone's creating a synthetic version of you in order to market to you? There's so many issues there. - Yeah, especially with personalised ads. - Yeah, yeah. - So, one of the things I think we should change is to go back to ads that were more wholesome, in the sense that they gave out information instead of taking it away. - Yeah. - And they were less targeted and more open to public scrutiny. - Absolutely. - Let's go- - We could do a whole talk, a whole "Tea Talk" on digital advertising.
- We could. "Is there going to be a point where AI is used so much, nothing will feel authentic?" - I think you've already alluded to this. I think there are people who suggest that even within five years, 90% of the content on the internet could be synthetic, and that will lead to the potential degradation of content. And in fact, the large language models are already running out of training data, particularly words as opposed to videos, and starting to use synthetic content to train upon. And the outputs of those large language models are already synthetic, but the exponential effect of sequential generations of synthetic content could cause real problems. So, yes, I think we are gonna reach that point. And arguably, we've already reached it. If you look at TikTok or Instagram, many of the images on there are already synthetic. So, yes, there's a very significant likelihood that that's gonna happen. - Yeah, it's already affecting experience. Every time you see something online, there's a question in your mind, what am I watching? - Yeah. - 10 years ago, you wouldn't even kind of- - Yeah. I do think, however, there's a kind of Newtonian reaction to that. So, just as there's a move towards synthetic content, it's sort of Newton's third law: every action has an equal and opposite reaction. There's also a move towards live. And actually, if you look at the post-pandemic period, there's been a tremendous growth in live entertainment. The Taylor Swift tour, $4 billion, the booming Live Nation venues, the booming sports and so forth. And Glastonbury sold out within minutes and so forth. And I think that's because people don't just want a synthetic world. They don't just want a metaverse, they also want the kind of muddy fields of Glastonbury and the burning heat of Burning Man and what have you. And I think that people actually are quite ready to coexist both in a real universe and in a synthetic universe. And that, in a sort of utopian sense, could create quite an interesting future. - It's interesting how, a few years ago, we used to think about tech in general, but AI in particular, as the cutting edge and luxury products. And more and more, no, the luxury product - Yeah. - is whatever is made for - Great point. - and by human beings. And the AI is like the second-best thing that is cheap and that we use when we don't have access to that kind of- - That's a great point. Well, thanks, Carissa. Look, it's been really fascinating talking to Carissa today with our good cop, bad cop routine. And what's interesting about AI is that within almost any category of it, there are these really interesting ethical and philosophical conundrums that quite thoughtful people around the world are looking at. Even within the companies that stand to make huge profit out of this, there are some very philosophical people really trying to consider these issues, and none of the answers are easy. So, whether you think about AI bias or AI ethics in general, or AI copyright, anybody who gives you a facile answer is probably not really thinking the issues through, because it's never that simple. I hope you've enjoyed this "Tea Talk" and do please tune in for the next in the series.