The pace of the diploma programme has picked up and, in the blink of an eye, Module Three of the Executive Diploma in Artificial Intelligence for Business has concluded.
Our focus this past week has been on the various applications of AI and how this new tool can be used for scientific advancement, medical applications, entertainment, sustainability, financial security, and the many other fields we hail from as a class. However, AI has also suffered some colossal failures in the public eye, which have tarnished its reputation as possibly biased or simply inaccurate. For example, facial recognition has failed to correctly identify entire populations, and AI-powered chatbots have spectacularly failed to stay within the ethical boundaries of conversation. Further, AI tends to make people uncomfortable when those systems act too human.
But how do these biases and mistakes get introduced into AI tools in the first place? Yes, we had stimulating presentations and animated debates in class, but the discourse carried over into the diverse pub landscape of Oxford in the evenings, where we explored the topic further. These informative after-hours discussions always get me thinking about human intelligence. We (and the press) use it constantly as the backdrop to AI. We call it ‘Artificial Intelligence’ only because we consider what we demonstrate to be ‘The Real Intelligence,’ not that “artificial” intelligence you hear about in the news. You might say that AI systems produce predictions and insights that traditionally only humans have created. This might manifest as a solution to a problem, a prediction about future events based on past ones, or even the generation of something novel – a form of creativity. We liken the output of AI to intelligence, but we call it artificial because it didn’t come from people. It’s understandable that people become uncomfortable when a machine accurately predicts future events or carries on a dialogue at the level of another human. These traits and capabilities have previously been reserved for other humans.
Of course, human intelligence isn’t perfect; we forget things, make incorrect predictions, and see patterns where none exist, all while introducing our own biases. Still, by most accounts, we’ve done well with our intelligence among the fauna on this planet. So where is human intelligence flawed, and where is it strong? AI enthusiasts will often say that humans are good at detecting patterns with little data, while AI is good at finding patterns in a sea of data. Agreed, but when humans face a new situation, we already have an immense set of experiences to draw on. We take successful strategies from our past and transfer them to unique situations. While this often works well, it can also fail miserably, a result we can attribute to inherent bias.
For example, if in all our experience we’ve only been exposed to certain genders in certain occupations, we will carry this bias forward into a world where gender no longer predicts careers (and gender isn’t considered binary). The same happens when we train AI models on data from the past that contains only those situations: by our modern standards, the AI systems will behave in a biased fashion. So what can we do about it?
Firstly, if we want humans to shed their gender biases, for example, we need to expose them to new language and a new understanding of roles. If we wish for a non-gender-biased AI system, we need to train new AI models either without the biased historical text or with that text altered to remove the bias. Secondly, we should only ask the new AI model questions that fall within the domain of the unbiased data we used to train it. Lastly, we should check it for bias before putting it to use.
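For anyone curious what that last step might look like in practice, here is a tiny, hypothetical sketch in Python. It swaps gendered words in a few test sentences and flags any case where a model changes its prediction as a result. The word list, the toy classifier, and the test sentences are all invented for illustration; a real audit would use your actual model and a much larger, carefully curated test set.

```python
# A minimal sketch of a counterfactual bias check: swap gendered terms in each
# test sentence and flag cases where the model's prediction changes.
# `predict_occupation` is a stand-in for whatever model you are auditing --
# here it is a deliberately naive keyword rule so the example runs on its own.

GENDER_SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
                "man": "woman", "woman": "man"}

def swap_gender(sentence: str) -> str:
    """Return the sentence with gendered words swapped."""
    return " ".join(GENDER_SWAPS.get(word, word) for word in sentence.lower().split())

def predict_occupation(sentence: str) -> str:
    """Placeholder 'model' with a biased association baked in for the demo."""
    return "engineer" if "he" in sentence.split() else "nurse"

test_sentences = [
    "he fixed the server overnight",
    "she fixed the server overnight",
]

for sentence in test_sentences:
    original = predict_occupation(sentence)
    counterfactual = predict_occupation(swap_gender(sentence))
    if original != counterfactual:
        print(f"Possible gender bias: '{sentence}' -> {original}, "
              f"gender-swapped -> {counterfactual}")
```

The point of the pattern, rather than the toy code, is that the check is cheap: if swapping a single gendered word flips the model's answer, the model has learned an association we probably don't want it carrying into production.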
OK, we are starting to think differently about AI, its inherent biases, and how to eliminate them; time for another pint.