Expert comment

The future of augmented intelligence

Much of the public discourse surrounding artificial intelligence (AI) in business has tended towards the dystopian.

Within the next 25 years, AI could render 60% of occupations obsolete, reinforce gender and racial prejudice through machine-learning algorithms that reflect human bias, exacerbate inequality (creating a ‘digital poorhouse’, in political scientist Virginia Eubanks’ phrase), or even become a superintelligence that rules the world.

Today, we have AI capable of thinking out loud (see SoftBank Robotics’ Pepper, which can vocalise its thought processes), sending people to jail and deepfaking the Queen’s Christmas message. If you believe Elon Musk, humans must become cyborgs or end up as AI’s house cats.

""

In the public mind, at least, there’s one group set to benefit from all this: business. Over the past decade, the corporate world has been seemingly embroiled in a Gadarene rush to embrace AI, whether it’s big tech racing to get AI products to market or companies betting that pandemic-proof, always-on chatbots and other machine-learning systems can cut costs and increase productivity.

Yes, there are potentially sinister elements to AI, but it can also be a force for good: revolutionising healthcare, improving road safety with driverless cars, solving problems for scientists, engineers and weather forecasters alike, alleviating skills shortages (eg Fieldwork Robotics’ raspberry-picking robot, which could tackle the UK’s lack of human seasonal farm labour) or perhaps removing cognitively repetitive tasks and drudgework for millions (read about an AI debating its own ethics at the Oxford Union).

Augmented intelligence involves human input and judgement at every step of the process – systems are optimised to bring benefits, and avoid harm, to humans.

Augmented intelligence

To harness the best of AI, a human-centric approach – augmented intelligence – will be crucial, according to experts at the School, which last year introduced a popular augmented intelligence-centred AI course and Oxford’s first AI-focused degree programme, the Oxford Executive Diploma in Artificial Intelligence for Business.

‘Augmented intelligence is simply the combination of AI and machine learning with human judgement and decision-making: a human-centric approach means that human intelligence must be part of the system,’ explains Andrew Stephen, Associate Dean of Research and L’Oréal Professor of Marketing.

‘Humans write, select and make decisions about data to go into the algorithms,’ says Andrew. ‘But just because they are involved and are the supposed beneficiaries of AI, it doesn’t mean they are central to these systems and how they are run. A “set it and forget it” approach to AI-driven automation, for example, is not very human-centric. But augmented intelligence has human input and judgement throughout, as well as systems that are optimised for delivering human benefits and avoiding harm to humans.’

Augmented intelligence could also generate significant business opportunities: the global augmented intelligence market is projected to reach $121.5 billion (£91bn) by 2030, according to a report from Allied Market Research, with Andrew predicting that the businesses that are more human-focused will be the ones that grow faster and thrive longer.

Trending now

Andrew gives the example of the AI-powered ‘prediction machines’ currently used by supermarkets and retailers to forecast future customer behaviour, perhaps determining which products to put on shelves. As the past two years have shown, issues such as the Covid-19 pandemic, a container ship blocking the Suez Canal or more restrictive international trading regulations, such as those brought on by Brexit, can throw supply chains into disarray.

""

‘Although prediction machines can be fairly accurate during predictable, stable times, you wouldn’t want them to automate decisions,’ says Andrew. An augmented intelligence alternative would be to let human experts contribute to the predictions by including macro-level factors not captured in historical data, and perhaps even embedding experts’ intuition and prior experiences. Involving human experts prevents decisions being automated on the back of bad forecasts.
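
To make that pattern concrete, here is a minimal Python sketch of an augmented forecast, assuming a simple weighted blend: the machine’s prediction is combined with an expert’s adjustment, and nothing is automated without human sign-off. The function names, weighting scheme and numbers are illustrative, not a description of any retailer’s system.

```python
# Minimal sketch: blend a machine forecast with a human expert's adjustment,
# and require explicit human sign-off before the result drives a decision.
# All names, weights and figures are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Forecast:
    value: float       # the model's point prediction (e.g. units of demand)
    confidence: float  # the model's self-reported confidence, 0..1


def blend(model: Forecast, expert_adjustment: float,
          expert_weight: float = 0.3) -> float:
    """Weighted blend of the machine forecast and an expert's correction.

    expert_adjustment captures macro-level factors absent from historical
    data (a pandemic, a blocked canal, new trading rules).
    """
    expert_view = model.value + expert_adjustment
    return (1 - expert_weight) * model.value + expert_weight * expert_view


def decide(model: Forecast, expert_adjustment: float,
           approved_by_human: bool) -> float:
    """Only return an actionable number once a human has signed off."""
    if not approved_by_human:
        raise PermissionError("no automated ordering without human sign-off")
    return blend(model, expert_adjustment)


# The model predicts demand of 1,000 units; an expert who knows a supply
# chain disruption is coming marks demand down by 200 units.
print(decide(Forecast(value=1000, confidence=0.8),
             expert_adjustment=-200,
             approved_by_human=True))  # 940.0 with the default weight
```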

In 2020, academic researchers at the School, including Andrew, released a forecasting tool called Hypertrends that is based on this logic. Hypertrends uses online data (news channels, social media, blogs and reviews) and complex mathematics to make predictions. ‘What our AI system does is find needles in the proverbial haystack of social media,’ says Andrew. ‘But you wouldn’t want to go automatically with the future scenarios we predict. [Instead], those predictions need to be fed to corporate decision-makers and experts who can then integrate these scenarios with their own knowledge to arrive at the best possible business decisions.’

Augmented intelligence could also help prevent some of the more pernicious aspects of AI for business, such as the way its judgements reflect biases in the data on which machine-learning systems are trained (unsurprising given that an estimated 88% of AI researchers are men).

‘Algorithms are not inherently sexist or racist,’ says Andrew. ‘They can become that way – or appear to – and human interventions are needed to monitor their learning to prevent this.’
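
As a rough illustration of what that human monitoring might look like, the sketch below compares a model’s positive-outcome rates across groups and flags it for human review when the gap exceeds a threshold. The 10% threshold and the toy data are assumptions for illustration; real fairness audits use richer metrics.

```python
# Minimal sketch: flag a model for human review when its approval rates
# diverge across groups. Threshold and data are illustrative assumptions.

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions is a list of (group label, model said yes?) pairs."""
    totals: dict[str, list[int]] = {}
    for group, approved in decisions:
        yes, n = totals.setdefault(group, [0, 0])
        totals[group] = [yes + int(approved), n + 1]
    return {group: yes / n for group, (yes, n) in totals.items()}


def needs_human_review(decisions: list[tuple[str, bool]],
                       max_gap: float = 0.10) -> bool:
    """True when the best- and worst-treated groups differ too much."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values()) > max_gap


sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(selection_rates(sample))     # {'A': 0.667, 'B': 0.333} (approx.)
print(needs_human_review(sample))  # True -> route to a human auditor
```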

Early diagnosis

The human judgements and decision-making of augmented intelligence could also be used in tools to prevent social media users being manipulated by algorithms that lead them down the rabbit hole of extremism or conspiracy theories.  

In healthcare, AI has been used to spot early signs of oesophageal cancer and dementia, aid the rapid development of the Covid-19 vaccines and ease loneliness for elderly people via the use of robotic pets.

""

Despite the optimism, the use of AI in healthcare could still backfire. For example, using AI to cure cancer could result in the doomsday scenario recently outlined in an article in The Guardian by Professor Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence in California, whereby it ‘would probably find ways of inducing tumours in the whole human population, so that it could run millions of experiments in parallel, using all of us as guinea pigs’.

Augmented intelligence could also be used to reduce the huge environmental impact of AI, such as the electricity needed for machine learning or cryptocurrencies. As Andrew explains, ‘A lot of AI and machine learning used in business is what I call “brute force” analytics – cloud services have made it easy to run thousands, even millions, of models and just see what’s “best”. That’s costly in terms of energy, when each model estimated or trained does have a CO2 impact. Humans are needed to design “experiments” that don’t rely on brute force approaches, but instead analyse only what needs to be looked at.’
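
A toy comparison makes the point. The candidate settings, the human-chosen subset and the per-run energy figure below are all illustrative assumptions; the gap in the number of models trained is the substance of the argument.

```python
# Minimal sketch: "brute force" analytics fits every combination in a grid,
# while a human-designed experiment tests only a few plausible hypotheses.
# Settings and the per-run energy figure are illustrative assumptions.

from itertools import product

learning_rates = [10 ** -i for i in range(1, 7)]      # 6 options
tree_depths = list(range(1, 21))                      # 20 options
feature_sets = ["all", "recent", "price", "promo"]    # 4 options

# Brute force: train every combination and "just see what's best".
brute_force_runs = len(list(product(learning_rates, tree_depths, feature_sets)))

# Designed experiment: an analyst, drawing on prior experience, fixes the
# feature set and tests only a handful of plausible settings for the rest.
designed_runs = len(list(product([1e-2, 1e-3], [4, 8, 12], ["recent"])))

ENERGY_PER_RUN_KWH = 0.5  # assumed average training cost per model
print(f"brute force: {brute_force_runs} runs, "
      f"~{brute_force_runs * ENERGY_PER_RUN_KWH:.0f} kWh")  # 480 runs, ~240 kWh
print(f"designed:    {designed_runs} runs, "
      f"~{designed_runs * ENERGY_PER_RUN_KWH:.0f} kWh")     # 6 runs, ~3 kWh
```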

Human factor

As such, a human hand is essential. ‘If an AI system is amazing at detecting hard-to-identify tumours on scans, the doctor and patient need to be involved in the process, because that algorithm might be great at applying computer vision to scans, but it might not be intelligent enough to know about the patient and their lifestyle: factors that likely matter when the doctor determines the optimal treatment plan,’ says Andrew.


""

In their gung-ho adoption of AI, many businesses have neglected the customer journey. As the thousands of online review forums that berate bank chatbots for turning innocent enquiries into bureaucratic nightmares will attest, many customers prefer advice from a human expert when it comes to discussing, say, fixed-rate mortgages or investment information. This was borne out in the 2021 ‘Blame the Bot’ paper, authored by Andrew, Cammy Crolic, Rhonda Hadi and Felipe Thomaz, which found that deploying humanlike chatbots can negatively affect customer satisfaction, overall firm evaluation and subsequent purchase intentions.

Let both machines and humans do what they’re good at. Together, who knows what we can achieve.

Embedding AI

Still, the widespread embedding of AI in business may yet result in significant job losses. According to a recent report by the World Economic Forum, the next wave of automation could result in 85 million (predominantly lower-skilled) jobs across the globe being displaced by 2025. Yet, the same report also estimates that lost employment will be outweighed by the number of new jobs – 97 million – created in the next four years. Andrew foresees new roles for algorithm trainers/coaches, AI supervisors who oversee algorithms’ learning, plus ethics and bias specialists.

Meanwhile, as governments draft new legislation surrounding the safety, security and taxing of AI, new jobs could be invented (for example, in accounting to assess the taxation of driverless cars).

Ensuring AI is integrated into organisations in a way that favours both business and humanity is no easy task, says Andrew. ‘Leaders have to learn how to collaborate with algorithms,’ he says. ‘They will need to deeply understand how AI systems function so they can look under the hood and identify the right places in their systems or processes to bring humans in, and what human expertise is needed.’

New regulation could aid the rise of augmented intelligence. Last April, the EU produced the first draft of omnibus regulations for AI, which could ban machines from mimicking humans and subject businesses to requirements such as human oversight, transparency, cybersecurity, risk management, monitoring and reporting. It may become mandatory for AI systems to be encoded with algorithmic ethics, to be traceable to their creators, or to be built with a series of augmented intelligence checks where the machine stops and asks for human advice.
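
The last of those ideas, a machine that stops and asks for advice, maps onto a simple human-in-the-loop gate. The sketch below is one possible shape, with an assumed confidence threshold and logging format; none of it is drawn from the EU draft text.

```python
# Minimal sketch: act autonomously only above a confidence threshold,
# otherwise stop and ask a human; log every decision for traceability.
# Threshold, labels and log format are illustrative assumptions.

import logging
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")


def gated_decision(suggested: str, confidence: float,
                   ask_human: Callable[[str], str],
                   threshold: float = 0.9) -> str:
    """Return the model's suggestion if confident enough, else defer."""
    if confidence >= threshold:
        logging.info("auto-decision %r (confidence %.2f)", suggested, confidence)
        return suggested
    logging.info("deferring to human: model suggested %r (confidence %.2f)",
                 suggested, confidence)
    return ask_human(suggested)


# A low-confidence prediction is routed to a human reviewer.
decision = gated_decision(
    "approve", confidence=0.62,
    ask_human=lambda s: input(f"Model suggests {s!r}; your call: "),
)
```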


Such legal issues – and the importance of a human-first approach to AI – are taught on the School’s aforementioned diploma in AI in business, which helps develop the strategic skills that will be needed by future leaders in the age of AI. The diploma, and the need for augmented intelligence, comes at something of a tipping point for the technology, which is advancing rapidly (a computer could match the human brain by 2052, according to tech analysts at the Open Philanthropy Project), despite the world being ill-equipped for such an epochal event.

‘We’re not placing blind faith in the machines,’ says Andrew. ‘We are advocating a human-centric approach, which will ensure our society does its best to prevent AI from doing harmful things.’

""

‘There’s no magic to AI: it’s just a set of tools that organises, structures and analyses data. But the future isn’t captured in data yet: it’s up to our imaginations as business leaders. AI alone won’t have better business judgement than us humans. But we can’t sift through the world’s data like AI can. Let machines do what they’re good at, and let humans do what they’re good at. If we join forces, who knows what we can achieve.’

Watch Professor Andrew Stephen’s keynote, ‘Using Digital Technology to Become More Human’, in which he talks through three of his research-based case studies.