
How can we correct the warped mirror that AI holds up to nature?

The world of advertising, as we all know, is fake. In the real world, people do not wander into their neighbours’ houses and strike up conversations about funeral planning or bathroom habits; most houses look lived-in rather than inspection-ready; and it is rare that every single member of a group of friends is preternaturally good-looking.

But we are usually prepared to suspend our disbelief. Effective marketing strikes a balance between reflecting aspects of the real world back at us in a relatable way and offering an aspirational version of it, indicating what the world can look like – or perhaps what we can look like – when certain products or services are bought.

""

Artificial intelligence (AI) has the potential to contribute to both sides of that balance. Indeed, it is already doing so, enabling advertisers to deliver increasingly personalised content and idealised yet realistic imagery. Unchecked, however, it is also inclined to deliver stereotypes and clichés, which in some AI image tools (such as Google’s Gemini, which has paused its ability to generate images of people) can veer quickly into outright offensiveness.

The Future of Marketing Initiative symposium on 18 April 2024 drew on recent research and panel discussions with expert practitioners to tease out some of the biggest issues and challenges in using AI in marketing, and to share current examples of best practice.

""

AI can only work with what it’s given.

AI tools ‘learn’ about the world from massive datasets scraped from the internet. Text-based tools such as ChatGPT are trained on billions of words; image-generating tools are trained on pictures paired with captions – often the alt-text that helps screen-reader software describe images to blind users. The ‘intelligence’ of AI is not of the critical-thinking kind: it simply soaks up the training data it is given and, when supplied with a prompt, generates an image from the statistical patterns it has absorbed from those pictures and captions.
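
As a concrete illustration of how little stands between the training data and the output, here is a minimal sketch of prompting a text-to-image model. The article names no specific tool, so the library (the open-source diffusers package) and the model weights used here are assumptions; the point is that everything in the resulting picture is shaped by the image–caption pairs the model was trained on.

```python
# Minimal sketch, assuming the Hugging Face `diffusers` library and
# the publicly released Stable Diffusion 2.1 weights.
import torch
from diffusers import StableDiffusionPipeline

# Download a model whose "knowledge" is entirely the image-caption
# pairs it was trained on; nothing here reasons about the world.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# The prompt is matched against learned patterns, not checked for
# accuracy or fairness; biased captions produce biased pictures.
image = pipe("a portrait of a chief executive").images[0]
image.save("ceo.png")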

Many of the most distasteful AI-generated images have come about thanks to the unreliability or bias built into those alt-text captions – written by people, but not with the intention of creating training data for AI tools. Google’s apparent attempts to correct for this resulted in the now notorious pictures depicting white historical figures as people of colour.

Diversity, equity, and inclusion practices cannot be faked.

So if the world reflected back at us by AI is skewed, what can marketers do to redress the balance? What they cannot do, as Levi Strauss discovered to its cost, is use AI to create a diverse range of synthetic human models. Worse still, in the press release announcing the experiment, Levi Strauss also framed it as an exercise in sustainability, testing the use of these models ‘to supplement human models, increasing the number and diversity of our models for our products in a sustainable way’.

The backlash was perhaps unsurprising, although it was interesting that consumers and commentators were not against using synthetic humans in advertising altogether. What they objected to was Levi Strauss claiming to promote diversity and inclusion while bypassing actual humans from the groups it sought to represent.

""

This is consistent with findings from FOMI’s recent research into consumer reactions to synthetic humans in advertising. People do not seem to mind very much when AI-generated images depict over-represented groups (i.e. able-bodied Caucasians). However, learning that images of under-represented groups were AI-generated markedly lowered participants’ opinions of the fictional firms (created for research purposes) that had commissioned those images.

These firms were seen as inauthentic, disingenuously signalling that they care about diverse representation but without underpinning it with action. Even worse, they were thought to be exploitative, actively profiting from depictions of diversity while getting away without hiring or paying models from the under-represented groups.

The message is clear: unless your company has already demonstrated a real commitment to diversity and inclusion, using synthetic humans will not be well received.

Echo chambers can be reinforced.

The other AI-adjacent tools shaping the world as it is shown to us online are the recommendation algorithms that register how consumers respond to content and obligingly serve up more of the same. FOMI’s research also found that people react positively to synthetic humans who moderately resemble them, but negatively to synthetic humans who closely resemble them. Combine that finding with these algorithms and the logical outcome is that we will each come to see only people who look rather like us, potentially leading to less compassion for and understanding of difference.
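
To make that feedback loop concrete, here is a deliberately toy simulation – invented numbers, not any real platform’s algorithm. Content is reduced to a single feature (how closely the pictured face resembles the viewer), engagement peaks at moderate resemblance as the research suggests, and the feed repeatedly serves whatever is most similar to the last thing engaged with.

```python
# Toy feedback loop (illustrative only, not a real recommender):
# each item is scored by how much its face resembles the viewer
# (0.0 = not at all, 1.0 = near-identical).
import random

random.seed(0)
pool = [random.random() for _ in range(1000)]   # available content
feed = random.sample(pool, 20)                  # initial, varied feed

def engagement(sim):
    # Stylised version of the FOMI finding: moderate resemblance
    # engages, near-identical resemblance puts people off.
    return 4 * sim * (1 - sim)                  # peaks at sim = 0.5

print(f"start: feed spans {min(feed):.2f}-{max(feed):.2f}")
for step in range(5):
    liked = max(feed, key=engagement)           # most-engaged item
    # Serve the 20 items most similar to what was engaged with.
    feed = sorted(pool, key=lambda s: abs(s - liked))[:20]
    print(f"step {step}: feed spans {min(feed):.2f}-{max(feed):.2f}")
```

Within a single step the simulated feed collapses from the full range of faces to a narrow band of moderately similar ones – the homogenisation described above.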

""

Unrestrained or misused, AI has the potential to amplify prejudice and exclusion. And like many new technologies, it is advancing faster than the development of principles for its use that would limit that toxic potential. The marketing industry, as the creator of much of the content that populates AI training datasets, has both the power and the responsibility to define and adopt those principles, starting with:

Increasing understanding: both of exactly how the tools work and of the spillover effects and unintended consequences of AI.

Improving data: as the originators of much of the content used to train AI systems, companies are in a strong position to improve the quality, diversity, and tagging of that content (a toy caption audit is sketched after this list).

Scrutinising outputs: AI image generation may be fast, but the time it saves gives companies, and particularly marketing departments, more scope to consider carefully what they are publishing and how they are using it.

Looking below the waterline: marketers are not the only people prone to distraction by the newest shiny toy, but they need to remind themselves to keep looking for what is not obvious – the trouble that is brewing but not yet being talked about.
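
As a sketch of what ‘improving data’ could look like in practice, a company might audit how its own image captions – the raw material for future training sets – represent different groups. The captions and the term list below are invented purely for illustration.

```python
# Hypothetical audit of caption metadata; the captions and the
# term list are made up for illustration.
from collections import Counter

captions = [
    "businessman giving a presentation",
    "businessman shaking hands in an office",
    "young woman in a wheelchair using a laptop",
    "elderly man gardening with his grandson",
    "group of male executives around a boardroom table",
]

TERMS = {"businessman", "woman", "man", "male", "wheelchair", "elderly"}
counts = Counter(
    word for caption in captions for word in caption.split() if word in TERMS
)
print(counts.most_common())   # a lopsided tally flags skewed tagging
```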

""