The Authenticity Trade-Off: How AI is reshaping trust in advertising
As the Advertising Standards Authority increases its focus on misleading imagery, and platforms like TikTok continue to favour raw, unfiltered content, brands are facing a different kind of creative pressure. At the same time, categories like food are already under scrutiny through HFSS regulation, with higher expectations around realism and representation. This is the environment AI-generated advertising is entering.
AI offers clear advantages in speed, scale and cost. But our latest research at The Harris Poll UK suggests a more complicated picture. While AI may make it easier to produce advertising, it may also make it harder to earn belief.
Consumers think they can spot AI, but confidence varies
Almost half of consumers (46%) say they are confident they can identify AI-generated imagery in advertising. That confidence is not evenly spread; among 18- to 24-year-olds, 72% believe they can spot AI. This falls to 21% among those aged 65 and over. Notably, 17% of over-65s say they see no difference between AI-generated images and reality.
Even with improvements in image quality, people still rely on instinct. Images that look too perfect (51%), feel unrealistic (43%), or contain small inconsistencies (42%) tend to raise suspicion. For 36%, AI-generated imagery simply looks false.
This reflects a wider cultural shift we’re seeing, with content that feels spontaneous and unpolished often performing better on platforms like TikTok than highly produced work. In that context, perfection can work against credibility, rather than reinforcing it.
The clearest finding: trust falls when AI is involved
Seven in ten consumers trust adverts created entirely by humans. This falls to four in ten when AI tools are used, and to just two in ten when an advert is generated by AI alone.
Consumers are not rejecting AI outright. They are pushing back on the idea of AI working without human input. Six in ten say a person should always be involved in creating advertising that uses AI. A further 37% are only comfortable when AI is guided by a human. This suggests that how AI is used matters as much as whether it is used at all.
This has implications beyond brand perception. The Advertising Standards Authority has been taking a firmer approach to misleading advertising. AI-generated imagery that closely mimics reality is likely to come under similar scrutiny. After all, what consumers see as misleading often becomes a regulatory concern over time.
There’s a transparency paradox
Consumers are clear that they want openness. Nearly eight in ten (79%) say brands should always disclose when AI has been used.
But disclosure is not straightforward. Two-thirds of consumers say AI-generated ads feel less authentic (66%) or suggest less effort has gone into them (64%). Being transparent may meet expectations, but it can also affect how the work is valued. So, the issue is not just whether to disclose, but how.
Context matters
Our research shows that the acceptance of AI depends heavily on where it is used. Consumers are less comfortable with AI-generated imagery in categories where realism is expected:
Beauty and skincare (24% acceptance)
Fashion modelling (28%)
Travel destinations (29%)
Food photography (31%)
For UK food brands, this is particularly relevant. Advertising in this space is already under pressure through HFSS regulation, and expectations around accurate representation are high. When 70% of consumers say AI can make an advert feel misleading, the use of synthetic imagery adds another layer of risk. In areas that are already closely watched, AI is likely to attract more attention. There is more acceptance in less literal uses, such as backgrounds or conceptual elements. Here, 53% of consumers are comfortable with AI.
AI does not automatically improve perception
A common assumption is that AI signals innovation, but the data does not support this. Only 39% say it makes a brand feel technologically forward. Just 23% think it feels more culturally in touch. Similar proportions say AI-generated ads are better (23%), more appealing (27%), or more creative (29%). Using AI does not guarantee a positive impression and, in some cases, it may have the opposite effect.
What this means for brands
For brands, the question is no longer whether to use AI, but how to use it without eroding trust.
AI as a production tool, not a shortcut: AI needs to be approached as a production tool rather than a creative shortcut. When it is used to support execution, it can improve efficiency without being visible to the audience. When it becomes central to the creative itself, it risks signalling reduced effort and lowering the perceived value of the work.
Human involvement as a signal of quality: At the same time, human involvement is becoming more important, not less. As synthetic content becomes more common, audiences look for reassurance that real people are behind what they see. Showing the craft, the process, and the individuals involved is no longer just a storytelling device, but a way of reinforcing credibility.
Rethinking transparency: Transparency also requires more careful consideration than a simple disclosure. While consumers expect brands to be open about the use of AI, stating it outright can reinforce perceptions of lower effort or reduced authenticity. How this is communicated needs to be handled as part of the creative, not treated as a compliance exercise.
Where realism matters most: Not every use of AI carries the same level of risk. Where realism is central to the message, such as in product, food, beauty or travel imagery, the stakes are higher and the margin for error is smaller. In more conceptual or background applications, there is greater flexibility to experiment without undermining trust.
Managing the timing gap: Brands also need to be aware of the gap between industry adoption and consumer acceptance. The pace of technological change is fast, but attitudes take longer to shift. The fact that AI-generated creative is possible does not mean it will be persuasive, and moving too quickly can create distance rather than engagement.
Innovation is judged by outcomes, not tools: There is a tendency to assume that using AI will signal innovation, but consumers do not judge the tools behind the work. They respond to what they see and how it makes them feel. If the outcome reduces authenticity or trust, it is unlikely to strengthen perceptions of the brand.
From creative decision to trust decision: Taken together, this points to a broader shift in how AI should be managed. Its use in advertising is no longer just a creative or operational decision, but one that sits alongside wider considerations of trust, reputation and governance as expectations continue to evolve.
The rise of crafted versus generated brands: Over time, this may lead to a clearer distinction between brands that prioritise crafted, human-led work and those that rely more heavily on generated content. In a landscape where production is easier than ever, work that feels considered, intentional and human is more likely to stand out.
In practice, this means the brands that succeed with AI won’t be those that use it most, but those that use it in ways people can still believe.
As AI becomes more embedded in the creative process, understanding how it influences trust will be critical. At The Harris Poll UK, we help brands test and optimise creative in real-world contexts, helping ensure innovation strengthens, rather than undermines, effectiveness.