Text-to-video generative AI (GenAI) models (for example, Sora) that can create realistic and imaginative scenes from text instructions have spurred a tectonic shift in the advertising industry. Brands and agencies are innovating at a rapid pace to leverage AI-generated video content in their advertising. Behind the scenes, brands, agencies, and researchers are assessing the feasibility and pitfalls of using AI-generated creative in how an ad is imagined, produced, and evaluated.
The time and cost savings that AI-generated creative can provide during the development journey of an ad campaign are very attractive for brands. Why spend time and money to create an animatic or a storyboard if you can feed in a text prompt and see a rough draft of your idea brought to life in a matter of moments? Some even believe that AI-generated content will eliminate the need to produce finished copy and will become the standard in the ad creation process. But are we there yet?
How does the brain process AI-generated advertising?
To the naked eye, some of the strongest, high-quality AI videos may look “real.” At least, that’s what we think, or what we say we think…but what does the brain actually tell us? Neuroscience veterans from NIQ BASES Advertising sought to dig deeper into how the brain responds to AI-generated content, centering a study around two key questions:
When we watch an AI-generated ad, is there anything happening at a deeper level that we might not be able to articulate?
If so, how might this affect the success potential of AI-generated ads, and what is the long-term implication for our clients’ brands?
We selected a set of AI-generated, branded advertisements that fell along a spectrum of high to low quality. We then evaluated them using System 1 methodologies—electroencephalogram (EEG), eye tracking, and implicit response time—in addition to traditional survey measures. In a forthcoming piece, we will share our findings in more detail, but our initial discoveries raise important questions for CMOs and advertisers about use cases for generative AI.
AI-generated ads are often easily identifiable
Consumers were very attuned to the quality of the creatives. When asked about their general impressions of the ad, they—spontaneously and unprompted—identified most of them as being AI-generated. In fact, the only ad that consumers did not immediately perceive as such was created by an advertising professional through considerable iterative editing. Consumers also rated all AI-generated ads as being significantly more annoying, boring, and confusing than non-AI-generated ads.
What this means for brands: AI-generated content can create a negative halo in consumers’ minds, even when they do not explicitly perceive it as such. When using AI, brands should ensure careful prompting and iteration to deliver a quality final product.
AI-generated ads elicit weak memory activation
In comparison with traditional video ads, we found that memory—as measured in the brain with EEG—was weak among most tested AI-generated ads, even those perceived as “high quality.”
Memory helps us understand whether what we are seeing fits with something that we already know or have experienced—essentially, whether we have an existing template within our brain to which we can match what we are seeing. This decreased memory engagement was evident even for the most edited and polished ad: Consumers’ brains were registering that something wasn’t quite right, even if they weren’t explicitly aware of it.
What this means for brands
Because memory plays a key role in whether we ultimately purchase a product, download an app, talk to our doctor, or donate to a cause, this finding has implications for how motivated a consumer will be to act on what they have just viewed.
AI-generated ads are good at activating brand associations
Even the lowest-quality AI-generated ads were able to successfully convey the intended brand identity. Each of these ads strengthened the mental network of associations for its respective brand (as measured by EEG in a pre-to-post exercise).
In generating visuals, AI models pull from existing representations of the brand in their training data (e.g., the way the brand is portrayed in the media, on the internet, etc.), essentially tapping into a visual stereotype of the brand. This makes brand associations immediately accessible, because our brains respond well to consistency and repetition. Creatives are often looking to refresh their brand and stretch into new territories, which may require time and repetition to generate a strong brand association. AI, however, leverages what is known and familiar.
What this means for brands
The implication is not that AI-generated ads should replace all branding (the world would be so boring!), but there might be a role for AI to play in identifying the strongest branding assets for brand managers to leverage in advertising and marketing.
Keep in mind, however, that consumers reported feeling less positive about the brands for all AI-generated ads, despite strong implicit brand associations. There is a clear risk in using AI-generated ads: Although the brand may come through well, the negative halo of AI may negate its benefits.
AI-generated ads must be high quality to effectively deliver a message
Ad segments whose visuals looked particularly strange or unrealistic elicited the lowest memory activation and highest attention activation, as measured by EEG. This indicates that a high cognitive effort is needed to understand what is happening during these moments.
The brain has a limited capacity for processing information. If it is overloaded with complex or unrealistic visuals, key information may be missed. Humans are also incredibly good at detecting any deviation in how a human stereotypically looks and moves—the Uncanny Valley effect. This means we can feel unsettled by human-like AI and robots that closely, but not quite, resemble humans. We can experience a feeling of strangeness, unease, and even fear in response to these objects.
What this means for brands
Brands and agencies who want to use AI-generated ads for early-stage idea testing must keep in mind that their quality can have a big impact on results. When viewers are distracted by odd visuals or are fixated on an unrealistic person or object, this increased cognitive effort ultimately affects how they lean in and the message they take away.
What’s next for GenAI in ad testing?
Our findings indicate that consumers are indeed very sensitive to the authenticity of ad creatives, on both the implicit (nonconscious) and explicit (conscious) levels. At present, even the best AI-generated videos can’t quite trick the brain, leading to decreased ad effectiveness. Although this technology is not quite ready for prime time, it can still help brand managers identify key branding assets, create storyboards, and generate early-stage ideas, which can greatly facilitate the ad development process. And as AI models become increasingly sophisticated, it is highly likely that their ability to produce realistic videos will only improve—empowering agencies and marketers with a valuable resource to create new and effective ads.
Authors
Avgusta Shestyuk, PhD
is the Global Head of Science and Research, Neuroscience for the NIQ BASES Product Leadership team
Megan Belden
is Vice President, Global Lead, for NIQ BASES Advertising
Come see us at CES 2025!
NIQ will be speaking about these findings across generational considerations at the Consumer Electronics Show 2025 panel session, Adapting to Change: Demographic Shifts in Advertising Strategy, on Thursday, January 9 at 10:00 a.m. PST.
CMO Outlook 2025
Our CMO Outlook 2025 report breaks down how marketing leaders should be thinking about AI and data-driven decisioning heading into 2025.