As another turbulent year in media and tech comes to an end, VIP+ analysts are revisiting their 2024 forecasts — and what they got right and wrong. In this installment, Audrey Schomer assesses how generative AI has impacted Hollywood.
The role of generative AI in Hollywood has been a challenging topic this year. Key aspects included:
• the development of high-fidelity video generation;
• growing interest in fine-tuning image and video models for use in content production;
• content licensing for AI training, set against lawsuits by content owners over the unlicensed use of their content;
• the design of methods to enable the authorized use and monetization of talent digital replicas and to combat the misuse of talent likeness in deepfakes;
• and ongoing concern about ethical data and its downstream importance to the usability of generative AI tools in content production by enterprise users.
It’s been important to get many perspectives from a meaningful cross-section of people: founders and leaders at generative AI companies building various tools for image, video and voice synthesis; AI VFX services focused on deepfake special effects, such as face-swapping and lip-sync; studio execs; concept and storyboard artists; VFX CTOs and supervisors; content localization networks; ethical technologists; entertainment lawyers and IP lawyers; talent agency execs and reps; deepfake detection developers; and companies leading the way on standards creation for media provenance.
Here’s a review of how VIP+ tracked the trends driving generative AI’s impact on Hollywood and entertainment this year:
Video Generation Has a “Usability” Problem
I have argued that AI video has a usability problem and still can’t be used as onscreen “footage” on major studio productions. Studio use of video generation models has been constrained by two main limitations: (1) insufficient performance to meet the needs of the most premium content and (2) copyright uncertainty. For major studios, this remains the reality.
When OpenAI first announced Sora in February, I contended that even such an impressive video model wasn’t ready to “replace” Hollywood, because these models still lacked controllability, offered no clear and reliable ways to maintain consistency of characters, objects or environments, and spontaneously inserted visual irregularities (hallucinations) such as occlusions, morphing and incorrect anatomy.
As I further described in the June special report “The State of Generative AI in Hollywood,” “Repeatedly, visual and VFX artists and AI leaders alike confided to VIP+ the main challenge and pitfall of today’s video generation models is unpredictability and lack of fine-grained control over what AI outputs in response to a text prompt.”
As expected, video generation has only continued to improve in photorealism, most impressively in Google DeepMind’s Veo 2, launched earlier this month. But controllability remains a challenge for video generation overall. Likewise, even Sora and Veo 2 exhibit hallucinations in their outputs. Hallucinations may never completely go away, even if they can be substantially minimized.
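To make that lack of fine-grained control concrete: with publicly available video models, the levers are essentially a text prompt, a seed and a frame count, with no per-shot handle on character, blocking or continuity. Here is a minimal sketch, assuming the open ModelScope text-to-video checkpoint run through Hugging Face diffusers (illustrative only; Sora and Veo 2 are accessed through different interfaces):

```python
# Minimal sketch: an open text-to-video model exposes little more than a
# prompt, a seed and a frame count. Checkpoint and settings are illustrative.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator("cuda").manual_seed(42)  # the seed is the main rerun "control"
frames = pipe(
    "a detective walks down a rain-soaked alley at night, film noir lighting",
    num_frames=16,
    generator=generator,
).frames[0]
export_to_video(frames, "noir_alley.mp4")
```

Rerun with a different seed and the character, wardrobe and set typically all change, which is exactly the consistency problem artists described to VIP+.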
Yet even as models improve, gen AI’s copyright conundrum remains a final critical barrier to enterprise use on content intended for commercial distribution. Two legal uncertainties pose risk: it’s still unclear if and when AI content is copyrightable, and whether AI content could infringe if it was made with models trained on unlicensed copyrighted content (which most video models invariably have been).
Copyright concerns ranked among the biggest obstacles to using generative AI cited by media and entertainment decision makers and workers alike, according to a May 2024 VIP+ survey conducted by HarrisX.
Until more legal clarity comes to the market, studios are still using video generation for previsualization rather than final frames and are restricting VFX teams from using AI content as more than reference material.
In a future scenario where video generation models are “usable,” video generation would become simply another way of creating screen visuals. Several possibilities have been proposed, including B-roll, insert shots, establishing shots, pickup shots, simulations (e.g., explosions) and backgrounds. Video models could also enable “impossible shots,” as I argued in the June report: “Video models could further enable ‘footage’ otherwise unachievable with cameras or traditional VFX.”
Despite these challenges, several AI studios have emerged in 2024 with a general goal of (re)imagining film and TV production workflows with generative AI, including video generation.
Studios Are Experimenting With Fine-Tuning Image and Video Models
The “usability” challenges of video generation models don’t mean that major studios haven’t taken a serious look at the tech. Fine-tuning video generation models with catalog film and TV content is underway.
In July, I homed in on fine-tuning as an opportunity studios were exploring, shortly after Runway launched the alpha version of its latest video model update, Gen-3. Then in September, Lionsgate announced its Runway partnership to develop an exclusive video model for internal use by the studio and affiliated filmmakers, custom trained (fine-tuned) on some of the studio’s library of film and TV content.
Lionsgate isn’t the only one, I indicated: “Fine-tuning to have exclusive video models trained on owned content IP is likely proceeding across the industry, a source told VIP+.”
The practical benefits of fine-tuning are still mostly a mystery. I’ve argued that studios are likely considering it because it gives them a private tool for internal experimentation, one they can offer to filmmakers as they look to reduce production budgets. It’s also “safer” than licensing their content out and lets them test-run a version of a model trained on their own catalog.
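For a sense of what “custom trained (fine-tuned) on a studio’s library” can mean mechanically, here is a minimal sketch, assuming LoRA fine-tuning of an open image-diffusion model with Hugging Face diffusers and peft. The base checkpoint, data and hyperparameters are placeholders; the actual Lionsgate/Runway pipeline is proprietary and undisclosed.

```python
# Hedged sketch: LoRA fine-tuning of an open diffusion model on a studio's own
# frames and captions. Only small adapter weights are trained; the base model
# stays frozen, which is part of why fine-tuning is cheap relative to training.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler
from peft import LoraConfig

base = "runwayml/stable-diffusion-v1-5"  # illustrative base checkpoint
pipe = StableDiffusionPipeline.from_pretrained(base)
unet, vae, text_encoder, tokenizer = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")

# Freeze everything, then attach small trainable LoRA adapters to the UNet.
for m in (unet, vae, text_encoder):
    m.requires_grad_(False)
unet.add_adapter(LoraConfig(r=8, lora_alpha=8,
                            target_modules=["to_q", "to_k", "to_v", "to_out.0"]))
optimizer = torch.optim.AdamW(
    [p for p in unet.parameters() if p.requires_grad], lr=1e-4)

def train_step(pixel_values, captions):
    """One denoising-objective step on catalog frames (B, 3, H, W in [-1, 1])."""
    latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
    ids = tokenizer(captions, padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    cond = text_encoder(ids)[0]
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    loss = F.mse_loss(unet(noisy, t, encoder_hidden_states=cond).sample, noise)
    loss.backward(); optimizer.step(); optimizer.zero_grad()
    return loss.item()
```

The trained adapter weights stay in-house, which is one concrete way a fine-tuned model functions as a private internal tool rather than as an upgrade to a widely used general model.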
Major Studios Have Reasons Not to License Their Content to Train AI, but Others Are Doing Deals
Amid the year’s rash of licensing deals, no major studio has yet struck a content licensing deal for AI model training, at least not publicly. Studios are withholding their content for multiple reasons:
• a lack of deal precedents that would inform how much their content is worth and how a fair deal should be structured for this fundamentally new purpose (AI training is not the same as distribution);
• uncertainty over whether studios have the “right rights” to license, since actors or other third parties might interpret licensing as a breach of preexisting contracts on past productions (and conceivably sue);
• and a preference for keeping competitive advantage in-house (such as by fine-tuning a video model on their content rather than conferring improved capabilities on a widely used general model).
It remains to be seen whether any major studio bites on the licensing opportunity. Elsewhere, dataset brokers like Calliope Networks (recently acquired by Protege to become its media data arm, Protege Media) have been building large, high-quality datasets of film, TV and creator video and actively negotiating nonpublic licensing deals with AI developers building video generation models.
Celebrity Talent, Agency Reps and Media Companies Will Need Solutions to Combat Deepfakes
This year has seen a growing number of nonconsensual, problematic celebrity deepfakes and synthetic content spreading online, misappropriating the name, image, likeness and voice (NILV) of entertainers, from actors and music artists to TV personalities and online creators, for fraudulent ads and deepfake nonconsensual intimate imagery (NCII).
Ordinary, manual approaches to finding these infringements and executing takedowns will simply be inadequate to address the scale, complexity and subtlety of the problem, as I argued in VIP+’s December special report “Gen AI, Celeb Deepfakes & Digital Replicas.” Deepfake detection capable of identifying not just whether a piece of content is synthetic but whether it contains a specific individual’s NILV will be increasingly needed.
This capability has come from third-party tech providers like Loti. Social platforms, where celebrity deepfakes commonly circulate, are also recognizing the need and building tools designed to help creators, artists and other public figures detect and manage AI content that contains their likeness.
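As a rough illustration of the two-stage check that NILV monitoring implies (first, is the content synthetic; second, does it contain this person), here is a hedged sketch using the open-source face_recognition library for the likeness match. The `is_synthetic` classifier is a hypothetical placeholder, and none of this reflects Loti’s or any platform’s actual stack.

```python
# Hedged sketch of a two-stage NILV check: (1) flag content as synthetic,
# (2) match any face in it against a talent's enrolled reference photos.
# `is_synthetic` is a hypothetical placeholder for a deepfake classifier;
# the face match uses the open-source face_recognition library.
import face_recognition

def flags_talent_deepfake(image_path, reference_encodings, is_synthetic,
                          tolerance=0.6):
    image = face_recognition.load_image_file(image_path)
    if not is_synthetic(image):      # stage 1: generic synthetic-media detection
        return False
    for encoding in face_recognition.face_encodings(image):
        # stage 2: does this face match any verified reference of the talent?
        if any(face_recognition.compare_faces(reference_encodings, encoding,
                                              tolerance=tolerance)):
            return True
    return False

# Enrollment: encodings computed once from verified photos of the talent.
references = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in ["talent_ref_1.jpg", "talent_ref_2.jpg"]
]
```

Production systems add video support, scale and adversarial robustness, but the shape of the problem, synthetic-or-not plus whose likeness, is the same.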
Talent Will Also Need Solutions to Track and Trace Authorized Digital Replica Uses
Authorized uses of talent’s AI likenesses are also beginning to emerge. Projects could originate from film, TV, gaming or animation studios, sports leagues and brands as well as tech and generative AI companies.
Talent who engage and authorize use of a digital replica asset, such as a voice clone or 3D scan, will need assurance that the asset is securely stored, mechanisms that let them give or deny consent for its use, and ways to track and trace how the asset is being used by a specific partner in a specific context as content spreads across platforms.
So far, industry focus has centered on developing robust, interoperable provenance techniques and unified standards that will work for distributed content.
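One building block for that kind of track-and-trace is a cryptographically signed consent record that a partner or platform can verify before using a replica asset. The sketch below is illustrative only, in the spirit of content-credential standards like C2PA rather than an implementation of them; the record schema and field names are assumptions, and it uses an Ed25519 signature from Python’s cryptography library.

```python
# Illustrative sketch, not the C2PA spec: a talent-consent record for a digital
# replica use, signed so downstream partners can verify who authorized what.
# Schema and field names are assumptions for demonstration only.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

talent_key = Ed25519PrivateKey.generate()          # held by talent/agency
consent = {
    "replica_asset": "voice_clone_v2",             # which stored asset
    "licensee": "ExampleStudio",                   # who may use it
    "scope": "dubbed dialogue, Project X only",    # context of use
    "expires": "2025-12-31",
}
payload = json.dumps(consent, sort_keys=True).encode()
signature = talent_key.sign(payload)

# A platform or partner verifies the record before using the asset.
try:
    talent_key.public_key().verify(signature, payload)
    print("consent verified:", consent["scope"])
except InvalidSignature:
    print("reject: consent record not authorized by talent")
```

In practice the signed record (or a pointer to it) would be bound to the content itself, which is what provenance standards aim to make interoperable across platforms.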
Celebrity Chatbots Are Compelling but Risk Prone
In multiple instances this year, AI companies sought to partner with celebrity or creator talent to create content experiences with conversational AI, usually by customizing apps or LLM-powered chatbots with talent voices or personas. Notable examples include Meta’s AI chatbot on Facebook, Instagram and WhatsApp, featuring options for five celebrity voices, including Awkwafina, John Cena and Judi Dench; and ElevenLabs’ Iconic Voices, offering voices of four famous deceased actors in its Reader app. These deals have likely been lucrative for talent and offered audiences personalized content at scale.
But as I argued in July, celebrity or character-based chatbots carry some reputational risk, as users engage with them in ways that a celebrity or rights holder can’t completely control, even when developers restrict dangerous outputs.
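To illustrate the kind of restriction developers typically apply, and why control is incomplete, here is a minimal sketch of a persona chatbot built on the OpenAI API: the licensed persona and its refusal rules live in a developer-controlled system prompt, while user inputs remain a free variable. The persona text, rules and model name are illustrative assumptions, not any named deal’s actual implementation.

```python
# Hedged sketch of a talent-persona chatbot: persona plus guardrail rules sit
# in the system prompt the developer controls. Illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA_GUARDRAILS = (
    "You speak in the voice of a licensed celebrity persona. Stay in character, "
    "but refuse medical, legal and financial advice, never discuss self-harm "
    "except to point to professional help, and never claim to be the real person."
)

def persona_reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONA_GUARDRAILS},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

Guardrails like these reduce off-brand outputs but can’t eliminate them, which is the residual risk celebrities and rights holders are weighing.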
Yet another risk of personified chatbots has surfaced more visibly and painfully this year: users becoming emotionally attached to them, owing to the human tendency to anthropomorphize computer systems. Such false connections have led some users, particularly young people, to take their own lives, spurring new lawsuits against AI companies such as Character.ai for distributing these experiences.
Other VIP+ 2024 lookbacks ...
• Tyler Aquilina revisits the media business’ first year in a post-streaming wars world
• Rob Steiner revisits 2024 in the creator economy — and whether it made its mark
• Kaare Eriksen revisits 2024 domestic box office — and which films didn’t measure up