Using AI to Create Key Art Posters for a Sci-Fi Film
PART SIX OF A MULTI-PART SERIES EXPLORING THE USE OF GENERATIVE AI IN KEY ART CREATION.
Hello, and welcome back to my multi-part exploration of how AI tools might be used to improve the Key Art production process.
If you need more context for what this series is about, click here.
Disclaimer: The methodologies I’m describing in this article were performed as a test to see how fully I could replicate a Key Art production process using AI and do not represent my official opinion on how all this should be done. I’m just having fun here :)
Last time, I used several different AI tools to create some versions of a logo for my fictional remake of Stanley Kubrick’s first film, Fear and Desire. You can check out how that went here.
Here are some additional items for better context about this overall project:
-The Key Art brief I originally generated: DOWNLOAD PDF
-Initial Key Art Concepts I made with MIDJOURNEY and DALL-E 3
All right!
So now the task ahead of me is to assemble all the pieces and parts I've generated into my final versions of key art. To review, these are the concepts I’m going with:
Teaser Key Art:
Isolated Helmet Reflection
Silent Tension
Premiere Key Art:
Faces of Desperation
Concepts I've killed since I last updated you, and why:
-Propaganda Style (the large amount of text in this one made using AI impossible; it would have to be laid out by hand)
-Portraits Against a Futuristic Dystopia (the images I generated were too weird)
-Dual Exposure (I couldn’t get it to generate anything good)
To see the original list of 20 Key Art concepts, click here.
For a Title Treatment, I’m using a logo I made with WordSwag, which was NOT created with generative AI. But honestly, the logos I generated from the various AI tools I used last time just weren’t right for a film. They lacked a cinematic style.
I could achieve a better style with Firefly or Midjourney, but then I ran into the issue that the text came out as gibberish unless I fed it one letter at a time.
So I figured that since I’m still operating under the constraint of using a logo made automatically with software rather than one made by hand, I’m at least still in the ballpark of having this key art be “robot-made.”
I went to assemble my character assets and tease art concepts together with the title treatment and billing block, and realized… I was missing tagline options!
Back to ChatGPT I went, where I got these options for taglines:
And friends, I apologize because after all that…the test fails here! 😬
When I grabbed all the pieces I’d assembled and fed them back into Midjourney and ChatGPT, asking them to assemble the layout for me, they kept generating new things that didn’t look anything like the reference assets. 🫠 (see screengrabs below)
I really didn’t want to come away from this part of my study without something that resembled final Key Art posters. So, I had to go directly into Photoshop to create layouts, add texture and hand-painted grit to the logo, add the billing block and tagline, arrange the images and text, and color correct everything. I also removed oddities from the generated images (for which I did use Generative Fill within Photoshop). All in all, I spent quite a while manipulating things to look decent enough to pass for a real film poster.

And honestly? I don’t see that human part of the process going away anytime soon. No matter how good the tech gets, you’re still going to need a person who fundamentally understands how to direct an art process and can manipulate the software to realize a specific vision, regardless of whether that software is Photoshop, Midjourney, DALL-E, Figma, or what have you.
See below for the FINAL images I’ve created. All in all, creating the Tease Art version was far more successful due to the nature of Tease Art (it’s meant to be a sneak peek, so you don’t always have to show key cast), and I was able to create more options. For the Premiere Art, which shows the cast members’ faces, I really only have one.
So what do you think? If you saw one of these, would it seem passable to you?
As I detailed in a previous article, I don’t think it’s viable to generate fake images of real people, even if the technology improves to the point of being more reliable. Yes, some live-action films have created CGI versions of actors for special situations…but passing off a bunch of computer-generated images as a real actor is problematic at best, for obvious reasons. No, I think we’ll still be working with real source photos for the foreseeable future, and the AI component will simply be a growing set of enhancements that let someone make a specific edit with greater ease.
I’m curious to repeat this experiment with different genres to see if, or how, it changes my opinion about the final result.
Anyways, y’all - this may SEEM like the end of the Key Art experiment, but our poster still has a loooooong journey ahead of it! We now need to proceed with the Finalization & Adaptation and Rollout & Integration phases, and I believe that’s where all the REAL gains from AI-assisted automation are to be had. So I’ll be back a little later to explore that. Bye for now!