Adobe Sneak Peeks

Arthur C. Clarke’s third law states that “any sufficiently advanced technology is indistinguishable from magic.” AI, especially the generative stuff, causes me to experience much magic, and Adobe’s upcoming software is enthusiastically waving its magic wand.

Here’s what has got me excited and why.

1. Rotate vector artwork.

The recent Adobe demo that left my jaw on the ground has the most underwhelming proposition: rotating vector artwork. I was initially confused about why this was such a big deal, until I saw Project Turntable rotate the perspective of flat 2D artwork as if it were a 3D model. If this doesn’t make sense, just watch the video.

Why is this a big deal?

We’ve long had software to create ‘tweened’ artwork between keyframes, but those in-between frames can only interpolate linearly valued properties, such as position, rotation, and size.
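
For context, here is a minimal sketch of how conventional tweening works, using a toy keyframe format of my own invention (none of these names are Adobe APIs). Every numeric property is simply interpolated between two keyframes, which is exactly why tweening can never show you the unseen side of a character.

```python
# Minimal sketch of conventional "tweening": linear interpolation of
# keyframe properties. All names here are illustrative, not Adobe APIs.

def lerp(a: float, b: float, t: float) -> float:
    """Linearly interpolate between a and b for t in [0, 1]."""
    return a + (b - a) * t

def tween(key_start: dict[str, float], key_end: dict[str, float],
          t: float) -> dict[str, float]:
    """Interpolate every numeric property between two keyframes."""
    return {prop: lerp(key_start[prop], key_end[prop], t)
            for prop in key_start}

# Two keyframes for a shape: position, rotation, and scale.
start = {"x": 0.0,   "y": 0.0,  "rotation": 0.0,  "scale": 1.0}
end   = {"x": 100.0, "y": 50.0, "rotation": 90.0, "scale": 2.0}

# Generate five in-between frames. Each property changes linearly.
# Nothing here can invent artwork that was never drawn.
frames = [tween(start, end, i / 4) for i in range(5)]
```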

I predict that eventually you will be able to draw one or two reference images of a character and tween between arbitrary camera angles without constraint. Vector character designs will become analogous to the T-pose used in 3D character design, from which any camera angle or pose can be created.

I wouldn’t be surprised if we eventually see this feature in Photoshop too.

Here's a video of the full demo.


2. Borrow another designer’s style.

One of the earliest AI demos from Adobe showed the generative creation of a variety of aspect-ratio layouts from a single original artwork. Project Remix A Lot shows us more of this, and it’s good to see it live and performing well.

The big-deal feature, though, is quietly dropped at the end of the demo, where an existing design is borrowed and applied to the basic artwork.

Start with your own very basic design and then upload work from another designer that you like.
Like magic, it's as if the other designer did your work for you.

Why is this a big deal?

Much has been said about how easily generative AI tools imitate an existing visual artist’s style: it’s as simple as mentioning the artist by name in a text prompt. Graphic designers are about to experience exactly the same issue, as everyone with an Adobe license gets a magic button that generates designs almost as if you had done the work for them.

Here's a video of the demo.


3. Harmonize.

Project Perfect Blend changes the lighting in multiple photos to help create a more realistic montage. This doesn’t sound mind-blowing, but here’s a before and after shot.

This source photo was taken in low light, but we want to place it in a sunny context.
The magic of the Harmonize button, coming to a Photoshop near you.

Why is this a big deal?

Combining elements from multiple photos into a single image sounds easy until you try it. Even if every photo is shot from the right angle, you end up with weird lighting inconsistencies. That is why a talented compositor is so valuable: they apply years of experience to tweak the colour adjustments of each photo until the composite feels harmonious.

As you can see from the demo, even if your source photo is partially obscured by darkness, what will likely become Photoshop’s new Harmonize button will do the magic of changing the lighting to whatever you require.
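
To make the difficulty concrete, here is a sketch of one classical technique that a compositor’s manual colour tweaks roughly approximate: a simplified, RGB-space version of Reinhard-style colour transfer, which shifts each channel of the source layer to match the statistics of the target scene. This is emphatically not Adobe’s Harmonize algorithm, just a baseline for comparison.

```python
# Classical approximation of manual harmonization: match the source
# layer's per-channel mean and standard deviation to the target scene.
# A simplified RGB take on Reinhard-style color transfer; Adobe's
# Harmonize goes much further and actually relights the subject.
import numpy as np

def match_statistics(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift and scale each color channel of `source` (H x W x 3,
    floats in [0, 1]) so its mean and std match `target`'s."""
    out = np.empty_like(source)
    for c in range(3):
        src, tgt = source[..., c], target[..., c]
        scale = tgt.std() / max(src.std(), 1e-6)  # avoid divide-by-zero
        out[..., c] = (src - src.mean()) * scale + tgt.mean()
    return np.clip(out, 0.0, 1.0)
```

Matching global statistics like this can nudge a colour cast into place, but it cannot invent sunlit highlights and shadows that were never captured. That relighting step is what makes Harmonize impressive.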

Watch a video of this new feature here.

4. Manual control.

A common thread throughout Adobe’s implementation of AI is providing creators with more control over the steps between the text prompt and the resulting generative content.

Project Scenic, Project in Motion, and Project HiFi all aim to increase the control for generative creators.

Why is this a big deal?

Professional creatives criticize existing generative tools for offering only limited control through a text prompt and a reference image. The demos above show we’re going to have far more control over the generative process, a trend that upstarts like BlendBox are also pushing.

5. Sound tracked.

Project Super Sonic creates generative audio inside the timeline of your video editing software.

Why is this a big deal?

I have a large collection of high-quality sound-effect libraries that I’ve accrued over the years, yet I still often grab a sound effect online and pay for it. I probably already own a sound effect that would do, but I’m paying for the convenience of the vendor’s tagging, which lets me text-search and preview relevant options.
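
As a rough illustration of why that tagging is worth paying for, here is a hypothetical sketch of the workflow: index a local sound-effect folder by filename keywords, then text-search it. The paths and naming scheme are made up for illustration.

```python
# Hypothetical sketch: index local sound-effect files by filename
# keywords, then text-search the index. Paths and names are invented.
from pathlib import Path

def build_index(library_root: str) -> dict[str, list[Path]]:
    """Map each lowercase filename keyword to the files containing it."""
    index: dict[str, list[Path]] = {}
    for path in Path(library_root).rglob("*.wav"):
        for word in path.stem.lower().replace("-", "_").split("_"):
            index.setdefault(word, []).append(path)
    return index

def search(index: dict[str, list[Path]], query: str) -> list[Path]:
    """Return files whose names contain every word in the query."""
    hits = [set(index.get(word, [])) for word in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []

# index = build_index("/path/to/SoundFX")   # hypothetical library path
# print(search(index, "door slam heavy"))   # candidate files to preview
```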

Thing #3 Hoss Has Learned states: “Reduce the cost of what you do often.” Being able to bring this process inside your NLE (non-linear editor) allows for faster iteration.

Comments

Tomos Gill: Thanks for sharing Hoss, nice summary. That first feature rotating the vector image is crazy! Where’s the ‘🤯’ Like button?

Dylan Welsh, PMP: Project Turntable made me giggle with glee!

Eric Dolecki: Do you mean using AI to create surfaces on a 2D vector that transforms it into 3D that you can manipulate like a model? Can that be imported into After Effects for rendering to video? Or into Premiere perhaps? Sounds excellent whatever it is. 3DSVG? :)
