AI can be ethical and deliver great results. Invisible Universe is focusing on exactly that, pairing AI and 3D talent to collaborate and produce high-quality animations.
#AIanimation #aivideo #ethicalai #aicharacters
For IP holders, character consistency isn't a nice-to-have but a must-have. Invisible Studio has built custom-trained, fully owned image-generation models for the company's suite of IP. Check out how it's working for Qai Qai! #AiCharacters #AiStorytelling #DigitalArt #KidsAnimation
Curious about how we achieved character consistency in AI-generated animation? 🤔
We've just published an in-depth article: https://lnkd.in/gUrmabxy
🎨 Learn how we built and trained custom image-generating models for our IP suite
🔍 Dive into the technical challenges of maintaining character consistency
🚀 Explore the advantages of owning our AI models
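For a concrete flavor of what a "custom image-generating model" for a character can mean in practice, here is a minimal, hypothetical sketch of one common approach: loading a character LoRA fine-tuned on reference art, using Hugging Face diffusers. The article does not confirm this exact stack; the base model id, LoRA path and prompt are placeholders, not Invisible Studio's actual pipeline.

```python
# A minimal, illustrative sketch (NOT Invisible Studio's actual pipeline):
# one common route to character consistency is fine-tuning a LoRA on a
# character's reference art, then loading it at inference time.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base model (assumption, placeholder)
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical LoRA fine-tuned on curated renders of the character
pipe.load_lora_weights("./character-lora")

image = pipe(
    "the character waving, friendly pose, studio lighting",
    num_inference_steps=30,
).images[0]
image.save("character_test.png")
```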
Character consistency is key in the world of IP 🔑 Check out IU Labs to see how we're arming ourselves with the tools to unlock our characters' creative potential!
Using custom-created imagery seems like the next step in the evolution of compositing: connecting ComfyUI to Foundry Nuke for a seamless experience. ComfyUI combined with Stable Diffusion provides a dynamic way to create content faster, using AI for backgrounds, textures and concept art and saving time on manual tasks. It also boosts creativity, letting us quickly experiment with new ideas and visual styles.
Francisco Contreras was one of the first to implement these ComfyUI nodes in Nuke, allowing compositors to build the same node networks inside Nuke. That is one of the many reasons it's so important to learn the basics of native ComfyUI networks. It lets us create unique elements like sky replacements or backgrounds directly in our compositing pipeline and test different AI models without switching software.
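For context, this is roughly what driving ComfyUI from outside its UI looks like: a minimal sketch using ComfyUI's built-in HTTP API (the same idea a host-app bridge builds on, NOT Francisco Contreras's actual Nuke plugin code). The workflow file and node id below are placeholders; it assumes a local ComfyUI server on the default port and a graph exported via "Save (API Format)".

```python
# Queue a sky-replacement workflow on a local ComfyUI server (sketch).
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI server address

with open("sky_replacement_workflow.json") as f:  # hypothetical exported graph
    workflow = json.load(f)

# Tweak a node's input, e.g. the positive prompt of a CLIPTextEncode node.
# Node id "6" is a placeholder; ids depend on your exported graph.
workflow["6"]["inputs"]["text"] = "dramatic sunset sky, volumetric clouds"

req = urllib.request.Request(
    f"{COMFY_URL}/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # returns a prompt_id for tracking the job
```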
Read the FULL article with Tim Riopelle:
https://lnkd.in/exA7v9gC
Not only will reference footage evolve into stylized AI filters; this also underscores the enduring importance of carefully planning your scenes. The tools may change, but thoughtful scene planning remains crucial.
A behind-the-scenes look at my process for creating this AI water video: On the left, I recorded myself performing all the actions and demonstrated some of my compositing techniques using masking and shape layers. On the right, you can see the final result after being generated in ComfyUI.
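For readers curious about the masking step, here is a tiny, generic sketch of mask-based compositing in Python with OpenCV, not the author's actual shape-layer setup in a compositing app; the filenames are placeholders.

```python
# Blend an AI-generated frame into the reference plate wherever a mask is white.
import cv2
import numpy as np

plate = cv2.imread("reference_frame.png").astype(np.float32)
ai_frame = cv2.imread("comfyui_output.png").astype(np.float32)
mask = cv2.imread("water_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Soften the mask edge so the AI element blends into the plate
mask = cv2.GaussianBlur(mask, (21, 21), 0)[..., None]  # add a channel axis

comp = ai_frame * mask + plate * (1.0 - mask)
cv2.imwrite("composite.png", comp.astype(np.uint8))
```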
creAItive dAIrection
A good example of where the journey is heading: generative #AI does not completely replace the creative process, but it shortens production, and of course it does replace other artists, since one person can now do the work alone. The results are still difficult to control compared to a workflow built on photo/film, 3D CGI/simulation and compositing, but it is incomparably faster, and creative direction will become more and more feasible in the near future.
But the question is: can you make a good living from results produced this quickly? I fear that AI encourages "pay by time" rather than "pay by value".
By the way, this text was translated with DeepL. 😎
I've developed this interactive and immersive video wall!
Real-time motion-tracked video wall:
Graphics and visuals detect and follow people's movement in real time, which lets you interact with them.
I can set up this interactive experience at clubs, theaters, live event venues, commercial facilities, restaurants/bars, video shooting studios and any public space, and the interactive video system can be integrated into projection mapping & LED video screens.
To remove the real-life background and replace it with a black background carrying the motion feedback in real time, I am using NVIDIA's AI-powered background remover, so we can set up this video wall anywhere we like.
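As a rough illustration of the tracking logic (the real piece is a TouchDesigner network, not this code), here is a conceptual Python/OpenCV sketch: derive a foreground mask, find each person's centroid, and draw visuals that follow it. The threshold step below is just a stand-in for the AI background remover.

```python
# Conceptual "visuals follow people" loop (sketch, not the actual installation).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Stand-in for the AI background remover: pretend this yields a clean
    # binary foreground mask of the people in frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    canvas = np.zeros_like(frame)  # the "black background" of the wall
    for c in contours:
        if cv2.contourArea(c) < 500:  # ignore noise blobs
            continue
        m = cv2.moments(c)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(canvas, (cx, cy), 40, (255, 255, 255), 2)  # visual follows person

    cv2.imshow("wall", canvas)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```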
Made with TouchDesigner. More interactive artworks coming soon!
Would you like to bring your dream interactive & immersive experience to life? Let me help you out!
#motiontracking #vfx #interactiveart #touchdesigner #realtimevfx #projectionmapping #ledscreen #mediaart #emergingtech #creativecoding #immersiveexperience #creativetechnology #innovation #futuretechnology #livevisuals #newmediaart #creativetechnologist #nycevents #ai #aiart
A quick test using LCM for multi-character composition with different styles applied.
LCM offers a quick turnaround at the cost of quality, but I think this is a good example of how multiple styles can be applied within the same composition, all within a single one-shot diffusion render.
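For anyone wanting to try LCM's speed/quality trade-off themselves, here is a rough sketch using Hugging Face diffusers' LCM-LoRA. This is a generic setup, not necessarily the exact pipeline behind this test; the prompt is a placeholder.

```python
# LCM trades sampling steps (and some quality) for speed: a handful of steps
# with low guidance instead of the usual 25-50.
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "two characters in one scene, one watercolor, one cel-shaded",
    num_inference_steps=4,   # LCM needs very few steps...
    guidance_scale=1.0,      # ...and low CFG, trading quality for speed
).images[0]
```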
#generativeai #multicharacteranimation
AI video generation tools are evolving beyond simple prompt interfaces into full-fledged studio suites, with character dialogue and lip-sync, face motion capture, canvas tools for precise composition, keyframe control for smooth animations, and more.
While we're still in a trial-and-error era of AI movie making, it's clear that these intuitive production tools in the hands of online creatives will spur a new wave of AI-generated video content.
Screenshots made in the new LTX Studio suite: https://ltx.studio/
🚀 Presenting ConvoFusion, our latest work towards controllable co-speech Gesture Synthesis to be presented in #CVPR24. Joint work with Muhammad Hamza Mughal, Marc Habermann, Lucia Donatelli, Christian Theobalt and Ikhsanul Habibie.
Project page: https://lnkd.in/dPZc2aqu
Generating human gestures is hard! While X-to-motion research has come a long way, the uncanny valley for gesture motion is painfully wide to cross in one leap. What makes it wider still is that gestures are extremely person-specific. Consequently, most existing works inadvertently learn to correlate gestures only with the 'beats' in the speech signal. Moreover, very little research has focused on multi-party conversational gestures.
🔍Our contributions
1. Word-level control:
If we allow for word-level 'controllability', we can offload many of the ambiguities to the discretion of the animator (see the sketch after this list).
2. Dungeons and Dragons:
Isn't that a game where everyone roleplays in a virtual world? Yes! Naturally, these conversations are loaded with gestures. We captured experienced DnD players in a mocap setup playing the game for hours. This unique dataset offers a rich source of natural and diverse multi-party gestures :-).
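To make the word-level control idea concrete, here is a deliberately simplified sketch (NOT the ConvoFusion architecture): the animator selects which words may influence the motion, and unselected word embeddings are masked out before conditioning the generator. All names and shapes are illustrative.

```python
# Illustrative word-level conditioning: only animator-selected words
# contribute to the conditioning signal of the motion generator.
import torch

def word_level_condition(word_emb: torch.Tensor, selected: torch.Tensor) -> torch.Tensor:
    """word_emb: (T_words, D) per-word embeddings; selected: (T_words,) 0/1 mask."""
    return word_emb * selected[:, None]  # unselected words contribute nothing

words = ["the", "huge", "dragon", "swoops", "down"]
emb = torch.randn(len(words), 256)            # stand-in word embeddings
sel = torch.tensor([0, 1, 1, 1, 0]).float()   # animator emphasizes "huge dragon swoops"
cond = word_level_condition(emb, sel)         # feed this to the motion generator
```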
We invite the community to explore our work and the dataset (already released).