Missed our M-body.ai webinar? No worries! The recording is now available online! 🎥 On July 23, our expert panel introduced M-body.ai and delved into some fascinating aspects of the project, including:
🎯 Project goals, deliverables, and team
🏃‍♂️ Our human motion datasets: from skeletal body animation to facial capture and beyond
🛠️ Capture methodology and ML research challenges
🧠 Software tools to support generative animation
📜 Open-source licensing and how to get involved
Whether you’re a production studio pro, researcher, developer, or animator, this webinar is packed with insights you won’t want to miss! 🎬 Watch the recording now: https://lnkd.in/eFC5aSuS
M-body AI’s Post
More Relevant Posts
-
We’re excited to share a special course review video from Natanael Fonseca Loaiciga, one of our experience officers who participated in the 4th TouchDesigner x ComfyUI Creative Workshop. 🎉 In this video, Nathan takes you through his journey, offering a firsthand perspective on the course structure, hands-on sessions, and how the MediaPipe workflows and naked-eye 3D AI animations unlocked new creative possibilities. 📹 Watch Nathan’s full review to discover why this workshop has been a game-changer for digital art enthusiasts! Thank you, Natanael Fonseca Loaiciga, for sharing your thoughts and for being an integral part of this creative journey. #TouchDesigner #ComfyUI #AIAnimation #DigitalArt #TEACommunity
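For readers curious what a "MediaPipe workflow" feeding TouchDesigner might involve, here is a deliberately tiny, hypothetical sketch (not the workshop's actual network): MediaPipe-style normalized pose landmarks are flattened into named channel values in the -1..1 range, the shape TouchDesigner CHOPs typically work with. The landmark names and ranges are illustrative assumptions.

```python
def landmarks_to_channels(landmarks):
    """Flatten {name: (x, y)} landmarks (normalized 0..1, screen space)
    into flat, TouchDesigner-friendly channels in the -1..1 range."""
    channels = {}
    for name, (x, y) in landmarks.items():
        channels[f"{name}:tx"] = x * 2.0 - 1.0   # center horizontally
        channels[f"{name}:ty"] = 1.0 - y * 2.0   # flip y (screen down -> world up)
    return channels

pose = {"nose": (0.5, 0.5), "left_wrist": (0.25, 0.75)}
print(landmarks_to_channels(pose))
```

In a real setup the landmarks would come from MediaPipe's pose tracker each frame, and the channels would drive geometry or instancing inside TouchDesigner.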
-
Check out the new paper, "UniTalker: Scaling up Audio-Driven 3D Facial Animation through A Unified Model." This research introduces UniTalker, a unified model designed to enhance the scalability and realism of audio-driven 3D facial animation, with notable gains in the expressiveness and synchronization of facial movements. The paper provides a detailed analysis of the techniques used, making it a valuable resource for developers and researchers in AI and computer graphics. For anyone interested in the technical aspects and practical applications of 3D facial animation, this paper is essential reading. Read the full paper here: https://lnkd.in/g8CEYGjV #AI #MachineLearning #ComputerGraphics #DataScience #Research #Innovation #3DFacialAnimation #UniTalker
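To make the core idea concrete, here is a deliberately minimal illustration of what "audio-driven facial animation" means at its simplest (this is *not* UniTalker's architecture, which uses learned models): map an audio window's amplitude envelope to a jaw-open blendshape weight. The function name and gain value are invented for this sketch; real systems learn the audio-to-blendshape mapping with deep networks over richer features.

```python
def jaw_open_weight(samples, gain=4.0):
    """Map one window of audio samples (values in -1..1) to a 0..1
    jaw-open blendshape weight using the window's RMS energy."""
    if not samples:
        return 0.0
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return min(1.0, rms * gain)  # clamp so the jaw never over-opens

silence = [0.0] * 256
loud = [0.5, -0.5] * 128
print(jaw_open_weight(silence), jaw_open_weight(loud))
```

A learned model replaces this hand-tuned mapping and predicts many blendshape (or vertex) targets at once, which is where the synchronization and expressiveness gains come from.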
-
So, I've been diving back into technology, and I must say, experimenting with new tools every day is both exciting and empowering. Remember the days of editing in Adobe Flash? If you know, you know—yeah, that was back in 2005. The tools we had then are nothing like what we have now. AI has completely changed the game, turning the impossible into possible. Recently, I've been exploring the capabilities of the Sora AI animation tool, and I figured it out in just a few hours! I encourage everyone to participate and share your creativity to enhance the new economic system. The future is now! This post is part of the design competition initiated by DAO Proptech. Any followers who want to join in can tag me and Ansar Moughis. #AIChangesEverything #DesignCompetition #FutureIsNow #TokenizationRWA #DLT #TechRevolution
-
It's review o'clock. My review: With the help of your feedback, I get a sense of "You can be more creative," and I'm already anticipating seeing my growth by the end of the course. My review: As demanding as it is, it has been great all along, bringing together focus, patience, creativity, and an innovative perspective, and I know that in time I'm rising to become an AI image-generation guru. My review: I'm in awe of what I'm being exposed to in this course; it's an unlocked mystery that opens up the beauty of life. Purchase the AI Art and 3D Cartoon Animation course for N3,000 and get certified.
-
Experimenting with Bezi to prototype some of my spatial interactions on-device. As with my 3D animation explorations, I wanted to keep these short and quick while learning new prototyping techniques. The goal was to quickly experiment with different inputs, movements, and behaviors for interacting with 3D content in space. Bezi provides an intuitive set of prototyping features that require no coding. The more I worked on these experiments, the more ways I found to creatively use each feature to build more complex interactions. Even though they're not perfectly refined, these quick prototypes really help test and communicate potential experiences in spatial computing beyond 2D screens. It's been fun using these quick explorations to learn new immersive tools and techniques. 🙂 #xr #spatialcomputing #bezi #design
-
See WHAT'S NEW in Reallusion iClone 8.52. Lots of updates for professional animation editing, including AccuPOSE for natural 3D posing. It's an AI-powered innovation utilizing deep learning models trained on ActorCore's extensive motion database. Find out more at befores & afters in this latest #vfxinsight story. https://lnkd.in/e5se2g-m
-
SIMS: Simulating Human-Scene Interactions with Real World Script Planning. Simulating long-term human-scene interaction is a complex yet captivating challenge, especially when it comes to generating realistic, physics-based animations with detailed narratives. Some important keywords before reading:
- Human-scene interaction
- Time-series behaviors
- Dual-aware policy
- Kinematic datasets
This study introduces a groundbreaking framework combining:
- LLMs for logical storyline creation and script planning, inspired by rich data from films and videos.
- A dual-aware policy to guide character motions within spatial and contextual constraints, ensuring plausible interactions.
To achieve this, the framework includes:
1. Comprehensive planning datasets featuring diverse motion sequences from real-world and annotated kinematic datasets.
2. Advanced training mechanisms for versatile task execution in dynamic scenarios.
Extensive experiments show significant improvements over traditional methods, marking a leap forward for animation and interaction modeling.
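The two-stage structure described above (an LLM plans a script, then a constraint-aware policy executes it) can be sketched with stubs. Everything here is a hypothetical illustration: the function names, the scene objects, and the "reachable" constraint are invented for this toy and are not from the SIMS paper, where the planner is an actual LLM and the policy is a learned, physics-based controller.

```python
def plan_script(scene_objects):
    """Stub for the LLM planner: produce an ordered action script
    from the objects present in the scene."""
    return [("walk_to", obj) for obj in scene_objects] + [("sit", "sofa")]

def execute(script, reachable):
    """Stub for a 'dual-aware' policy: run only the actions whose target
    satisfies the spatial constraint (is reachable); skip implausible ones."""
    executed = []
    for action, target in script:
        if target in reachable:
            executed.append((action, target))
    return executed

script = plan_script(["desk", "sofa"])
print(execute(script, reachable={"sofa"}))
```

The point of the split is that the planner reasons about *what* should happen in the story, while the policy decides *whether and how* each step can physically happen in the scene.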
-
Genie 2: AI for generating worlds from a single picture. Came across an interesting report from the developers of Genie 2. No access yet, but they claim some very interesting features (not all of them, just the ones that caught my eye):
1. Creating a variety of interactive 3D worlds from a single image.
2. Action control: you can drive the character's movement from the keyboard (with the usual WASD).
3. Physical simulation: the video convincingly simulates water, light reflections, etc. But we saw that in Sora (https://lnkd.in/eEBCVGUr) too 🤡 🤡
4. Character animation.
The idea of the tool is a sound one: it would be great to quickly conceptualize and visualize mechanics during the design phase of a project. The sad part: we'll probably be waiting another year for a less capable public version 🥲 Read the original: https://lnkd.in/gu4xwW7s #aicorner #research #genie2
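As a side note on what "WASD action control" implies, here is a toy sketch of the mapping such a system exposes (Genie 2's real interface is not public, so this is purely illustrative): keystrokes become action vectors that step an agent's position, and in a world model those actions would condition the next generated frame.

```python
# Illustrative key-to-action mapping on a 2D grid (x right, y forward).
MOVES = {"w": (0, 1), "s": (0, -1), "a": (-1, 0), "d": (1, 0)}

def step(pos, key):
    """Advance a 2D position by one keypress; unknown keys are no-ops."""
    dx, dy = MOVES.get(key, (0, 0))
    return (pos[0] + dx, pos[1] + dy)

pos = (0, 0)
for key in "wwd":          # forward, forward, right
    pos = step(pos, key)
print(pos)
```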