It's not too late to sign up for CG Spotlight #6! Join us at Autodesk tomorrow, January 17th!

Montréal ACM SIGGRAPH and Autodesk are very pleased to invite you to the sixth edition of CG Spotlight! Join us for an evening of presentations and networking with the Montreal multimedia community!

During this sixth edition, we will present the following talks (all talks will be given in English):

Moment Factory - Guillaume Borgomano, Prompt to Anything: Exploring Generative AI's Iterative Potential in Mapping Show Production
Frontier VFX - Martin Lipmann, Making the VFX of CHICKEN RUN: DAWN OF THE NUGGET
Université de Montréal (University of Montreal) - Noam Aigerman, Manipulating Geometry Using Machine Learning
Roblox - Myriam Beauvais, Principles of the Roblox Engine

Doors open at 6:30pm and talks will start at 7pm. Snacks and drinks will be served during the event. We are looking forward to seeing you there!
Montréal ACM SIGGRAPH’s Post
More Relevant Posts
-
💡 Step into the future of game art during the Autodesk Developer Summit at GDC 2024! Sloyd will unveil how 3D AI is accelerating game asset creation and tackling tough industry challenges such as: maintaining top-notch topology quality, optimizing inference time, and addressing copyright concerns. Register here: https://autode.sk/gdc-2024
Autodesk at GDC 2024
autodesk.com
-
Introducing SceneScript

Today, Meta #RealityLabs Research is announcing #SceneScript, a novel method of generating scene layouts and representing scenes using language. Rather than using hard-coded rules to convert raw visual data into an approximation of a room's architectural elements, SceneScript is trained to directly infer a room's geometry using end-to-end machine learning. This results in a representation of physical scenes which is #compact, reducing memory requirements to only a few bytes; #complete, resulting in crisp geometry, similar to scalable vector graphics; and importantly, #interpretable, meaning that we can easily read and edit those representations. https://lnkd.in/gs-tVEdH
Introducing SceneScript, a novel approach for 3D scene reconstruction
ai.meta.com
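The appeal of this kind of representation is easiest to see with a toy example. Below is a minimal Python sketch of a SceneScript-style scene encoded as structured commands; the command names and parameters (make_wall, make_door, make_window, and their fields) are hypothetical placeholders rather than the model's actual output vocabulary.

```python
from dataclasses import dataclass

@dataclass
class Command:
    name: str     # e.g. "make_wall", "make_door" (illustrative names)
    params: dict  # geometric parameters, here in meters

# A small room described as a handful of commands instead of a dense mesh:
scene = [
    Command("make_wall",   {"x0": 0.0, "y0": 0.0, "x1": 4.0, "y1": 0.0, "height": 2.7}),
    Command("make_wall",   {"x0": 4.0, "y0": 0.0, "x1": 4.0, "y1": 3.0, "height": 2.7}),
    Command("make_door",   {"wall": 0, "offset": 1.2, "width": 0.9, "height": 2.0}),
    Command("make_window", {"wall": 1, "offset": 0.8, "width": 1.2, "height": 1.1}),
]

# Compact (a few numbers per element) and trivially editable, e.g. widen the door:
scene[2].params["width"] = 1.0

for cmd in scene:
    print(cmd.name, cmd.params)
```

Because the whole scene is just this short command list, storing it takes only a handful of numbers and editing it is as simple as changing a parameter, which is exactly the compactness and interpretability the post highlights.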
-
📣 New research: Meta 3D Gen delivers text-to-3D generation with high-quality geometry and textures. This system can generate 3D assets with high-resolution textures & material maps end-to-end with results that are superior in quality to previous state-of-the-art solutions — at 3-10x the speed of previous work. Details in the paper ➡️ https://lnkd.in/d3PCJxZV
Meta 3D Gen
ai.meta.com
-
🚀At twinzo, innovation never sleeps! 💡 We're thrilled to share some highlights from our recent demo day where we showcased the cutting-edge advancements in our #technology. 🎉

🔍Exploring #3D: During our R&D sessions, we delved into the most recent advancements in 3D technology. One area of focus was Gaussian Splatting and light probes – innovative techniques that promise to revolutionize the way we create with #digitaltwins.

💡Not Quite Ready for Prime Time: While these features aren't yet ready for production release, we're incredibly excited about the potential they hold. Our team is hard at work refining and perfecting these technologies, ensuring they meet our high standards of quality and #performance.

#gaussiansplatting Gaussian splatting combines Radiance Field methods with 3D Gaussians to improve visual quality in real time. Starting from the sparse points produced during camera calibration, the method represents scenes while preserving continuous volumetric #radiancefields. It also introduces interleaved optimization/density control of the 3D Gaussians and a rapid rendering #algorithm supporting anisotropic splatting. This technique shows promise for near-instant generation of high-quality 3D scenes from 2D images.

#lightprobes Light probes are vital tools in #computer graphics for accurately capturing and replicating lighting conditions in virtual environments. Placed strategically, they gather #data on light intensity, color, and directionality from different angles, enabling the creation of realistic lighting effects and reflections. This enhances the visual #quality of virtual worlds, making them more immersive and visually stunning in applications like video #games and #virtualreality simulations.

Stay tuned as we continue to push the limits of #innovation and bring you the future of digital twinning! 🚀💫 #digitaltwin #industry40 #digitalization #future
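For readers wondering what "sampling a light probe" actually looks like in code, here is a minimal sketch using one common encoding: diffuse irradiance stored as 2nd-order spherical harmonics, following the classic Ramamoorthi-Hanrahan formulation. The coefficient values are random placeholders (real probes are baked from captured environment data), and twinzo's actual implementation may differ.

```python
import numpy as np

# A light probe stored as 9 spherical-harmonic coefficients per color channel,
# ordered L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22. Values here are random
# placeholders; in practice they are projected from an environment capture
# taken at the probe's position.
sh = np.random.default_rng(0).uniform(0.0, 0.5, size=(9, 3))

def irradiance(n, sh):
    """Diffuse irradiance arriving at a surface with unit normal n."""
    x, y, z = n
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    return (c4 * sh[0]
            + 2.0 * c2 * (sh[3] * x + sh[1] * y + sh[2] * z)
            + c1 * sh[8] * (x * x - y * y)
            + c3 * sh[6] * z * z - c5 * sh[6]
            + 2.0 * c1 * (sh[4] * x * y + sh[7] * x * z + sh[5] * y * z))

n = np.array([0.0, 0.0, 1.0])  # surface facing straight up (z-up convention)
print(irradiance(n, sh))       # per-channel RGB irradiance
```

In an engine, probes placed around the environment are blended (for example trilinearly within a probe grid) and this evaluation runs per shaded point, which is what gives dynamic objects plausible ambient lighting and reflections.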
-
It has been a couple of weeks already since SIGGRAPH 2024 and I am still thinking about the experience! It was actually my college's SIGGRAPH student chapter that originally got me interested in computer graphics as a field. Despite being heavily involved in the club for all four years, I never attended the conference myself. And I really didn't know what I was missing until now.

SIGGRAPH is a great conference because of its variety: a mixture of engineers and artists meeting to share techniques and advances in real-time rendering, production rendering, VFX, scientific visualization, and more. Here are my favorite sessions from a few of those categories:

Research Papers
- VR, Eye Tracking, and Perception: some great talks on optimizing rendering for VR as well as optimizing the accuracy of motion perception
- NeRFs and Lighting: NeRF and Gaussian Splatting are all the rage, and this session was a good look at the direction these techniques are moving
- Simulation: fluid simulations are getting really good, often by using hybrid techniques

Production Rendering
- TMNT production session: the stylized motion blur technique was really creative

Games and Optimization
- Moving Mobile Graphics: Sebastian Aaltonen's deep-dive into HypeHype's hyper-optimizations was very informative

Thanks to Samsung for sending me out to my first SIGGRAPH. I learned a lot and my passion for computer graphics was reinvigorated. Thanks also to my SIGGRAPH@UIUC friends -- it was nice to catch up and see how everyone was. Now that I've had a taste, I can't wait for SIGGRAPH 2025 in Vancouver! #siggraph2024 #WeAreSarcAcl
-
Implement these cloth physics in AR with photorealistic textures and it would give AR spaces a good pinch of realism. 🌐 https://lnkd.in/dcViXt-V

PhysAvatar, a new framework, combines inverse rendering with inverse physics to automatically estimate human shape and appearance along with fabric physical parameters from multi-view video data. PhysAvatar uses a mesh-aligned 4D Gaussian technique for mesh tracking and a physics-based renderer for material property estimation, achieving high accuracy. With a physics simulator in the loop, it optimizes garment physical parameters via gradient-based optimization for realistic cloth simulation. This enables PhysAvatar to generate high-quality renderings of avatars in loose-fitting clothes, even under unseen motions and lighting conditions. #gaussian #physics #3d
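To give a feel for the "inverse physics" part (estimating fabric parameters so a simulation matches observations), here is a deliberately tiny Python stand-in: recovering a single spring stiffness by gradient-based optimization against an observed trajectory. It is a toy illustration of the pattern, not PhysAvatar's cloth model or code.

```python
import numpy as np

def simulate(k, steps=200, dt=0.01, x0=1.0, v0=0.0, damping=0.2):
    """Damped harmonic oscillator: unit mass on a spring of stiffness k."""
    x, v = x0, v0
    traj = []
    for _ in range(steps):
        a = -k * x - damping * v   # spring + damping force
        v += a * dt                # semi-implicit Euler integration
        x += v * dt
        traj.append(x)
    return np.array(traj)

target = simulate(k=5.0)   # "observed" motion; ground-truth stiffness is 5.0

k, lr, eps = 1.0, 2.0, 1e-4
for _ in range(500):
    loss = np.mean((simulate(k) - target) ** 2)
    # Finite-difference gradient of the loss w.r.t. the physical parameter.
    grad = (np.mean((simulate(k + eps) - target) ** 2) - loss) / eps
    k -= lr * grad

print(f"recovered stiffness k = {k:.3f}")  # should approach 5.0
```

PhysAvatar's real setup replaces the oscillator with a cloth simulator, the 1D trajectory with mesh-tracked garment motion from multi-view video, and the single stiffness with per-fabric material parameters, but the optimize-until-the-simulation-matches loop is the same idea.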
-
🚨SIGGRAPH Asia 2024 Paper Alert 🚨

➡️Paper Title: MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting

🌟Few pointers from the paper

🎯Crafting a single, versatile physics-based controller that can breathe life into interactive characters across a wide spectrum of scenarios represents an exciting frontier in character animation. An ideal controller should support diverse control modalities, such as sparse target keyframes, text instructions, and scene information.

🎯While previous works have proposed physically simulated, scene-aware control models, these systems have predominantly focused on developing controllers that each specialize in a narrow set of tasks and control modalities. This work presents "MaskedMimic", a novel approach that formulates physics-based character control as a general motion inpainting problem.

🎯The authors' key insight is to train a single unified model to synthesize motions from partial (masked) motion descriptions, such as masked keyframes, objects, text descriptions, or any combination thereof. This is achieved by leveraging motion tracking data and designing a scalable training method that can effectively utilize diverse motion descriptions to produce coherent animations.

🎯Through this process, their approach learns a physics-based controller that provides an intuitive control interface without requiring tedious reward engineering for all behaviors of interest. The resulting controller supports a wide range of control modalities and enables seamless transitions between disparate tasks.

🎯By unifying character control through motion inpainting, MaskedMimic creates versatile virtual characters. These characters can dynamically adapt to complex scenes and compose diverse motions on demand, enabling more interactive and immersive experiences.

🏢Organization: NVIDIA

🧙Paper Authors: Chen Tessler, Yunrong Guo, Ofir Nabati, Gal Chechik, Xue Bin Peng

1️⃣Read the Full Paper here: https://lnkd.in/e5rq6ST8
2️⃣Project Page: https://lnkd.in/e_if6spn
3️⃣Code: Coming 🔜

🎥 Be sure to watch the attached Technical Summary - Sound on 🔊🔊

Find this Valuable 💎 ? ♻️REPOST and teach your network something new

Follow me 👣, Naveen Manwani, for the latest updates on Tech and AI-related news, insightful research papers, and exciting announcements.

#SIGGRAPHAsia2024 #motiontracking #reinforcementlearning
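A rough sketch of what "training on masked motion descriptions" can look like in practice: each sample carries several optional conditioning signals, and a random subset is dropped so a single model learns to cope with whatever partial description it receives at inference time. The field names and masking scheme below are illustrative assumptions, not MaskedMimic's actual data layout.

```python
import random

def mask_conditions(sample, keep_prob=0.5, rng=random):
    """Randomly drop conditioning modalities; the target motion is always kept."""
    masked = {"motion": sample["motion"]}
    for key in ("keyframes", "text", "objects"):
        masked[key] = sample[key] if rng.random() < keep_prob else None
    return masked

sample = {
    "motion": "full-body joint trajectory",          # prediction target
    "keyframes": ["frame 0 pose", "frame 60 pose"],  # sparse target keyframes
    "text": "walk to the chair and sit down",        # text instruction
    "objects": ["chair at (1.0, 0.0, 2.0)"],         # scene information
}
print(mask_conditions(sample))
```

At test time the same model can then be driven by any subset of these signals (only text, only a goal keyframe, text plus an object, and so on), which is what the post means by supporting diverse control modalities without task-specific controllers.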
-
🚨I3D 2024 Paper Alert 🚨

➡️Paper Title: FaceFolds: Meshed Radiance Manifolds for Efficient Volumetric Rendering of Dynamic Faces

🌟Few pointers from the paper

🗿In this paper the authors present a novel representation that enables high-quality volumetric rendering of an actor's dynamic facial performances with minimal compute and memory footprint.

🗿It runs natively on commodity graphics software and hardware, and allows for a graceful trade-off between quality and efficiency.

🗿Their method utilizes recent advances in neural rendering, particularly learning discrete radiance manifolds to sparsely sample the scene to model volumetric effects.

🗿They achieve efficient modeling by learning a single set of manifolds for the entire dynamic sequence, while implicitly modeling appearance changes as a temporal canonical texture.

🗿They export a single layered mesh and a view-independent RGBA texture video that is compatible with legacy graphics renderers without additional ML integration.

🗿The authors demonstrate their method by rendering dynamic face captures of real actors in a game engine, at photorealism comparable to state-of-the-art neural rendering techniques and at previously unseen frame rates.

🏢Organization: Google, Massachusetts Institute of Technology, ETH Zürich

🧙Paper Authors: Safa C. Medin, Gengyan Li, Ruofei Du, Stephan Garbin, Philip Davidson, Gregory W. Wornell, Thabo Beeler, Abhimitra Meka

1️⃣Read the Full Paper here: https://lnkd.in/g7fy4Z2X
2️⃣Project Page: https://lnkd.in/g5bKrx_8

🎥 Be sure to watch the attached Video - Sound on 🔊🔊

Find this Valuable 💎 ? ♻️REPOST and teach your network something new

Follow me 👣, Naveen Manwani, for the latest updates on Tech and AI-related news, insightful research papers, and exciting announcements.

#I3D2024 #ar #vr #Ml #rendering #FaceModeling
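One reason a layered mesh with an RGBA texture video can run on legacy renderers is that layered RGBA geometry composites with ordinary back-to-front alpha blending, with no neural network in the rendering loop. The sketch below shows that compositing step in NumPy on placeholder 2x2 "layers"; a real renderer would instead rasterize the textured mesh layers every frame, and the paper's exact rendering path may differ.

```python
import numpy as np

def over(dst_rgb, src_rgb, src_a):
    """Standard 'over' operator for a straight-alpha (non-premultiplied) source."""
    return src_rgb * src_a + dst_rgb * (1.0 - src_a)

background = np.zeros((2, 2, 3))
layers = [  # back to front: (rgb image, alpha image)
    (np.full((2, 2, 3), 0.8), np.full((2, 2), 0.9)),  # mostly opaque back layer
    (np.full((2, 2, 3), 0.2), np.full((2, 2), 0.4)),  # translucent front layer
]

image = background
for rgb, alpha in layers:
    image = over(image, rgb, alpha[..., None])

print(image[0, 0])  # composited color of one pixel
```

Since alpha-blended layer rendering is built into every game engine and GPU rasterizer, assets like this play back at ordinary real-time frame rates, which is the compatibility the authors emphasize.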
-
🚀 Enhancing Design Excellence Through AI and Parametric Integration ✨

I have been exploring how AI tools can refine parametric design and rendering workflows. Using 3ds Max for advanced modeling, combined with the V-Ray and Corona render engines, I have achieved higher levels of visual realism and workflow efficiency. Incorporating AI for post-render adjustments demonstrates the powerful synergy between parametric design and artificial intelligence, streamlining the creative process and elevating the final output.

#AIinDesign #ParametricDesign #3dsMax #Vray #CoronaRender #Architecture #DigitalInnovation #ai #computationaldesign #mixeduse #aiinarchitecture
-
✨ RadSplat: Pioneering High-Speed 3D Rendering with Radiance Fields and Gaussian Splatting ✨

💡 Introduction: Introducing RadSplat, an innovative rendering system that utilizes radiance fields and Gaussian splatting to deliver real-time, high-quality view synthesis for large-scale scenes. This breakthrough achieves a staggering 900+ FPS, revolutionizing the efficiency of 3D rendering without compromising on visual fidelity.

⚙️ Main Features: RadSplat stands out with its novel pruning technique, reducing point count while preserving scene quality, and a test-time filtering approach that accelerates rendering for larger scenes. It combines neural fields with point-based scene representations, offering smaller model sizes and faster rendering. RadSplat's use of the state-of-the-art radiance field Zip-NeRF as a prior and supervision signal ensures stable optimization and exceptional reconstruction quality.

📖 Case Study or Example: Tested on the MipNeRF360 dataset, RadSplat demonstrated its prowess in rendering complex indoor and outdoor scenes with high-frequency texture details. It outperformed existing methods, including 3DGS and Zip-NeRF, in both quality and speed, showcasing its potential in applications like human and avatar reconstruction, and SLAM systems.

❤️ Importance and Benefits: RadSplat's importance lies in its ability to balance speed and quality in real-time 3D rendering, a critical aspect for industries such as gaming, virtual reality, and architectural visualization. Benefits include a significant reduction in storage costs, seamless integration with graphics software, and robust handling of real-world captures with variable lighting and exposure.

🚀 Future Directions: While RadSplat has set new benchmarks in rendering speed and quality, future research will focus on further reducing training times and bridging the performance gap in large-scale scenes. The goal is to push the boundaries of real-time rendering and unlock new possibilities in visual computing.

📢 Call to Action: Dive deeper into the world of high-speed 3D rendering with RadSplat and discover how it's shaping the future of visual experiences. For a more comprehensive understanding, visit the full research paper at [https://lnkd.in/ev_9RFyJ]

#RadSplat #3DRendering #RadianceFields #GaussianSplatting #RealTimeRendering #ComputerGraphics #Innovation #TechTrends
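To make the pruning idea a bit more tangible, here is a simplified NumPy sketch: score every Gaussian by how much it ever contributes across the training views, then drop the ones that never matter. The importance definition, threshold, and synthetic data below are assumptions for illustration, not RadSplat's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
num_gaussians, num_views = 10_000, 8

# contribution[i, v]: peak alpha-blend weight Gaussian i reached in view v.
# In a real system this would be recorded by the rasterizer during training
# renders; here it is synthetic data skewed toward small values.
contribution = rng.beta(0.1, 10.0, size=(num_gaussians, num_views))

importance = contribution.max(axis=1)   # peak contribution over all views
keep = importance > 0.01                # prune Gaussians that never contribute

print(f"kept {keep.sum()} of {num_gaussians} Gaussians ({100.0 * keep.mean():.1f}%)")
```

Because pruned points are simply deleted, the remaining model is smaller to store and faster to splat, which contributes to the reductions in model size and rendering cost the post mentions.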