It's not too late to sign up for CG Spotlight #6! Join us at Autodesk tomorrow, January 17th!

Montréal ACM SIGGRAPH and Autodesk are very pleased to invite you to the sixth edition of CG Spotlight! Join us for an evening of presentations and networking with the Montreal multimedia community!

During this sixth edition, we will present the following talks (all talks will be given in English):
- Moment Factory - Guillaume Borgomano, Prompt to Anything: Exploring Generative AI's Iterative Potential in Mapping Show Production
- Frontier VFX - Martin Lipmann, Making the VFX of CHICKEN RUN: DAWN OF THE NUGGET
- Université de Montréal (University of Montreal) - Noam Aigerman, Manipulating Geometry Using Machine Learning
- Roblox - Myriam Beauvais, Principles of the Roblox Engine

Doors open at 6:30pm and talks will start at 7pm. Snacks and drinks will be served during the event. We are looking forward to seeing you there!
Montréal ACM SIGGRAPH’s Post
More Relevant Posts
-
💡 Step into the future of game art during the Autodesk Developer Summit at GDC 2024! Sloyd will unveil how 3D AI is accelerating game asset creation and tackling tough industry challenges such as maintaining top-notch topology quality, optimizing inference time, and addressing copyright concerns. Register here: https://autode.sk/gdc-2024
Autodesk at GDC 2024
autodesk.com
-
📣 New research: Meta 3D Gen delivers text-to-3D generation with high-quality geometry and textures. This system can generate 3D assets with high-resolution textures & material maps end-to-end, with results superior in quality to previous state-of-the-art solutions at 3-10x the speed. Details in the paper ➡️ https://lnkd.in/d3PCJxZV
Meta 3D Gen
ai.meta.com
-
It has been a couple of weeks already since SIGGRAPH 2024 and I am still thinking about the experience! It was actually my college's SIGGRAPH student chapter that originally got me interested in computer graphics as a field. Despite being heavily involved in the club for all four years, I never attended the conference myself. And I really didn't know what I was missing until now.

SIGGRAPH is a great conference because of its variety: a mix of engineers and artists meeting to share techniques and advances in real-time rendering, production rendering, VFX, scientific visualization, and more. Here are my favorite sessions from a few of those categories:

Research Papers
- VR, Eye Tracking, and Perception: some great talks on optimizing rendering for VR as well as optimizing the accuracy of motion perception
- NeRFs and Lighting: NeRF and Gaussian splatting are all the rage, and this talk was a good look at the direction these techniques are moving
- Simulation: fluid simulations are getting really good, often by using hybrid techniques

Production Rendering
- TMNT production session: the stylized motion blur technique was really creative

Games and Optimization
- Moving Mobile Graphics: Sebastian Aaltonen's deep dive into HypeHype's hyper-optimizations was very informative

Thanks to Samsung for sending me out to my first SIGGRAPH. I learned a lot and my passion for computer graphics was reinvigorated. Thanks also to my SIGGRAPH@UIUC friends -- it was nice to catch up and see how everyone was doing. Now that I've had a taste, I can't wait for SIGGRAPH 2025 in Vancouver! #siggraph2024 #WeAreSarcAcl
-
🚀 Enhancing Design Excellence Through AI and Parametric Integration ✨ I have been exploring the integration of AI tools to refine parametric design and rendering processes. Using 3ds Max for advanced modeling, combined with the V-Ray and Corona render engines, I have achieved greater visual realism and workflow efficiency. Incorporating AI for post-render adjustments demonstrates the powerful synergy between parametric design and artificial intelligence, streamlining the creative process and elevating the final output. #AIinDesign #ParametricDesign #3dsMax #Vray #CoronaRender #Architecture #DigitalInnovation #ai #computationaldesign #mixeduse #aiinarchitecture
-
🚨 SIGGRAPH 2024 Paper Alert 🚨

➡️ Paper Title: 3D Gaussian Blendshapes for Head Avatar Animation

🌟 A few pointers from the paper

🎯 In this paper the authors introduce 3D Gaussian blendshapes for modeling photorealistic head avatars. Taking a monocular video as input, they learn a base head model of neutral expression, along with a group of expression blendshapes, each of which corresponds to a basis expression in classical parametric face models.

🎯 Both the neutral model and expression blendshapes are represented as 3D Gaussians, which contain a few properties to depict the avatar appearance. The avatar model of an arbitrary expression can be effectively generated by combining the neutral model and expression blendshapes through linear blending of Gaussians with the expression coefficients.

🎯 High-fidelity head avatar animations can be synthesized in real time using Gaussian splatting. Compared to state-of-the-art methods, their Gaussian blendshape representation better captures the high-frequency details exhibited in the input video and achieves superior rendering performance.

🏢 Organization: State Key Lab of CAD&CG, Zhejiang University

🧙 Paper Authors: Shengjie Ma, Yanlin Weng, Tianjia Shao, Kun Zhou

1️⃣ Read the Full Paper here: https://lnkd.in/g5fyjhkv
2️⃣ Project Page: https://lnkd.in/gbAMvxA4
3️⃣ Code: https://lnkd.in/gsGtgjqP

🎥 Be sure to watch the attached Technical Summary - Sound on 🔊🔊

Find this Valuable 💎? ♻️ REPOST and teach your network something new

Follow me 👣, Naveen Manwani, for the latest updates on Tech and AI-related news, insightful research papers, and exciting announcements.

#SIGGRAPH2024 #gaussiansplatting
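For readers curious what "linear blending of Gaussians with the expression coefficients" looks like in practice, here is a minimal NumPy sketch of the idea. It is not the authors' code: the attribute layout, array shapes, and function names are illustrative assumptions, and a real implementation would blend the Gaussians' specific parameters (positions, rotations, scales, opacities, colors) rather than a generic attribute matrix.

```python
import numpy as np

# Minimal sketch of linear blendshape combination over Gaussian attributes.
# Shapes and names are illustrative assumptions, not the paper's actual code.

def blend_avatar(neutral, blendshapes, coeffs):
    """
    neutral:     (N, D) per-Gaussian attributes of the neutral head model
    blendshapes: (K, N, D) attribute *offsets* for K basis expressions
    coeffs:      (K,) expression coefficients from a parametric face tracker
    Returns the blended (N, D) Gaussian attributes for the target expression.
    """
    # Classical linear blendshape model: neutral + sum_k w_k * delta_k,
    # applied to every Gaussian attribute channel at once.
    return neutral + np.tensordot(coeffs, blendshapes, axes=1)

# Toy example: 1000 Gaussians, 14 attribute channels, 52 expression bases.
N, D, K = 1000, 14, 52
rng = np.random.default_rng(0)
neutral = rng.normal(size=(N, D))
deltas = 0.01 * rng.normal(size=(K, N, D))
weights = rng.uniform(0.0, 1.0, size=K)
expr = blend_avatar(neutral, deltas, weights)
print(expr.shape)  # (1000, 14)
```

Note that rotation quaternions blended this way would need renormalization before splatting; consult the paper for how the actual pipeline handles each attribute.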
-
🚀 At twinzo, innovation never sleeps! 💡 We're thrilled to share some highlights from our recent demo day, where we showcased the cutting-edge advancements in our #technology. 🎉

🔍 Exploring #3D: During our R&D sessions, we delved into the most recent advancements in 3D technology. One area of focus was Gaussian splatting and light probes – innovative techniques that promise to revolutionize the way we create with #digitaltwins.

💡 Not Quite Ready for Prime Time: While these features aren't yet ready for production release, we're incredibly excited about the potential they hold. Our team is hard at work refining and perfecting these technologies, ensuring they meet our high standards of quality and #performance.

#gaussiansplatting
Gaussian splatting builds on Radiance Field methods, using 3D Gaussians to improve visual quality in real time. Integrating sparse points from camera calibration, this method represents scenes while preserving continuous volumetric #radiancefields. It also introduces interleaved optimization/density control of 3D Gaussians and a rapid rendering #algorithm supporting anisotropic splatting. This technique shows promise for instant generation of high-quality 3D scenes from 2D images.

#lightprobes
Light probes are vital tools in #computer graphics for accurately capturing and replicating lighting conditions in virtual environments. Placed strategically, they gather #data on light intensity, color, and directionality from different angles, enabling the creation of realistic lighting effects and reflections. This enhances the visual #quality of virtual worlds, making them more immersive and visually stunning in applications like video #games and #virtualreality simulations.

Stay tuned as we continue to push the limits of #innovation and bring you the future of digital twinning! 🚀💫

#digitaltwin #industry40 #digitalization #future
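To make the light-probe idea concrete, here is a small, hedged sketch of one common probe representation: diffuse irradiance stored as second-order spherical harmonics (9 coefficients per color channel). The coefficients, constants layout, and function names below are illustrative placeholders, not twinzo's implementation.

```python
import numpy as np

# Sketch: evaluating diffuse irradiance from a light probe stored as
# 2nd-order (9-coefficient) real spherical harmonics per RGB channel.
# Coefficient values below are illustrative placeholders, not measured data.

def sh_basis(n):
    """Real SH basis (l <= 2) evaluated at unit direction n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,                        # l=0
        0.488603 * y,                    # l=1
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,                # l=2
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

def probe_irradiance(sh_rgb, normal):
    """Approximate RGB irradiance toward `normal` from (9, 3) SH coefficients."""
    # Clamped-cosine convolution kernel (Ramamoorthi & Hanrahan 2001):
    # A_0 = pi, A_1 = 2*pi/3, A_2 = pi/4.
    a = np.array([3.141593, 2.094395, 2.094395, 2.094395,
                  0.785398, 0.785398, 0.785398, 0.785398, 0.785398])
    b = sh_basis(normal / np.linalg.norm(normal))
    return np.maximum(sh_rgb.T @ (a * b), 0.0)   # (3,) RGB irradiance

# Placeholder probe: dim ambient light plus a lobe arriving from +Y ("above").
sh = np.zeros((9, 3))
sh[0] = 0.8   # ambient term
sh[1] = 0.5   # linear Y lobe
print(probe_irradiance(sh, np.array([0.0, 1.0, 0.0])))
```

Game engines typically bake a grid of such probes and interpolate between them per object; the sketch shows only the per-probe evaluation step.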
-
𝗙𝗲𝗲𝗹 𝘁𝗵𝗲 𝗱𝗿𝗲𝗮𝗺! 𝗣𝗵𝘆𝘀𝗗𝗿𝗲𝗮𝗺𝗲𝗿, a method for enabling static 3D objects to dynamically respond to interactive stimuli like forces or manipulations in a physically realistic way.

PhysDreamer takes a static 3D object represented as 3D Gaussians and first renders it from a viewpoint. It then uses an image-to-video generation model to create a reference video showing the object in realistic motion. PhysDreamer optimizes a spatially varying material field (specifically the Young's modulus, representing stiffness) and an initial velocity field for the 3D object by running a differentiable physics simulation. The optimization aims to make the rendered simulation output match the reference video.

By estimating the physical material properties from the video generation model's learned dynamics priors, PhysDreamer can synthesize realistic 3D object motions in response to novel interactions like applied forces. The authors evaluate PhysDreamer on examples like flowers, plants, and clothing, showing more realistic interactive dynamics compared to previous methods.

👉 Follow the link to read the research: https://lnkd.in/dXrxUepz
👉 Be sure to applaud the researchers in the comments below: Tianyuan Zhang, Hong-Xing Yu, Rundi Wu, Brandon Feng, Changxi Zheng, Noah Snavely, Jiajun Wu, William T. Freeman

🐾 𝗢𝘂𝗿 𝗖𝗼𝗺𝗺𝗲𝗻𝘁
Approaches like PhysDreamer could potentially allow creating virtual objects that behave and respond much more realistically to interactions, enabling more immersive and engaging gaming experiences on mobile devices. The estimated physics properties make the objects move naturally based on the game's physics engine when forces or manipulations are applied.

🔔 𝗔𝗳𝗿𝗮𝗶𝗱 𝗼𝗳 𝗺𝗶𝘀𝘀𝗶𝗻𝗴 𝗼𝘂𝘁 𝗼𝗻 𝗼𝘂𝗿 𝗻𝗲𝘄𝘀? Worry not! Place the emoji 😀 in a comment below, and we'll make sure to tag you once we publish anything new.

👑 Here are our VIP alert subscribers: Martyn Redstone, Illia Shestakov, Maryanne Collins, Marianna Inozemtseva, Asya Polyak
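As a rough illustration of the optimization loop described above, here is a toy, hedged sketch in PyTorch: a 1D mass-spring chain stands in for the paper's far richer differentiable simulation, and a synthetic trajectory stands in for the video-derived reference frames. All names and the simulation itself are assumptions for illustration, not PhysDreamer's actual code.

```python
import torch

# Toy sketch: fit a per-node stiffness field (stand-in for Young's modulus)
# and an initial velocity field so a differentiable rollout matches a
# reference trajectory, echoing PhysDreamer's optimization objective.

N, STEPS, DT = 32, 60, 0.01
rest = torch.linspace(0.0, 1.0, N)   # rest positions of the chain nodes

def simulate(log_stiffness, v0):
    """Explicit-Euler rollout; returns positions at every step, (STEPS, N)."""
    k = torch.exp(log_stiffness)      # positivity via log-parameterization
    x, v, out = rest.clone(), v0, []
    for _ in range(STEPS):
        stretch = (x[1:] - x[:-1]) - (rest[1:] - rest[:-1])
        f = torch.zeros(N)
        f[:-1] += k[:-1] * stretch    # spring force on the left node
        f[1:] -= k[:-1] * stretch     # equal and opposite on the right node
        v = v + DT * f
        v = torch.cat([torch.zeros(1), v[1:]])   # pin the first node
        x = x + DT * v
        out.append(x)
    return torch.stack(out)

# Synthetic "ground truth" in place of the video-derived reference motion.
with torch.no_grad():
    true_k = torch.full((N,), 2.0)
    true_v0 = 0.1 * torch.sin(torch.linspace(0.0, 3.14, N))
    reference = simulate(torch.log(true_k), true_v0)

log_k = torch.zeros(N, requires_grad=True)   # learnable stiffness field
v0 = torch.zeros(N, requires_grad=True)      # learnable initial velocity
opt = torch.optim.Adam([log_k, v0], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(simulate(log_k, v0), reference)
    loss.backward()                  # gradients flow through the whole rollout
    opt.step()
print(float(loss))
```

The real system compares rendered frames of a simulated 3D Gaussian scene against generated video rather than raw positions, but the gradient-through-simulation structure is the same.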
-
We've been writing a lot about Gaussian splatting recently. This high-fidelity alternative to photogrammetry is a relatively new technique for turning 2D photos or video into 3D scenes. Adobe's free new app Substance 3D Viewer supports it, and we've seen some impressive results for 3D visualisations in Unreal Engine 5 (also see our V-Ray 7 review). Gaussian splatting can be used to capture accurate lighting and reflections as well as geometry. Now researchers have described a new technique that could mean you don't even need to capture the images or video for Gaussian splats yourself. ActiveSplat is an autonomous tool that walks its way around an indoor space and produces a realistic and accurate 3D representation with incredible efficiency.
Impressive new tech can automatically map 3D spaces.
creativebloq.com
-
For this epoch of Nights & Weekends s5, organised by buildspace, I want to give my idea a personal touch and purpose. "Nights & Weekends" is a constructive effort to encourage you to learn in public and network with people around the world. They even have an AI LLM that clears your doubts and matches you up with people around the world 🌍 who can help you with your work. I really resonate with buildspace's idea and charm of making learning in public cool and fun. Here is my idea, feel free to comment... it's part of the game 😉. Let's dive into the world of 3D modelling, 3D graphics in JavaScript, and retrieval-augmented generation (RAG) with LLMs.
-
✨ RadSplat: Pioneering High-Speed 3D Rendering with Radiance Fields and Gaussian Splatting ✨

💡 Introduction: Introducing RadSplat, an innovative rendering system that utilizes radiance fields and Gaussian splatting to deliver real-time, high-quality view synthesis for large-scale scenes. This breakthrough achieves a staggering 900+ FPS, revolutionizing the efficiency of 3D rendering without compromising on visual fidelity.

⚙️ Main Features: RadSplat stands out with its novel pruning technique, reducing point count while preserving scene quality, and a test-time filtering approach that accelerates rendering for larger scenes. It leverages the power of neural fields with point-based scene representations, offering smaller model sizes and faster rendering. RadSplat's use of the state-of-the-art radiance field Zip-NeRF as a prior and supervision signal ensures stable optimization and exceptional reconstruction quality.

📖 Case Study or Example: Tested on the MipNeRF360 dataset, RadSplat demonstrated its prowess in rendering complex indoor and outdoor scenes with high-frequency texture details. It outperformed existing methods, including 3DGS and Zip-NeRF, in both quality and speed, showcasing its potential in applications like human and avatar reconstruction and SLAM systems.

❤️ Importance and Benefits: RadSplat's importance lies in its ability to balance speed and quality in real-time 3D rendering, a critical aspect for industries such as gaming, virtual reality, and architectural visualization. Benefits include a significant reduction in storage costs, seamless integration with graphics software, and robust handling of real-world captures with variable lighting and exposure.

🚀 Future Directions: While RadSplat has set new benchmarks in rendering speed and quality, future research will focus on further reducing training times and bridging the performance gap in large-scale scenes. The goal is to push the boundaries of real-time rendering and unlock new possibilities in visual computing.

📢 Call to Action: Dive deeper into the world of high-speed 3D rendering with RadSplat and discover how it's shaping the future of visual experiences. For a more comprehensive understanding, visit the full research paper at https://lnkd.in/ev_9RFyJ

#RadSplat #3DRendering #RadianceFields #GaussianSplatting #RealTimeRendering #ComputerGraphics #Innovation #TechTrends
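As a rough sketch of the pruning idea mentioned under Main Features, the snippet below keeps a Gaussian only if its largest blending contribution across training views clears a threshold. The function name, shapes, threshold value, and mocked contribution statistics are illustrative assumptions; RadSplat's actual pruning criterion is defined in the paper.

```python
import numpy as np

# Hedged sketch of importance-based pruning for a Gaussian-splat scene.
# `contributions` is mocked here; in a real system it would come from the
# rasterizer's per-pixel alpha-blending weights over the training views.

def prune_gaussians(attrs, contributions, threshold=0.01):
    """
    attrs:         (N, D) per-Gaussian parameters (position, scale, color, ...)
    contributions: (V, N) peak blending weight of each Gaussian in each of V views
    Returns the pruned attribute array and the boolean keep-mask.
    """
    importance = contributions.max(axis=0)   # best contribution over all views
    keep = importance > threshold
    return attrs[keep], keep

rng = np.random.default_rng(0)
attrs = rng.normal(size=(100_000, 59))             # e.g. position + SH color + scale
base = rng.beta(0.2, 5.0, size=100_000)            # per-Gaussian peak visibility
contrib = base * rng.random(size=(8, 100_000))     # jittered across 8 mock views
pruned, mask = prune_gaussians(attrs, contrib, threshold=0.01)
print(f"kept {mask.mean():.1%} of {len(attrs):,} Gaussians")
```

The design intuition is that many Gaussians never contribute meaningfully to any training view, so dropping them shrinks the model and speeds up rasterization with little visual cost.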