Last week, we showed manipulation tasks trained in Vlearn with inference in Vlab. This week, we are moving from manipulation to locomotion. To start, we trained the classic humanoid running task in Vlearn and ran inference in Vlab. The fully converged policy, trained over 1000 epochs, takes approximately 7 minutes to train on a 3080 laptop GPU and 3 minutes on a 4090 desktop GPU while simulating 4096 humanoids concurrently. However, a stable running gait emerges within 2 minutes of training on the 3080 laptop and in less than a minute on the 4090. The training graph for the 4090 is in the comments. #digitaltwin #robotics #AI #ML #RL #simulation #vsim #vlearn #vlab
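The speedup comes from stepping thousands of environments in lockstep as one batched tensor operation. Vlearn's API is not shown in the post, so this is only a minimal NumPy sketch of the batching pattern (the observation/action sizes and the linear policy are illustrative assumptions, not Vlearn code):

```python
import numpy as np

NUM_ENVS = 4096            # humanoids simulated concurrently, as in the post
OBS_DIM, ACT_DIM = 76, 17  # illustrative humanoid sizes (assumed)

def policy(obs, weights):
    """Toy linear policy: a single matmul drives all 4096 humanoids at once,
    instead of looping over environments one by one."""
    return np.tanh(obs @ weights)

rng = np.random.default_rng(0)
obs = rng.standard_normal((NUM_ENVS, OBS_DIM))
weights = 0.01 * rng.standard_normal((OBS_DIM, ACT_DIM))

actions = policy(obs, weights)  # shape (4096, 17), one action row per humanoid
```

On a GPU the same batched call is what turns hours of sequential simulation into minutes.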
Vsim’s Post
Summary series of Robotics/AI development over several years, Episode 13. Building on the hardware setup introduced in the previous episode, an efficient way to collect data and train models is to use simulators such as NVIDIA Isaac or OpenAI Gym; here, we used the NVIDIA Isaac simulator. We trained the policy with a state-of-the-art reinforcement learning algorithm, Soft Actor-Critic (SAC). To accelerate training, we ran the policy in parallel across 100 or more independent scenarios, enabling the robotic arm to successfully stack a red block on top of a green one. Enjoy the video.
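For readers new to SAC: its critic bootstraps from a "soft" target that subtracts an entropy term from the minimum of two target Q-values. A minimal scalar sketch of that target computation (all numeric values illustrative, not from the project above):

```python
def soft_td_target(reward, done, q1_next, q2_next, log_prob_next,
                   gamma=0.99, alpha=0.2):
    """SAC critic target for one transition:
    y = r + gamma * (1 - done) * [min(Q1', Q2') - alpha * log pi(a'|s')].
    Taking the min of two critics curbs overestimation; the -alpha*log pi
    term rewards keeping the policy stochastic."""
    soft_value = min(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * (1.0 - done) * soft_value

# Example: min(5.0, 4.5) = 4.5; 4.5 - 0.2*(-1.0) = 4.7; 1.0 + 0.99*4.7 = 5.653
y = soft_td_target(reward=1.0, done=0.0, q1_next=5.0, q2_next=4.5,
                   log_prob_next=-1.0)
```

In the parallel setup described above, this target is simply computed over a whole batch of transitions gathered from the 100+ scenarios at once.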
From OpenCV Webinar Success to Weekend Robotics Fun! 🤖✨ Hello LinkedIn Community, Last week, we had an incredible time at the OpenCV Live Webinar, where I showcased the latest advancements in Robot Studio and the integration of robotics digital twins and AI-powered learning. A huge thank you to everyone who attended and engaged! 🚀 Over the weekend, I had some productive (and super fun) time working with the SO-100 robotic arm. I'm thrilled to share a sneak peek of my latest progress: 🎥 Wireless SO-100 in Action: The SO-100 robotic arm is now working wirelessly with the help of an ESP32 module! Not only that, but the real robot is replicating the movements of my left arm using an OAK device combined with a hand and body tracker. The result? A seamless connection where my gestures translate directly into robot actions in real-time. 🤯 Wireless and real robot movements will soon unlock full SO-100 integration within Robot Studio, featuring exciting capabilities like inverse kinematics (IK) and LeRobot support, allowing the SO-100 to perform complex tasks. I can’t wait to keep pushing forward and sharing more with you all! What do you think about controlling robots through intuitive gestures? Let me know in the comments below! 🤖💬 #SO100RobotArm #RobotStudio #OpenCV #ESP32 #OAKDevice #ComputerVision #Robotics #AI #STEM #Innovation #GestureControl
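Before the tracked arm pose can drive the SO-100, each joint angle has to be converted into a servo command the ESP32 can output. The post doesn't show its mapping, so here is a hedged sketch of a typical angle-to-pulse-width conversion (the ranges are common hobby-servo defaults, assumed here):

```python
def angle_to_servo_us(angle_deg, angle_min=0.0, angle_max=180.0,
                      pulse_min=500, pulse_max=2500):
    """Map a tracked joint angle (degrees) to a servo pulse width in
    microseconds, clamping first so a noisy hand/body tracker can never
    command an out-of-range pose."""
    angle = max(angle_min, min(angle_max, angle_deg))
    frac = (angle - angle_min) / (angle_max - angle_min)
    return int(round(pulse_min + frac * (pulse_max - pulse_min)))
```

The clamp is the important design choice: gesture trackers jitter, and sending raw angles straight to hardware is a quick way to strip a gear.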
Hello connections! Excited to share my latest video on YOLOv8 object detection. In this video, I delve into the incredible world of YOLOv8 and explore its state-of-the-art AI capabilities. Special thanks to the AIMER Society - Artificial Intelligence Medical and Engineering Researchers Society and Sai Satish sir for continuous support and inspiration. #AIMERS #AIMERSOCIETY #APSCHE #DATASCIENCE #OPENCV #HUGGINGFACE #ARTIFICIALINTELLIGENCE #MACHINELEARNING #DEEPLEARNING #BIGDATA #COMPUTERVISION #NATURALLANGUAGEPROCESSING #TECHNOLOGY #INNOVATION #ROBOTICS #AUTOMATION
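A core building block behind detectors like YOLOv8 is intersection-over-union, the overlap score used during non-maximum suppression to discard duplicate boxes. This is a minimal hand-written sketch of the metric, not Ultralytics' actual implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # 0 if boxes disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

During NMS, a detection is dropped when its IoU with a higher-confidence box of the same class exceeds a threshold (often around 0.5).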
"A pipeline for intuitively learning quadruped locomotion skills from a single demonstration video" https://lnkd.in/gDhEAPtJ SDS ("See it. Do it. Sorted."): Quadruped Skill Synthesis from Single Video Demonstration https://lnkd.in/gNyfSW6K Leveraging the visual capabilities of GPT-4o, SDS processes input videos through our novel chain-of-thought prompting technique (SUS) and generates executable reward functions (RFs) that drive the imitation of locomotion skills, through learning a Proximal Policy Optimization (PPO)-based Reinforcement Learning (RL) policy, using environment information from the NVIDIA #IsaacGym simulator. SDS autonomously evaluates the RFs by monitoring the individual reward components and supplying training footage and fitness metrics back into GPT-4o, which is then prompted to evolve the RFs to achieve higher task fitness at each iteration. The method was validated on a Unitree Robotics Go1 robot, demonstrating its ability to execute variable skills such as trotting, bounding, pacing and hopping, achieving high imitation fidelity and locomotion stability. SDS shows improvements over SOTA methods in task adaptability, reduced dependence on domain-specific knowledge, and bypassing the need for labor-intensive reward engineering and large-scale training datasets. #quadrupedal #robot #Unitree_Go1 #GPT4o #ReinforcementLearning #RL #NVIDIA #IsaacGym #simulation #SDS #RPL #UCL
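Generated reward functions for locomotion typically take the shape of a weighted sum of exponential tracking terms plus penalties. The sketch below is a hand-written illustration of that shape, not actual SDS/GPT-4o output; all weights and terms are assumptions:

```python
import math

def trot_reward(forward_vel, target_vel, base_height, target_height,
                joint_torque_sq_sum):
    """Illustrative locomotion reward of the kind an LLM is prompted to emit:
    exponential kernels reward tracking the commanded velocity and body
    height, while a small penalty discourages excessive joint torque."""
    vel_term = math.exp(-4.0 * (forward_vel - target_vel) ** 2)
    height_term = math.exp(-20.0 * (base_height - target_height) ** 2)
    torque_penalty = 1e-4 * joint_torque_sq_sum
    return 1.0 * vel_term + 0.5 * height_term - torque_penalty
```

Monitoring each component separately (as SDS does) is what lets the loop diagnose which term the evolved reward should reweight next.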
🚀 Exciting Project Update! 🚀 I'm thrilled to share the latest progress on my robotic arm project! I have integrated deep reinforcement learning, specifically Proximal Policy Optimization (PPO), to enhance the movement and precision of our robotic arms in simulation. Watching these agents learn and accomplish tasks through reinforcement learning has been incredibly impressive. However, this cutting-edge technology demands significant GPU power. Currently, I am limited to training a small batch of robots on a single PPO model due to resource constraints. To push the boundaries and train more robots simultaneously, a higher GPU capacity is essential. #Robotics #ReinforcementLearning #DeepLearning #PPO #ArtificialIntelligence #GPU #Innovation #Technology
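The heart of PPO is its clipped surrogate objective, which caps how far one update can push the policy ratio. A minimal single-sample sketch (scalar for clarity; real implementations average this over batches of trajectories):

```python
import math

def ppo_clip_objective(log_prob_new, log_prob_old, advantage, eps=0.2):
    """PPO-Clip surrogate for one sample:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A), where r = pi_new / pi_old.
    The min with the clipped ratio removes the incentive to move the policy
    more than eps away from the old one in a single update."""
    ratio = math.exp(log_prob_new - log_prob_old)
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```

This is exactly the mechanism that keeps batched arm-control updates stable even when a few trajectories have unusually large advantages.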
"Raft Optical Flow Implementation with PyTorch and Gradio 🌊🚀 I've successfully implemented the Raft optical flow model using PyTorch and Gradio, creating a visually stunning and interactive application. This project allowed me to delve deeper into optical flow techniques to detect motion and explore their applications in various fields, such as computer vision, robotics, and autonomous systems. Thanks to the power of PyTorch and Gradio, I was able to build a user-friendly interface that allows users to upload their own videos and instantly visualize the optical flow results. Github Link - https://lnkd.in/gSDmj58Q #OpticalFlow #PyTorch #Gradio #ComputerVision #MachineLearning"
I'm excited to present our Quadruped Locomotion Project. Our goal was to make a quadruped walk using two methods: Model Predictive Control and Reinforcement Learning. For the #MPC we implemented the paper "Dynamic Locomotion in the MIT Cheetah 3 Through Convex Model-Predictive Control" in #QuadSDK on #ROS. To achieve that we delved deep into #OptimalControl and #LeggedRobots theory. For the #RL, a Proximal Policy Optimization algorithm was used in NVIDIA's Isaac Sim. It was an intense project that definitely challenged us, but in the end what matters is the knowledge and skills we acquired, which are priceless. I was fortunate to do this with two amazing colleagues and friends, João Parlatore Lancha and Dyuman Aditya from Centrale Nantes, which made the process even more enjoyable. The paper we used: https://lnkd.in/em6-8YEh MPC Code - GitHub: https://lnkd.in/e_uCYWWB #robotics #control #quadrupeds
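The convex-MPC formulation in that paper treats the robot as a single rigid body whose center of mass is driven by the stance-foot ground-reaction forces plus gravity. A minimal sketch of one forward-integration step of just the translational part (mass, timestep, and the semi-implicit Euler choice are illustrative assumptions, not the paper's exact discretization):

```python
def euler_com_step(pos, vel, foot_forces, mass=9.0, dt=0.03, g=9.81):
    """One semi-implicit Euler step of the center-of-mass translation used in
    single-rigid-body MPC models: m * a = sum(f) - m * g * z_hat (z up).
    foot_forces is a list of (fx, fy, fz) ground-reaction forces."""
    fx = sum(f[0] for f in foot_forces)
    fy = sum(f[1] for f in foot_forces)
    fz = sum(f[2] for f in foot_forces)
    acc = (fx / mass, fy / mass, fz / mass - g)
    new_vel = tuple(v + a * dt for v, a in zip(vel, acc))
    new_pos = tuple(p + v * dt for p, v in zip(pos, new_vel))
    return new_pos, new_vel
```

The MPC linearizes this model over the horizon and solves a QP for the foot forces; the RL policy, by contrast, learns the mapping to joint targets directly from massively parallel simulation.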
A BIG shout out to my brother Jeffrey Rivero for all the late hours listening to my nagging and providing support, input and help! YOU are the best! 💙 Ever tried to train a massive language model on consumer-grade GPUs like the RTX 4090? Well, it's not as easy as you might imagine! Read all about it: https://lnkd.in/dwE_GrRX #GenerativeAI #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #NaturalLanguageProcessing #ComputerVision #Robotics #Automation #Innovation Rock steady! 5 hours and counting!
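One standard trick for squeezing large-model training onto a VRAM-limited card like a 4090 is gradient accumulation: process several micro-batches, sum their gradients, and take one optimizer step as if it were one big batch. A pure-Python sketch of why the math works out, using a toy 1-D linear model (the model and data are illustrative, not from the linked write-up):

```python
def grad_mse_linear(w, xs, ys):
    """Gradient dL/dw of mean-squared error for y ~ w * x over a batch."""
    n = len(xs)
    return sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def accumulated_grad(w, xs, ys, micro_batch=2):
    """Accumulate micro-batch gradients, weighting each by its size, so the
    result equals the full-batch gradient without ever holding the full
    batch in memory at once."""
    total, n = 0.0, len(xs)
    for i in range(0, n, micro_batch):
        mx, my = xs[i:i + micro_batch], ys[i:i + micro_batch]
        total += grad_mse_linear(w, mx, my) * len(mx)
    return total / n
```

The same identity is what frameworks rely on when you trade batch size for accumulation steps to fit a model into 24 GB.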
Product Growth @ Speech Graphics | Ex Epic Games
1mo · Awesome work Kier Storey and Michelle Lu. Moving from days/hours to minutes/seconds for these tasks will really help. I can't believe how long we spent training robotic arms at Xihelm only for it to not work.