We’re excited to introduce our latest generative AI model, GenSim-2! GenSim-2 is Helm.ai’s state-of-the-art generative AI video foundation model that enables augmented reality re-simulation in autonomous driving development. The model brings advanced AI-based video editing capabilities, such as:
🟢 Dynamically modifying weather and lighting conditions (e.g., rain, fog, snow, glare, day/night) for any driving scene in any geography
🟢 Customizing object appearances (e.g., road surfaces, vehicle types/colors, pedestrians, buildings, vegetation, guardrails, and more) with a high degree of control
🟢 Maintaining realism and consistency across video sequences and simultaneous multi-camera perspectives
GenSim-2 supports both augmented reality modifications of real-world video footage and the creation of fully AI-generated video, at up to 696x696 image resolution and up to 30 frames per second. Trained using Helm.ai’s proprietary Deep Teaching™ technology, GenSim-2 enables the creation of diverse, highly realistic video data tailored to a wide variety of specific requirements. From enriching datasets to addressing rare corner cases, it offers a scalable, cost-effective solution for accelerating autonomous driving development and validation.
🔗 Learn more about GenSim-2: https://lnkd.in/gREUqqBc
#autonomousdriving #selfdrivingcars #embodiedai #machinelearning #artificialintelligence #generativeai #computervision #deepteaching #helmai
Helm.ai
Software Development
Redwood City, California · 9,246 followers
Helm.ai is building the next generation of AI technology for ADAS, autonomous driving, and robotics automation.
About us
- Website: http://helm.ai
- Industry: Software Development
- Company size: 51-200 employees
- Headquarters: Redwood City, California
- Type: Privately Held
- Founded: 2016
Locations
- Primary: Redwood City, California 94063, US
Employees at Helm.ai
- Vitaly Golomb, Managing Partner @ Mavka Capital - Mobility, Energy Transition, AI | Independent Board Member | Best-Selling Author | Speaker
- Tobias Wessels, Chief Development Officer | Helm.Ai
- Serge Lambermont, Automated Driving & AI Safety Expert | Automated Driving | AI Safety | Embedded Systems | Vehicle Electrification | Strategic…
- Steven Pav, Math & Statistics Hacker
Updates
We'll be exhibiting at NVIDIA GTC again this year! Visit us in Exhibit Hall 3 to see demos of our AI-first software and foundation models for autonomous driving and robotics. Reach out to us at https://helm.ai/contact-us to schedule time with our business development team. #GTC25 #nvidia #NVIDIAInception NVIDIA for Startups
In a recent interview with SAE International, Helm.ai's CEO Vladislav Voroninski discusses how our AI technology—originally developed for autonomous driving—is demonstrating its applicability in mining. The same Deep Teaching™-powered perception systems that enable high-end ADAS and L4 autonomous driving can also enhance safety and efficiency in mining operations. By leveraging scalable AI training and real-world field data, Helm.ai's technology can help detect hidden hazards, navigate complex environments, and execute critical tasks in an AI-driven fashion. 🔗 Read more in the full SAE interview: https://lnkd.in/gQRmwP84 #AI #artificialintelligence #machinelearning #AutonomousDriving #MiningTech #DeepLearning
GenSim-2 enables AI-based, multi-camera video editing with scene consistency. Our foundation model ensures coherent edits across perspectives, generating diverse, realistic datasets for training and validation. This capability helps automakers develop robust, safety-critical systems and address edge cases more effectively.
With GenSim-2, automakers can leverage AI to change the appearance of objects in driving data. Dev teams can customize vehicles, pedestrians, road surfaces, buildings, vegetation, and even rare objects like guardrails, creating an effectively unlimited variety of scenes. Our generative AI model supports both augmented reality modifications of real-world videos and fully AI-generated video scenes. These capabilities help automakers enrich datasets with varied scenarios, improving robustness and addressing rare edge cases in training and validation. Reach out to us at https://helm.ai/contact-us to learn more.
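As a purely hypothetical sketch of what such a dataset-enrichment plan could look like (GenSim-2's interface is not public, so every name, field, and value below is our own illustration, not Helm.ai's API), one might enumerate condition and appearance variants per recorded clip like this:

```python
# Hypothetical sketch only: GenSim-2's real interface is not public.
# This illustrates the shape of an augmentation plan that expands each recorded
# clip into weather/lighting/appearance variants; nothing here is Helm.ai's API.
from dataclasses import dataclass
from itertools import product
from typing import List

# Illustrative subsets of the conditions and appearance edits described in the post.
WEATHER = ["clear", "rain", "fog", "snow"]
LIGHTING = ["day", "night", "glare"]
VEHICLE_COLORS = ["original", "white", "red"]

@dataclass
class EditJob:
    """One requested augmented-reality edit of a real-world source clip."""
    source_clip: str       # recorded drive log (hypothetical path)
    weather: str           # target weather condition
    lighting: str          # target lighting condition
    vehicle_color: str     # example object-appearance edit
    resolution: int = 696  # per the post: up to 696x696
    fps: int = 30          # per the post: up to 30 frames per second

def build_plan(source_clips: List[str]) -> List[EditJob]:
    """Expand each clip into every combination of the variants above."""
    return [
        EditJob(clip, w, l, c)
        for clip in source_clips
        for w, l, c in product(WEATHER, LIGHTING, VEHICLE_COLORS)
    ]

if __name__ == "__main__":
    plan = build_plan(["logs/drive_0001.mp4", "logs/drive_0002.mp4"])
    print(f"{len(plan)} edit variants from 2 source clips")  # 2 * 4 * 3 * 3 = 72
```

Even a handful of recorded drives fans out into a much larger, more varied set of clips; the edits themselves would of course be produced by the model rather than by this kind of static enumeration.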
Our founder explains how AI model training can be compute-efficient:
As we've seen recently with the release of DeepSeek, there is substantial room for improvement in large-scale foundation models, both in terms of architectural efficiency and unsupervised training techniques. While the discussion has mostly been about LLMs, there is also a strong need to improve the scalability of generative AI in other domains, such as video and multi-sensor world models.
In the last several months we have released multiple foundation models for video and multi-sensor generative simulation for the autonomous driving space: VidGen-1 and 2, WorldGen-1, and GenSim-2. These models were developed fully in-house (not fine-tuned from any open-source models) using only ~100 H100 GPUs, inclusive of all R&D and final training runs, which is a tiny fraction of the compute budgets typically associated with video foundation model development (thousands to tens of thousands of H100 GPUs).
How did we achieve industry-leading foundation models with much less compute? We combined DNN architecture innovation with advanced unsupervised learning techniques. By leveraging our Deep Teaching technology and improvements to generative AI DNN architectures, we were able to use smaller, more parameter-efficient models and to simultaneously accelerate the unsupervised learning process, leading to superior scaling laws compared to industry-typical methods, which means higher accuracy per compute dollar spent, both during training and inference.
We have verified that these scaling-law advantages persist at larger scales of compute and data, and we look forward to continuing to push the frontier of world models for autonomous driving and robotics by scaling up. In essence, combining Deep Teaching with generative AI architecture innovation leads to a highly scalable form of generative AI for simulation.
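To make the "superior scaling laws mean higher accuracy per compute dollar" point concrete, here is a toy power-law comparison. The functional form (loss roughly k * C^(-alpha)) is the standard scaling-law ansatz; the constants and exponents below are invented for illustration and are not Helm.ai's measured numbers.

```python
# Toy illustration of how a better scaling law reduces the compute needed to
# reach a given accuracy. All constants/exponents are made up for illustration;
# they are NOT Helm.ai's measured numbers.

def loss(compute: float, k: float, alpha: float) -> float:
    """Power-law scaling ansatz: loss ~= k * compute**(-alpha)."""
    return k * compute ** (-alpha)

def compute_for_target(target_loss: float, k: float, alpha: float) -> float:
    """Invert the power law: compute needed to reach target_loss."""
    return (k / target_loss) ** (1.0 / alpha)

# Hypothetical "industry-typical" method vs. one with a steeper exponent
# (e.g., from better architectures plus unsupervised training).
baseline = dict(k=10.0, alpha=0.25)
improved = dict(k=10.0, alpha=0.35)

target = 2.0  # arbitrary target loss
c_base = compute_for_target(target, **baseline)
c_impr = compute_for_target(target, **improved)
assert abs(loss(c_base, **baseline) - target) < 1e-6  # sanity-check the inversion
assert abs(loss(c_impr, **improved) - target) < 1e-6
print(f"baseline: ~{c_base:,.0f} compute units; improved: ~{c_impr:,.0f} "
      f"(about {c_base / c_impr:.1f}x less for the same loss)")
```

The same advantage shows up at inference, since smaller, more efficient models also cost less per generated frame.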
Join us tomorrow at AUTONOMOUS summit to learn about commercial applications of generative AI in autonomous driving and robotics! Register here: https://lnkd.in/efaV-U5c
We’re excited to share that Helm.ai has achieved Automotive SPICE (ASPICE) Capability Level 2! This achievement underscores our commitment to meeting the rigorous standards required for the volume production of safety-critical software in mass-market road vehicles. Through UL Solutions' comprehensive assessment, we’ve demonstrated that all of our engineering processes are systematically planned, monitored, and controlled—ensuring reliable work products and efficient resource management in line with automotive industry standards. Click to learn more: https://lnkd.in/gk6-hCN6
Beyond autonomous driving, generative AI’s adaptability offers transformative opportunities in industries like mining and robotics, enabling innovation across uncrewed systems. In the latest research publication by AUVSI — Association for Uncrewed Vehicle Systems International, learn how Helm.ai leverages generative AI to address some of the most pressing challenges in autonomous system development, including real-time decision-making, predictive modeling, and system simulation. Read here: https://lnkd.in/gE2yEMYt
Join us next week at AUTONOMOUS summit, the leading virtual conference for AI innovators! Our CEO & Founder, Vladislav Voroninski, will deliver a talk on how generative AI powers scalable development and validation for autonomous driving. Follow this link to register for the virtual conference: https://lnkd.in/gMb24CAW