Robots use special sensors, like lidar, to "see" the world around them 🤖. Lidar shoots lasers to create a 3D map of the environment, helping robots understand where things are. But sometimes, lidar can't catch everything, like small obstacles or bumpy ground 😕. Now, companies like NVIDIA are giving robots better vision using cameras and AI. With these upgrades, robots not only see distances but also recognize objects! Other companies, like Boston Dynamics and Inovance, are also working on improving robot vision. Smarter robots mean safer teamwork with humans and exciting possibilities for the future! 🚀 #AI #RobotVision #FutureTech #Innovation
Tavishi Jaglan’s Post
-
How Robots See the World?!

Robots rely on a combination of sensors to "see" the world, and a key player is lidar. This technology uses lasers to build a 3D map of the environment, but it has limitations. While lidar excels at measuring distance, it can miss crucial details: low-lying obstacles or uneven terrain might be invisible to a standard lidar.

NVIDIA, with its Isaac Perceptor system, is using multiple cameras and AI to give robots a more complete picture. Imagine robots with 3D surround vision, able not only to gauge distance but also to understand the objects they see!

NVIDIA isn't the only player in the robot vision game. Companies like Boston Dynamics with their advanced cameras and Inovance with their depth sensors are all pushing the boundaries of robot perception.

Robots are getting smarter, and their vision is getting better. This means safer collaboration between robots and humans, and it opens doors for even more intelligent and versatile robots in the future!

Follow Endrit Restelica to stay up to date with AI.

#ai #tech #robotics #innovation #gtc2024
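The core idea of combining cameras and lidar can be sketched very simply: the camera says *what* an object is, the lidar says *how far away* it is. The snippet below is a minimal, purely illustrative fusion step (all names and data are made up, this is not the Isaac Perceptor API): lidar points already projected into the image plane are pooled inside each camera detection box to attach a distance to it.

```python
# Minimal sketch of camera-lidar fusion: attach a distance estimate to each
# camera detection by pooling the lidar points that project into its box.
# All function names and data here are illustrative, not any vendor's API.

from statistics import median

def fuse_detections(detections, lidar_points):
    """detections: list of (label, (x1, y1, x2, y2)) boxes in image coords.
    lidar_points: list of (u, v, depth_m) points already projected into the
    same image plane. Returns (label, distance_m) pairs."""
    fused = []
    for label, (x1, y1, x2, y2) in detections:
        depths = [d for u, v, d in lidar_points
                  if x1 <= u <= x2 and y1 <= v <= y2]
        # Median is robust to stray returns from the background.
        fused.append((label, median(depths) if depths else None))
    return fused

dets = [("pallet", (100, 80, 200, 160))]
pts = [(150, 120, 4.9), (160, 130, 5.1), (400, 300, 12.0)]
print(fuse_detections(dets, pts))  # [('pallet', 5.0)]
```

Real systems do this with calibrated projection matrices and learned fusion networks, but the payoff is the same: a detection that carries both a semantic label and a reliable range.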
-
6G / Remote Operated Driving / Object Detection: We received "Excellent" feedback for our work at the review of the AI-NET-ANTILLAS project yesterday!

We showed how our COOL-Fusor component improves environment perception by integrating LIDAR data from nearby vehicles, which in turn improves automated and remote-operated driving.

For the evaluation, we used our #EclipseMOSAIC simulation environment. It models and simulates vehicles and LIDAR sensors as well as all communication characteristics, such as network latencies, and thus reveals potential for improvement in the environmental perception of vehicles.

More information at: https://lnkd.in/dsC6Mrp5

We will also participate with AI-NET and our work at the Berlin 6G Conference in July: https://lnkd.in/dzqP_r8r

Fraunhofer FOKUS Eclipse Foundation Bundesministerium für Bildung und Forschung VDI/VDE Innovation + Technik GmbH CELTIC-NEXT EUREKA CLUSTER
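The basic step behind fusing LIDAR data from nearby vehicles can be sketched in a few lines: each vehicle shares points in its own frame together with its pose, and the receiver transforms everything into one common map frame. This is a toy 2D illustration of the principle under those assumptions, not the actual COOL-Fusor interface.

```python
# Toy sketch of multi-vehicle lidar fusion: transform each vehicle's
# points into a shared map frame using that vehicle's pose, then merge.
# Illustrative only -- not the actual COOL-Fusor component.

import math

def to_map_frame(points, pose):
    """points: [(x, y)] in the sender's frame; pose: (px, py, heading_rad)
    of the sender in the map frame. Returns points in the map frame."""
    px, py, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(px + c * x - s * y, py + s * x + c * y) for x, y in points]

def fuse(clouds):
    """clouds: list of (points, pose), one entry per vehicle."""
    merged = []
    for points, pose in clouds:
        merged.extend(to_map_frame(points, pose))
    return merged

ego = ([(1.0, 0.0)], (0.0, 0.0, 0.0))
other = ([(2.0, 0.0)], (10.0, 0.0, math.pi))  # facing back toward ego
print(fuse([ego, other]))  # ego point stays at (1, 0); other's maps to ~(8, 0)
```

In practice the hard parts are exactly what a simulator like Eclipse MOSAIC lets you study: network latency means the shared points arrive stale, so poses must be time-aligned before the transform.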
-
Time for another round of what I'm seeing, and this time, it's not about the humanoid 😁 It's Tesla giving a clue about how they solve 𝐢𝐧𝐝𝐨𝐨𝐫 𝐩𝐨𝐬𝐢𝐭𝐢𝐨𝐧𝐢𝐧𝐠 𝐰𝐢𝐭𝐡𝐨𝐮𝐭 𝐋𝐢𝐃𝐀𝐑 (you know, the sensor Elon really despises 👇)

➡ So there are only two sources of odometry: the footsteps carrying the torso forward, and the camera + IMU in the head.

⛸ Neither is 'absolute' --> they will drift over time, meaning you measure a relative movement with a small error, and those errors add up.

🗺 They fix this by 'mapping' the area and comparing the map with what the robot sees. They show a map with many 'dots' in the video. But these dots are not the classical 'point clouds' you get from LiDAR, since there is none!

👀 The clue is in the camera image overlays: Tesla uses a feature tracker which identifies stable points in the image and tracks them in 3D as the torso/head moves. Watch it, the points are stable in the video!

💯 By storing many of these features in a single 3D map, the robot self-corrects for drift while gaining many reference points to compare its current position against. Win-win.

Happens to be exactly what we're working on at Intermodalics. Reach out if you want to learn more!

Source: Milan Kovac ❤️
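The self-correction step described above has a very simple core: re-observe features whose map positions are known, and use the mismatch to undo the accumulated odometry drift. Here's a deliberately tiny 2D sketch of that principle (illustrative only, obviously not Tesla's actual pipeline, which would estimate a full 6-DoF pose robustly):

```python
# Toy sketch of map-based drift correction: odometry drifts, so compare
# currently observed feature positions against their stored map positions
# and average the mismatch to get a pose correction. Purely illustrative.

def drift_correction(observed, mapped):
    """observed: feature positions as seen from the drifted pose estimate;
    mapped: the same features' stored map positions (matched by index).
    Returns the (dx, dy) drift to subtract from the pose estimate."""
    n = len(observed)
    dx = sum(o[0] - m[0] for o, m in zip(observed, mapped)) / n
    dy = sum(o[1] - m[1] for o, m in zip(observed, mapped)) / n
    return dx, dy

# The pose estimate has drifted +0.3 m in x, so every re-observed
# feature appears shifted by that same amount.
mapped = [(1.0, 2.0), (4.0, 1.0), (3.0, 5.0)]
observed = [(1.3, 2.0), (4.3, 1.0), (3.3, 5.0)]
print(drift_correction(observed, mapped))  # ~(0.3, 0.0)
```

Averaging over many features is what makes this robust: each individual feature match is noisy, but the systematic drift shows up in all of them, which is why storing lots of reference points in the map is a win.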
-
🔲 WILL RADAR SOON REPLACE LiDAR? 4D IMAGING RADAR BASED ON TDA4.

👉 Moving from 3D to 4D radar brings much better resolution and less noise. Both LiDAR and radar sensors have improved to the point where each could arguably work standalone. The video compares a conventional 4D radar with the new TDA4-based 4D radar.

◻ LiDAR: FMCW operation makes it more robust in bad weather and adds velocity estimation.
◻ RADAR: The overall better resolution removes a lot of noise and makes it easier to measure distances, classify objects, etc.

👉 Altos V2 is probably the world's first 4D imaging radar product based on TI's TDA4 processor. It is designed for ADAS and fully autonomous driving applications.

👉 The TDA4VM provides high-performance compute for both traditional and deep learning algorithms at industry-leading power/performance ratios, with a high level of system integration to enable scalability and lower costs for advanced automotive platforms supporting multiple sensor modalities in centralized ECUs or stand-alone sensors.

👉 Yellow – LiDAR points, serving as ground truth in both cases.
👉 Green – stationary points from radar.
👉 Red – approaching points from radar.
👉 Pink – departing points from radar.
👉 The grid is 20 m x 20 m, with identical settings for both cases (left/right).

◻ The test location is Beijing Road in Beijing, China.
◻ Founded in January 2023 by experts from leading tech firms including Apple, Pony.ai and Mozilla, Altos Radar secured investments from venture capitalists ZhenFund 真格基金 and Monad Ventures, as well as Hesai CEO Yifan Li.
◻ In a notable company move, Dr. Mingkang Li, former head of next-generation imaging radar technology at Bosch (Germany), was appointed President at Altos Radar in 2024.

video: Altos Radar/globalnetwork

#Altos #AltosV2 #AltosRadar #LiDAR #4DImaging #TexasInstruments #sensor #autonomousdriving #ADAS #Bosch

Li Niu Michael Wu Mingkang Li Altos Radar Texas Instruments LightWare LiDAR Bosch Mobility
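The green/red/pink color legend reflects the one thing 4D radar gives you directly that most lidars don't: a per-point radial (Doppler) velocity. A sketch of that classification logic, with an illustrative threshold (the real overlay's threshold and sign convention are assumptions here):

```python
# Sketch of the color legend's logic: 4D radar reports a radial (Doppler)
# velocity per point, so points split into stationary / approaching /
# departing with a simple threshold. Threshold and sign convention are
# illustrative, not taken from the actual Altos overlay.

def classify(points, v_thresh=0.3):
    """points: [(x, y, radial_velocity_mps)], negative = closing in.
    Returns one label per point, matching the legend in the post."""
    labels = []
    for x, y, v in points:
        if abs(v) < v_thresh:
            labels.append("stationary")   # green in the overlay
        elif v < 0:
            labels.append("approaching")  # red
        else:
            labels.append("departing")    # pink
    return labels

print(classify([(5.0, 1.0, 0.0), (12.0, -2.0, -8.5), (20.0, 3.0, 6.2)]))
# ['stationary', 'approaching', 'departing']
```

This is why the per-point velocity matters for ADAS: motion class falls out of a single frame, with no need to track points across frames first.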
-
#AI has the potential to detect and process millions of inputs at once. Did you know that you can theoretically detect the same amount, but your brain can only process about 60 at a time? 🧠 This discrepancy leaves room for human error, especially in scenarios like driving, where every detail counts. 💥 AI can help: the innovative LiDAR sensors developed by ZEISS spin-off Scantinel Photonics are designed to detect objects and foresee potential road incidents up to ten seconds before they happen, paving the way to help the EU reach its goal of zero traffic deaths by 2050. 🚗 #AutonomousDriving Read more about making mobility safer and what that means for your future here: https://lnkd.in/dUJzdwJw #ZEISS #Mobility #Innovation #VisionZero
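One hedged illustration of why this kind of look-ahead is possible: FMCW lidar measures range *and* radial velocity directly per return, so a simple time-to-collision estimate falls out of a single measurement. This is a generic sketch of that idea, not Scantinel's actual prediction method.

```python
# Generic time-to-collision sketch: with range and closing speed measured
# directly (as FMCW sensors do), seconds-ahead warning is one division.
# Illustrative only -- not Scantinel Photonics' actual algorithm.

def time_to_collision(range_m, closing_speed_mps):
    """Seconds until contact if the closing speed stays constant;
    None when the object is not closing in at all."""
    if closing_speed_mps <= 0:
        return None
    return range_m / closing_speed_mps

print(time_to_collision(140.0, 14.0))  # 10.0 -> a ten-second warning
print(time_to_collision(50.0, -2.0))   # None -> object moving away
```

Real prediction pipelines add tracking and trajectory models on top, but direct velocity measurement is what buys those extra seconds compared with inferring speed from successive range frames.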
-
Here’s a fun fact: while you can theoretically detect the same amount of input as AI, artificial intelligence can detect and process millions of inputs at once. On the other hand, your brain can only process about 60 at a time!🧠💥 Human error can happen in a blink, especially while driving! That’s where Scantinel Photonics comes in—our FMCW LiDAR sensors can predict road incidents up to 10 seconds ahead! 🚗✨ Check out the full post from our ZEISS Group family to discover how we’re on track to help the EU achieve zero traffic deaths by 2050 and make mobility safer for everyone! 🔗 https://lnkd.in/eBjfzdHe #ZEISS #ScantinelPhotonics #Innovation #FutureMobility #ArtificialIntelligence #VisionZero #AutonomousDriving
-
Advantech in partnership with CronAI! CronAI uses 3D Lidar sensors together with their senseEDGE Deep Learning perception software to produce highly accurate and reliable object data. This allows fully anonymous tracking and monitoring of people and vehicles across a wide range of applications including Automation, Smart Cities and Intelligent Transportation Systems (ITS) Find out more about @CronAI: https://lnkd.in/dvt5YF8Y https://cronai.ai/ #DeepLearning #Automation #SmartCities #IntelligentTransportationSystems #Advantech #NVIDIA #AI #InferenceAI #Jetson #perceptionsoftware
-
Robots with Super Vision: LiDAR Sensors Can See the Unseen 😳 Imagine robots navigating dark caves, saving lives in disaster zones, or flawlessly zipping around warehouses. Think of LiDAR as 3D laser eyes: robots use it to map their surroundings, "seeing" in darkness and navigating complex environments. The future of robotics is here, and it has superhuman vision. Follow Rinor Restelica for more. #LiDAR #Tech #AI #AR #Productivity