📖 New Publication Alert! We are thrilled to announce the publication of the PLIADES project's first conference paper: "Automated Data Labelling for Pedestrian Crossing and Not-Crossing Actions Using 3D LiDAR and RGB Data". This innovative paper, co-authored by Kosmas Tsiakas, Dimitris Alexiou, Dimitris Giakoumis, Antonis Gasteratos, and Dimitrios Tzovaras, was presented at the 2024 IEEE International Conference on Imaging Systems and Techniques (IST) in Tokyo, Japan. 🔗 Read more here: https://lnkd.in/d59J9_e2 #PLIADES #DataSpaces #AI #Innovation #HorizonEurope #Pedestrians #Annotations #Tracking #Roads #Pipelines #Neural_networks #Autonomous_vehicles DS2 | CEDAR EU | CyclopsProject | NOUS
PLIADES project’s Post
📃Scientific paper: AYDIV: Adaptable Yielding 3D Object Detection via Integrated Contextual Vision Transformer Abstract: Combining LiDAR and camera data has shown potential for enhancing short-distance object detection in autonomous driving systems. Yet the fusion struggles with long-distance detection because LiDAR's sparse returns contrast with the dense resolution of cameras. Moreover, discrepancies between the two data representations further complicate fusion methods. We introduce AYDIV, a novel framework integrating a tri-phase alignment process specifically designed to enhance long-distance detection even amidst data discrepancies. AYDIV consists of the Global Contextual Fusion Alignment Transformer (GCFAT), which improves the extraction of camera features and provides a deeper understanding of large-scale patterns; the Sparse Fused Feature Attention (SFFA), which fine-tunes the fusion of LiDAR and camera details; and the Volumetric Grid Attention (VGA) for comprehensive spatial data fusion. AYDIV's performance on the Waymo Open Dataset (WOD), with an improvement of 1.24% in mAPH (L2 difficulty), and on the Argoverse 2 dataset, with an improvement of 7.40% in AP, demonstrates its efficacy compared to other existing fusion-based methods. Our code is publicly available at https://lnkd.in/ekQ-yWST Comment: This paper has been accepted for ICRA 2024, and copyright will automatically transfer to IEEE upon its availability on the IEEE portal. Continued on ES/IODE ➡️ https://etcse.fr/6TSHQ ------- If you find this interesting, feel free to follow, comment and share. We need your help to enhance our visibility, so that our platform continues to serve you.
DynamicCity is a novel 4D LiDAR generation framework designed for producing large-scale, high-quality dynamic LiDAR scenes that evolve over time. Its primary objective is to improve the generation of LiDAR data, especially in dynamic environments, making it well suited to applications like autonomous driving. Here are the key points:
✅ 4D LiDAR Scene Generation: DynamicCity generates 4D LiDAR scenes that capture both spatial and temporal structure, unlike traditional models that focus on static 3D scenes.
✅ HexPlane Representation: A VAE encodes LiDAR data into a compact 4D representation called HexPlane, consisting of six 2D feature maps that capture the various spatial and temporal dimensions.
✅ Efficient Compression and Expansion: A novel projection module compresses high-dimensional LiDAR data into the HexPlane efficiently, and an "Expansion & Squeeze Strategy" improves accuracy and training efficiency, allowing faster and more memory-efficient reconstruction of 4D LiDAR data.
✅ Diffusion Transformer (DiT): The HexPlane features feed a DiT model, which generates the 4D LiDAR scenes, progressively refining them to capture complex spatial and temporal relationships in the data.
✅ Applications: DynamicCity supports downstream tasks such as trajectory-guided generation, command-driven generation, and inpainting. These features allow the model to control scene dynamics and modify LiDAR data during generation, making it highly flexible for real-world scenarios like autonomous-vehicle simulation.
✅ Performance: Experimental results show that DynamicCity outperforms state-of-the-art methods in both 4D scene reconstruction and generation, achieving significant gains in metrics like mIoU (mean intersection over union), generation quality, training speed, and memory efficiency.
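To make the HexPlane idea concrete, here is a minimal sketch (not DynamicCity's actual code, and all sizes are invented) of how six 2D feature planes, one per pair of axes over (X, Y, Z, T), can stand in for a dense 4D feature volume:

```python
import numpy as np

# Illustrative HexPlane-style factorization: six 2D feature maps, one per
# pair of the four axes (x, y, z, t). Dimensions and channel count are
# made up for this example.
X, Y, Z, T, C = 32, 32, 16, 8, 4

planes = {
    pair: np.random.rand(C, *dims)
    for pair, dims in {
        ("x", "y"): (X, Y), ("x", "z"): (X, Z), ("x", "t"): (X, T),
        ("y", "z"): (Y, Z), ("y", "t"): (Y, T), ("z", "t"): (Z, T),
    }.items()
}

def query(ix, iy, iz, it):
    """Fuse the six planes at one 4D coordinate by elementwise product."""
    feat = np.ones(C)
    feat *= planes[("x", "y")][:, ix, iy]
    feat *= planes[("x", "z")][:, ix, iz]
    feat *= planes[("x", "t")][:, ix, it]
    feat *= planes[("y", "z")][:, iy, iz]
    feat *= planes[("y", "t")][:, iy, it]
    feat *= planes[("z", "t")][:, iz, it]
    return feat

print(query(3, 5, 2, 1).shape)  # one C-dim feature per 4D point
```

The point of the factorization is memory: the six planes store far fewer values than a dense C × X × Y × Z × T grid, which is what makes compact 4D encoding and fast reconstruction plausible.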
DynamicCity’s innovative approach to 4D LiDAR scene generation positions it as a powerful tool for simulating dynamic environments, particularly useful in areas like robotics and autonomous driving. #AI #LLM #GPT #RLHF
Mapping with LiDAR (Light Detection and Ranging) offers several advantages over conventional methods such as traditional surveying and photogrammetry. Here are some of the key benefits:
✅ Accuracy and Precision: LiDAR provides highly accurate and precise data, capable of capturing fine details of the terrain and structures with a high level of detail.
✅ Speed: LiDAR can cover large areas quickly compared to conventional ground surveys. This efficiency reduces the time required for data collection.
✅ Data Density: LiDAR systems can capture millions of data points per second, resulting in high-resolution maps and models. This data density is far superior to that of traditional surveying methods.
✅ Penetration Through Vegetation: LiDAR can penetrate through vegetation and other obstacles to measure the ground surface beneath. This capability is particularly useful in densely forested areas where traditional methods might struggle.
✅ Automation and Processing: LiDAR data can be processed automatically using specialized software, enabling rapid generation of digital elevation models (DEMs), 3D models, and other outputs.
✅ Versatility: LiDAR can be deployed on various platforms, including aerial (drone, helicopter, and airplane), terrestrial (ground-based), and mobile (vehicle-mounted), making it adaptable to different surveying environments.
✅ Safety: For hazardous or hard-to-reach areas, LiDAR allows data collection from a safe distance, reducing the risk to personnel.
✅ Time of Day and Lighting Conditions: LiDAR is an active sensing technology, meaning it does not rely on external light sources. It can operate effectively both day and night, unlike photogrammetry, which requires good lighting conditions.
These advantages make LiDAR a powerful tool for a wide range of applications, including topographic mapping, forestry, urban planning, infrastructure monitoring, and disaster management.
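As a toy illustration of the DEM generation mentioned above, the sketch below grids a synthetic point cloud and keeps the lowest return per cell as a crude ground estimate. The point cloud, extent, and cell size are all invented for the example; real pipelines use proper ground-classification filters.

```python
import numpy as np

# Hedged sketch: build a simple digital elevation model (DEM) from a
# LiDAR point cloud by keeping the lowest z per grid cell.
rng = np.random.default_rng(0)
# 10,000 synthetic returns over a 50 m x 50 m tile, elevations 100-120 m.
points = rng.uniform([0, 0, 100], [50, 50, 120], size=(10000, 3))

cell = 5.0  # DEM resolution in metres
nx, ny = int(50 / cell), int(50 / cell)
dem = np.full((nx, ny), np.inf)

# Assign each point to a cell and scatter-reduce with minimum.
ix = np.minimum((points[:, 0] / cell).astype(int), nx - 1)
iy = np.minimum((points[:, 1] / cell).astype(int), ny - 1)
np.minimum.at(dem, (ix, iy), points[:, 2])  # lowest z per cell ≈ ground

print(dem.shape)
```

The "lowest return per cell" heuristic is why vegetation penetration matters: with enough pulses reaching the ground through canopy gaps, the minimum in each cell approximates the bare-earth surface.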
🇰🇷Korea's #1 Robotics Voice | Your Partner for Robotics in Korea🇰🇷 | 💡🤖 Join 71,000+ followers, 50mio views | Contact for collaboration!
🌍🤖 𝗠𝗮𝗽𝗽𝗶𝗻𝗴 𝘁𝗵𝗲 𝗨𝗻𝘀𝗲𝗲𝗻: 𝗧𝗵𝗲 𝗣𝗼𝘄𝗲𝗿 𝗼𝗳 𝗟𝗶𝗗𝗔𝗥 𝗶𝗻 𝗨𝗻𝘃𝗲𝗶𝗹𝗶𝗻𝗴 𝘁𝗵𝗲 𝗪𝗼𝗿𝗹𝗱 Have you ever wondered how autonomous vehicles "see" the road? The secret lies in a powerful technology known as LiDAR (Light Detection and Ranging). But what exactly is LiDAR, and how does it transform beams of light into detailed maps? 🔍 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴 𝗟𝗶𝗗𝗔𝗥 𝗧𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝘆: LiDAR sensors emit rapid pulses of laser light, which bounce off objects and return to the sensor. The time it takes for the light to return is measured, allowing the sensor to calculate the distance to objects with high precision. This process occurs thousands of times per second, creating a dynamic, real-time map of the environment. 𝗞𝗲𝘆 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗼𝗳 𝗟𝗶𝗗𝗔𝗥 𝗠𝗮𝗽𝗽𝗶𝗻𝗴: ✅ 𝗛𝗶𝗴𝗵 𝗣𝗿𝗲𝗰𝗶𝘀𝗶𝗼𝗻: LiDAR provides highly accurate distance measurements, essential for applications requiring detailed geographic information. ✅ 𝟯𝗗 𝗠𝗮𝗽𝗽𝗶𝗻𝗴: It captures the shape, size, and even the texture of objects, contributing to the creation of three-dimensional maps. ✅ 𝗩𝗲𝗿𝘀𝗮𝘁𝗶𝗹𝗶𝘁𝘆: Effective in various lighting and weather conditions, LiDAR is utilized in numerous fields, from autonomous driving and forestry to flood modeling and urban planning. ❗ 𝗦𝘁𝗮𝘆 𝘁𝘂𝗻𝗲𝗱 and follow Robert 지영 Liebhart for more insights into how #robotics, #AI, and #automation are reshaping the world. Credits: Stefan Nitz Seen also at: Ulrich M., Lukas M. Ziegler, Michal Ukropec Michal Gula
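The time-of-flight principle described above boils down to one formula: distance is the speed of light times the round-trip time, divided by two. A minimal sketch:

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
# The division by 2 accounts for the pulse travelling out and back.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A pulse returning after about 667 nanoseconds hit something ~100 m away.
print(round(tof_distance(667e-9), 1))
```

The nanosecond scale of these round trips is why LiDAR receivers need very precise timing electronics to achieve centimetre-level range accuracy.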
In what scenarios can 3D LiDAR be used in the field of industrial sensing? ✅ Traffic ✅ Robotics ✅ Volume detection ✅ Container positioning ...... 👇 Click to learn more about products and applications: https://lnkd.in/eAPMpnc4 #3DLiDAR #Industrial
👀 Ever wondered what powers the precision of LiDAR sensors? Let’s take a look *inside* the tech that’s revolutionising mapping, object detection, and autonomous navigation. This image reveals the intricate mechanism of a LiDAR sensor: 🔸 Laser Source: Emits laser beams that measure distances by reflecting off objects. 🔸 Tilting Mirror: Redirects the beam, enabling wide field-of-view scanning. 🔸 Optical Rotary Encoders: Provide precise angle tracking for accurate mapping. 🔸 Servo Motor: Powers the mirror’s rotation, ensuring smooth, continuous scanning. 🔸 Receiver: Detects reflected beams and calculates distance through time-of-flight data. Put them all together, and they result in a detailed 3D map of the surroundings – accurate, efficient, and indispensable for industries from autonomous vehicles to geospatial mapping. The Digiflec team is proud to be the UK’s leading distributor of cutting-edge LiDAR sensors. Whether you're building smarter cities, safer vehicles, or innovative robotics, our sensors are at the heart of your success. Want to see what LiDAR can do for your project? Drop us a message or visit our website www.digiflec.com to learn more. 🚀 Image credit: Kuan-Min Huang, Tung-Lin Hsieh, Chia-An Yi, and Chan-Yun Yang #LiDAR #Innovation #TechExplained #AutonomousSystems
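To show how the pieces listed above fit together: the encoder reports the mirror's angles, the receiver reports a time-of-flight range, and combining them places a point in 3D. The sketch below does the spherical-to-Cartesian step; angle conventions vary by sensor, so treat this as illustrative rather than any specific device's firmware.

```python
import math

# Combine the encoder's mirror angles with the receiver's range to place
# a return in 3D sensor coordinates (spherical -> Cartesian).
def polar_to_xyz(range_m, azimuth_rad, elevation_rad):
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

print(polar_to_xyz(10.0, 0.0, 0.0))  # straight ahead: (10.0, 0.0, 0.0)
```

Repeating this for every pulse as the mirror sweeps is exactly what turns a stream of (angle, range) pairs into the 3D point cloud the post describes.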
🔓 Adaptive AI: Unlocking Spatial Intelligence in Autonomous Mobility with Single-Chip FMCW LiDAR Let’s take a look at how Scantinel Photonics' Single-Chip FMCW LiDAR minimizes computational load and boosts AI response times: 🔍 Rich Datasets Unlike frame-by-frame Time-of-Flight (ToF) systems, our LiDAR provides instant spatial and velocity data, empowering AI with precise object detection, classification, and tracking. 🔗 Seamless Integration High-resolution, low-noise data flows directly into AI algorithms, enhancing real-time predictions and fine-tuning decisions in dynamic scenarios. 🧠 Continuous Learning Our LiDAR’s continuous stream of high-quality data fuels adaptive learning, allowing AI to sharpen its spatial intelligence km by km. In a Nutshell: Our FMCW LiDAR empowers AI with precise detection and real-time learning—all within a compact, single-chip solution. 🚀 Next Week: We'll touch on why interoperable LiDAR is key to future-proofing autonomous mobility. #LiDAR #SingleChip #FMCW #AdaptiveAI #FutureOfMobility #Scantinel
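The per-point velocity that distinguishes FMCW from Time-of-Flight comes from the Doppler shift of the returned light: radial speed is the frequency shift times the wavelength, divided by two. A minimal sketch (the wavelength and numbers are illustrative, not Scantinel specifications):

```python
# FMCW Doppler velocity: v = (frequency shift x wavelength) / 2.
# 1550 nm is a common FMCW LiDAR operating wavelength (assumed here).
WAVELENGTH = 1550e-9  # metres

def radial_velocity(doppler_shift_hz):
    return doppler_shift_hz * WAVELENGTH / 2.0

# A ~12.9 MHz Doppler shift corresponds to roughly 10 m/s closing speed.
print(round(radial_velocity(12.9e6), 2))
```

Because velocity arrives with each point rather than being inferred by differencing frames, downstream tracking and classification get a head start, which is the "reduced computational load" argument in the post.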
𝗟𝗼𝗻𝗴-𝗿𝗮𝗻𝗴𝗲 𝗟𝗶𝗗𝗔𝗥-𝗴𝘂𝗶𝗱𝗲𝗱 𝗺𝗼𝗻𝗼𝗰𝘂𝗹𝗮𝗿 𝟯𝗗 𝗼𝗯𝗷𝗲𝗰𝘁 𝗱𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻 𝗶𝗻 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝘁𝗿𝗮𝗶𝗻𝘀 🚆 Europe’s #railway systems are vital but face significant modernization challenges, requiring advanced technologies for safety and efficiency. In this context, research led by Raúl David Domínguez Sánchez showed an #AI-based solution for long-range LiDAR-guided monocular #3D object detection in autonomous trains. The proposed system features a pipeline with 4️⃣ 𝗺𝗼𝗱𝘂𝗹𝗲𝘀: 𝟮.𝟱𝗗 𝗼𝗯𝗷𝗲𝗰𝘁 𝗱𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻 with distance prediction (using a modified YOLOv9), 𝗱𝗲𝗽𝘁𝗵 𝗲𝘀𝘁𝗶𝗺𝗮𝘁𝗶𝗼𝗻 (U-Net like), and dedicated heads for 𝘀𝗵𝗼𝗿𝘁- 𝗮𝗻𝗱 𝗹𝗼𝗻𝗴-𝗿𝗮𝗻𝗴𝗲 𝟯𝗗 𝗼𝗯𝗷𝗲𝗰𝘁 𝗱𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻. Inspired by the #Faraway-Frustum approach, the method incorporates #LiDAR data during training to enhance depth estimation. During inference, the system operates exclusively with a monocular camera, generating pseudo-point clouds and frustums from depth images. This eliminates the need for LiDAR in deployment but still maintains effective detection capabilities. Tested on the OSDaR23 railway dataset, the system delivered 𝗽𝗿𝗼𝗺𝗶𝘀𝗶𝗻𝗴 𝗹𝗼𝗻𝗴-𝗿𝗮𝗻𝗴𝗲 𝗱𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻 𝗰𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀. While results were effective, the experiments identified areas for further refinement and optimization. As a take-away, the method provides a 𝗰𝗼𝘀𝘁-𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲 𝗯𝗮𝗰𝗸𝘂𝗽 𝗮𝗹𝘁𝗲𝗿𝗻𝗮𝘁𝗶𝘃𝗲 to pure LiDAR-based 3D object detection, particularly in scenarios where LiDAR sensors are unavailable due to cost constraints, hardware failures or other limitations, and a single camera is the only resource. We congratulate Raul on the successful completion of his Master's thesis!
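The pseudo-point-cloud step mentioned above can be sketched as back-projecting a depth image into 3D camera coordinates with a pinhole model. The intrinsics and depth values below are invented for illustration; the thesis's actual pipeline adds the frustum construction and detection heads on top.

```python
import numpy as np

# Back-project a depth image to a pseudo-point cloud (pinhole camera model).
# Intrinsics are assumed values, not those of any real sensor.
fx = fy = 500.0          # focal lengths in pixels
cx, cy = 32.0, 24.0      # principal point

depth = np.full((48, 64), 20.0)  # toy depth image: 20 m everywhere
v, u = np.indices(depth.shape)   # pixel row/column grids

# Pinhole inversion: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

print(points.shape)  # one 3D point per pixel: (3072, 3)
```

Since every quantity here comes from the camera and the learned depth map, no LiDAR is needed at inference time, which is what makes the approach a backup when the LiDAR sensor is absent or fails.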
Revolutionizing Sensing: The Impact of 4D LiDAR Technology 🚀 In the rapidly evolving world of technology, the introduction of 4D LiDAR is set to redefine our understanding of autonomous systems. Enter the Aeries II—the world’s first 4D LiDAR sensor with camera-level resolution. This groundbreaking device is not just about measuring dimensions; it takes us a step further by incorporating velocity measurement of detected objects. So, what makes Aeries II a game-changer? Leveraging Aeva’s innovative Frequency Modulated Continuous Wave (FMCW) technology, this sensor outperforms traditional Time-of-Flight systems. The unique LiDAR-on-chip silicon photonics design condenses the capabilities of bulky LiDAR components into a compact module, paving the way for unprecedented applications in various fields, particularly in automotive technology. Why 4D LiDAR Matters: ● Enhanced Detection: Aeries II offers an impressive detection range of 500 meters without interference from sunlight or other sensors, ensuring reliability in diverse conditions. ● Versatile Applications: With multiple field-of-view configurations, it adapts to various placements, making it ideal for innovative projects across different sectors. ● Continuous Monitoring: The sensor's ability to track velocity allows for real-time predictions about object movement, a critical factor for developing safe autonomous systems. The Future is Here As we embrace this cutting-edge technology, it's crucial to understand how to integrate it effectively. Organizations looking to harness the power of 4D LiDAR must prioritize: 1. Training and Development: Equip teams with knowledge on how to implement and utilize these advanced systems. 2. AI Integration: Combine AI with 4D LiDAR for enhanced data analysis and decision-making capabilities. 3. Collaboration: Foster partnerships between technology providers and end-users to ensure that the technology meets real-world needs. 
In a landscape where innovation is essential for survival, 4D LiDAR is not just an upgrade; it’s a leap into the future. #4DLiDAR #Innovation #AutonomousSystems #Aeva #TechnologyTrends #MachineLearning #AI #SensorTechnology #FutureOfTransportation #SmartCities #DigitalTransformation
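The "continuous monitoring" benefit above rests on a simple idea: once a sensor reports both position and velocity, a tracker can extrapolate where an object will be. A minimal constant-velocity sketch (the numbers are illustrative):

```python
# Constant-velocity motion model: predicted position p' = p + v * dt.
# With per-point velocity from 4D LiDAR, no frame differencing is needed.
def predict_position(pos, vel, dt):
    return tuple(p + v * dt for p, v in zip(pos, vel))

# A car at (50, 0, 0) m closing at 15 m/s; where is it 0.2 s from now?
print(predict_position((50.0, 0.0, 0.0), (-15.0, 0.0, 0.0), 0.2))
```

Real trackers replace this with filtered motion models (e.g. Kalman filters), but the direct velocity measurement is what removes the latency of estimating speed from successive frames.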
Robots use special sensors, like lidar, to "see" the world around them 🤖. Lidar shoots lasers to create a 3D map of the environment, helping robots understand where things are. But sometimes, lidar can't catch everything, like small obstacles or bumpy ground 😕. Now, companies like NVIDIA are giving robots better vision using cameras and AI. With these upgrades, robots not only see distances but also recognize objects! Other companies, like Boston Dynamics and Inovance, are also working on improving robot vision. Smarter robots mean safer teamwork with humans and exciting possibilities for the future! 🚀 #AI #RobotVision #FutureTech #Innovation