LVF-E is a comprehensive front-view fusion and perception stack for entry-level ADAS L2/L2+ highway assist and 5-star NCAP 2025/GSR 2022. LeddarTech’s low-level fusion (LLF) technology pushes the performance envelope, doubling the effective range of the sensors and enabling for the first time a solution with only a single 1.2-megapixel 120-degree front camera and two short-range front corner radars in a 1V2R configuration. Low-cost sensing, together with efficient implementation on the TDA4L platform, achieves the lowest system cost for L2/L2+ entry-level ADAS.
B-sample is planned for Q2 2023, targeting vehicle SOP in 2025/2026.
With the sensor configuration extended to 1V5R, based on a single 3-megapixel 120-degree camera, a single front medium-range radar and four short-range corner radars, the LVF-H stack extends perception support to highway assist applications, including 160 km/h adaptive cruise control, a 200-meter detection range and semi-automated lane changes. It also enhances NCAP 2025 support for overtaking/reverse/dooring scenarios. Furthermore, with an efficient implementation on the TDA4 platform and a single Hailo-8 AI processor, low-cost sensing makes front-view L2/L2+ premium ADAS economical.
B-sample is planned for Q3 2023, targeting vehicle SOP in 2026.
Based on LeddarTech’s comprehensive and demonstrated expertise in low-level sensor data fusion, the LeddarVision software processes sensor data at a low level to efficiently build the reliable understanding of the vehicle’s environment required for navigation decision-making and safer driving. LeddarVision resolves many limitations of ADAS architectures based on legacy object-level fusion: by fusing information from all sensors before objects are formed, it delivers superior, more reliable performance in adverse scenarios such as occluded objects, object separation, camera/radar false alarms, blinding light (e.g., sun, tunnel exits) and distance/heading estimation.
Implementing and commercializing a comprehensive, scalable end-to-end perception program to support all levels of ADAS in the automotive market is known to be highly challenging. LeddarVision makes your vision a technical and commercial reality with groundbreaking fusion and perception innovation that democratizes the deployment of advanced, cost-effective ADAS and AD features, enabling safer and smarter vehicles under increasingly complex driving scenarios.
Scaling from a front-view family of products to surround view and parking assist, LeddarVision is designed to enable not only L2/L2+/L3 ADAS but also 5-star safety ratings under new car assessment programs (NCAP) and compliance with general safety regulations (GSR).
With the LeddarVision software stack, OEMs and their Tier-1 and Tier-2 suppliers can leverage a scalable, unified platform that resolves key sensor fusion and perception challenges, improves ADAS performance and accelerates time-to-market. LeddarTech has strong domain expertise and a complete, demonstrable work process to bring your technology integration from concept to practice.
Automotive applications enabled by LeddarVision include highway assist (HWA), park assist, adaptive cruise control (ACC), collision warning systems (front and rear), automated emergency braking (AEB [C2C and VRU]), lane keep assist (LKA), lane change assist (LCA), speed assist (SA), blind spot detection (BSD), traffic light recognition (TLR), traffic jam assist (TJA) and driver-initiated automated lane change.
From farms to mining sites, an increasing number of industrial vehicles are being equipped with environmental perception solutions aimed at providing advanced driver assistance capabilities, increasing productivity or fully automating the vehicle’s operations. LeddarVision delivers up to 360° perception in real time and state-of-the-art unidentified-obstacle detection based on low-level fusion of data from LiDARs, radars, cameras and other sensors. This advanced, sensor-agnostic software platform offers customizable, high-performance environmental perception solutions for all levels of industrial vehicle autonomy.
To create a highly accurate environmental model for safe and reliable autonomous driving, multiple processes must work in sync. Below is a high-level view of our environmental perception software framework. The process starts with the raw data received directly from the vehicle sensors via a software API and ends with the complete environmental model data that is passed to the AV driving software module.
The calibration, unrolling and matching module receives raw sensor data and synchronizes and fuses it into a unified 3D model. Upsampling increases the effective resolution of the distance sensors, resulting in a dense RGBD model in which each pixel carries both color and depth information. Localization and motion tracking determine the vehicle’s own position and velocity.
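The upsampling step can be illustrated with a minimal sketch. This is not LeddarTech’s actual algorithm; it is a hypothetical nearest-sample densification, assuming a camera image and a sparse depth map (from a radar or LiDAR projected into the camera frame, with zeros marking pixels that received no measurement) that are already calibrated and aligned:

```python
import numpy as np

def upsample_depth_to_rgbd(rgb, sparse_depth):
    """Densify a sparse depth map (zeros mark missing samples) by copying the
    nearest valid measurement along each row, then stack it with the RGB image
    into an H x W x 4 RGBD model: color plus depth for every pixel."""
    dense = sparse_depth.astype(float).copy()
    h, w = dense.shape
    cols = np.arange(w)
    for y in range(h):
        valid = np.flatnonzero(dense[y] > 0)
        if valid.size == 0:
            continue  # no depth samples on this row; leave it unknown
        # index of the nearest valid column for every pixel in the row
        nearest = valid[np.argmin(np.abs(cols[:, None] - valid[None, :]), axis=1)]
        dense[y] = dense[y][nearest]
    return np.dstack([rgb, dense])
```

A production system would interpolate in 2D and weight samples by confidence; the sketch only shows the shape of the problem, turning sparse range returns into a per-pixel depth channel.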
Frame-based object detection and road segmentation cover obstacles, vehicles, pedestrians, bicycles, motorcycles, lane borders, available free space and more. Detection by classification is performed with DNN-based algorithms that require training; in parallel, detection without classification is performed by a separate set of algorithms, enabling the detection of unexpected obstacles. Multiframe object tracking adds 3D modeling, motion tracking and sizing for each object.
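The classification-free path can be sketched in a few lines. The function names, the height-grid input and the 0.3 m threshold are illustrative assumptions, not the product’s implementation; the point is that a simple geometric rule catches obstacles no classifier was trained on, and its results are merged with the DNN detections:

```python
def detect_without_classification(height_grid, ground_height=0.0, min_height=0.3):
    """Flag any grid cell whose measured height above the ground plane exceeds
    a threshold. No class model is involved, so unexpected obstacles
    (debris, lost cargo) are still detected."""
    obstacles = []
    for cell, h in height_grid.items():
        if h - ground_height >= min_height:
            obstacles.append({"cell": cell, "height": h, "label": None})
    return obstacles

def merge_detections(classified, unclassified):
    """Union of classified DNN detections and classification-free obstacles;
    a cell already claimed by a classified detection keeps its label."""
    claimed = {d["cell"] for d in classified}
    return classified + [d for d in unclassified if d["cell"] not in claimed]
```

Running both paths in parallel is the design choice the text describes: the trained detector supplies labels where it can, while the geometric detector guarantees that anything tall enough still enters the environmental model.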
The resulting environmental model data is accessed via our software API and includes an occupancy grid and list of parameters for any tracked objects: localization, orientation, motion vector and more.
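The shape of that API output can be sketched with plain data classes. The field names and types below are assumptions chosen to mirror the parameters the text lists (localization, orientation, motion vector, size), not LeddarVision’s actual interface:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TrackedObject:
    track_id: int
    position: Tuple[float, float, float]   # localization (x, y, z), meters
    orientation: float                     # heading in the vehicle frame, radians
    velocity: Tuple[float, float]          # motion vector (vx, vy), m/s
    size: Tuple[float, float, float]       # length, width, height, meters
    label: Optional[str] = None            # None for unclassified obstacles

@dataclass
class EnvironmentalModel:
    occupancy_grid: List[List[int]]        # 0 = free cell, 1 = occupied cell
    objects: List[TrackedObject] = field(default_factory=list)

    def get_object(self, track_id: int) -> Optional[TrackedObject]:
        """Look up one tracked object by ID, as an API client might."""
        return next((o for o in self.objects if o.track_id == track_id), None)
```

A driving-policy module consuming this model would query the occupancy grid for free-space planning and iterate over the tracked objects for prediction and collision checks.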
E-Book – This complimentary e-book explains the main features and components of the LeddarVision sensor fusion and perception solution for automotive and mobility ADAS and autonomous driving. Read the E-Book
Blog Post – This post introduces new car assessment programs (NCAP), the role they play in enabling road safety and the various NCAP programs across the world, with a specific focus on how the U.S. and Europe have embraced technology in their new car assessment programs and the growing importance of advanced driver assistance systems (ADAS). Read the Blog Post
Tech Note – This document explains the principles, configurations and workings of sensor fusion, demonstrates a practical ADAS application of sensor fusion through attitude estimation, and concludes by presenting the commercially available solutions on the market today. Download the Tech Note