Superior Sensor Fusion

Enabling Safer Navigation

Building on LeddarTech's comprehensive, demonstrated expertise in low-level sensor data fusion, LeddarVision's AI-based software processes sensor data at a low level to efficiently build the reliable understanding of the vehicle's environment required for navigation decision-making and safer driving. LeddarVision resolves many limitations of ADAS architectures based on legacy object-level fusion by providing:

  • Scalability to AD/HAD
  • Flexible modularity to effectively handle a growing variety of use cases, features and sensor sets
  • Centralized, hardware-agnostic low-level fusion, which optimally fuses all sensors for higher and more reliable performance

Low-level sensor fusion utilizes information from all sensors for better and more reliable operation. As a result, this low-level sensor data fusion and perception solution delivers superior performance, overcoming the limitations of object-level fusion in adverse scenarios such as occluded objects, object separation, camera/radar false alarms, blinding light (e.g., sun, tunnel) or distance/heading estimation.

  • The most accurate 3D environmental model, object detection and classification
  • Based on low-level fusion of sensor data with upsampling
  • Single development platform for optimized cost-effectiveness and scalability
  • Sensor-agnostic – Combines data from LiDAR, radar and camera
  • Provides LiDAR performance levels using only camera + radar in Level 2 applications
  • Improves Level 3+ LiDAR performance and cost

LeddarVision for Automotive

Implementing and commercializing a comprehensive, scalable end-to-end perception program to support all levels of ADAS in the automotive market is known to be highly challenging. LeddarVision makes your vision a technical and commercial reality with groundbreaking fusion and perception innovation that democratizes the deployment of advanced, cost-effective ADAS and AD features, enabling safer and smarter vehicles under increasingly complex driving scenarios.

Scaling from a front-view family of products to surround-view and parking assist, LeddarVision is designed to enable not only L2/L2+/L3 ADAS but also 5-star safety ratings under new car assessment programs (NCAP) and compliance with general safety regulations (GSR).

With the LeddarVision software stack, OEMs and their Tier-1 and Tier-2 suppliers can leverage a scalable, unified platform that resolves key sensor fusion and perception challenges, improves ADAS performance and accelerates time-to-market. LeddarTech has strong domain expertise and a complete, demonstrable work process to bring your technology integration from concept to practice.

Automotive applications enabled by LeddarVision include highway assist (HWA), park assist, adaptive cruise control (ACC), collision warning systems (front and rear), automated emergency braking (AEB [C2C and VRU]), lane keep assist (LKA), lane change assist (LCA), speed assist (SA), blind spot detection (BSD), traffic light recognition (TLR), traffic jam assist (TJA) and driver-initiated automated lane change.

LeddarVision Perception Framework

To create a highly accurate environmental model for safe and reliable autonomous driving, multiple processes must work in sync. Below is a high-level view of our environmental perception software framework. The process starts with the raw data received directly from the vehicle sensors via software API, and ends with complete environmental model data that is passed to the AV driving software module.
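
As a rough sketch of that flow, the skeleton below runs one perception cycle from raw sensor frames to an environmental model. All function names and placeholder bodies are hypothetical assumptions for illustration only; Steps 1 to 3 below describe what each stage actually does in the framework.

```python
# Skeleton of one perception cycle: raw sensor data in, environmental model out.
# The names and placeholder bodies are illustrative assumptions, not the
# LeddarVision API; Steps 1-3 below describe the real stages.
import numpy as np

def fuse_and_upsample(camera_image, range_returns):
    """Step 1 placeholder: fuse raw data into a dense RGBD model (color + depth)."""
    h, w, _ = camera_image.shape
    return np.dstack([camera_image, np.zeros((h, w))])

def perceive(rgbd_model):
    """Step 2 placeholder: detect, segment and track objects on the RGBD model."""
    return []  # list of tracked objects

def build_environmental_model(rgbd_model, tracked_objects):
    """Step 3 placeholder: occupancy grid plus the tracked-object list."""
    return {"occupancy_grid": np.zeros((400, 400)), "objects": tracked_objects}

def perception_cycle(camera_image, range_returns):
    rgbd_model = fuse_and_upsample(camera_image, range_returns)
    tracked_objects = perceive(rgbd_model)
    return build_environmental_model(rgbd_model, tracked_objects)
```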

Step 1

Constructing a high-definition 3D RGBD model with low-level data fusion and upsampling algorithms

The calibration, unrolling and matching module receives raw sensor data before synchronizing and fusing it into a unified 3D model. Upsampling increases the effective resolution of the distance sensors, resulting in a dense RGBD model in which each pixel contains both color and depth information. Localization and motion tracking help determine the vehicle's own position and velocity.
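
As a concrete illustration, the sketch below fuses sparse range returns with a camera frame and upsamples them into a dense RGBD array. The pinhole projection, nearest-neighbour upsampling and NumPy/SciPy layout are assumptions made for this example, not LeddarVision's actual algorithms or API.

```python
# Minimal sketch, assuming calibrated sensors: fuse sparse depth returns
# (e.g., from LiDAR or radar) with a camera image and upsample them into a
# dense RGBD frame.
import numpy as np
from scipy.interpolate import griddata

def fuse_rgbd(image, points_xyz, K):
    """image: (H, W, 3) color frame; points_xyz: (N, 3) returns in the camera
    frame; K: (3, 3) camera intrinsics. Returns an (H, W, 4) RGBD array."""
    h, w, _ = image.shape

    # Keep only returns in front of the camera
    depth = points_xyz[:, 2]
    front = depth > 0
    pts, depth = points_xyz[front], depth[front]

    # Project the 3D returns into the image plane (pinhole model)
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]

    # Discard returns that fall outside the frame
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, depth = uv[inside], depth[inside]

    # Upsampling: give every pixel the depth of its nearest projected return,
    # turning the sparse distance measurements into a dense depth channel
    grid_u, grid_v = np.meshgrid(np.arange(w), np.arange(h))
    dense_depth = griddata(uv, depth, (grid_u, grid_v), method="nearest")

    # Each pixel now carries both color (RGB) and depth (D)
    return np.dstack([image.astype(np.float32), dense_depth.astype(np.float32)])
```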

Step 2

Applying context-aware algorithms to the 3D RGBD model to achieve perception

Frame-based object detection and road segmentation cover obstacles, vehicles, pedestrians, bicycles, motorcycles, lane borders, available free space and more. Detection with classification is performed by DNN-based algorithms that require training. In parallel, detection without classification is performed by a separate set of algorithms, enabling the detection of unexpected obstacles. Multiframe object tracking includes 3D modeling, motion tracking and sizing of each object.
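
The sketch below illustrates how the two parallel detection paths could feed a multi-frame tracker. The Track fields, the "unknown" label for unclassified obstacles and the nearest-centroid association are simplifications assumed for this example, not the framework's actual data structures or association logic.

```python
# Minimal sketch: fold classified (DNN) and unclassified detections from one
# frame into a list of tracks, estimating each object's motion across frames.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Track:
    track_id: int
    position: np.ndarray                               # 3D centroid in the ego frame [m]
    velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))
    label: str = "unknown"                              # "unknown" = detected but not classified

def update_tracks(tracks, classified, unclassified, dt, gate=2.0):
    """classified:   [(centroid, label), ...] from the trained DNN path
       unclassified: [centroid, ...] from the classification-free path,
                     which catches obstacles the DNN was never trained on"""
    detections = list(classified) + [(c, "unknown") for c in unclassified]
    next_id = max((t.track_id for t in tracks), default=-1) + 1

    for centroid, label in detections:
        centroid = np.asarray(centroid, dtype=float)
        # Nearest-centroid association within a distance gate (a stand-in for
        # real multi-frame 3D association)
        nearest = min(tracks, key=lambda t: np.linalg.norm(t.position - centroid), default=None)
        if nearest is not None and np.linalg.norm(nearest.position - centroid) < gate:
            nearest.velocity = (centroid - nearest.position) / dt   # motion estimate
            nearest.position = centroid
            if label != "unknown":
                nearest.label = label                               # classification refines the track
        else:
            tracks.append(Track(next_id, centroid, label=label))
            next_id += 1
    return tracks
```

For instance, with dt = 0.1 s, two successive detections of the same obstacle displaced by 1 m would yield a 10 m/s velocity estimate on that track.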

Step 3

Creating the 3D environmental model

The resulting environmental model data is accessed via our software API and includes an occupancy grid and a list of parameters for each tracked object: localization, orientation, motion vector and more.
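
For illustration, the environmental model output could be organized along these lines. The field names, units and grid layout below are assumptions made for this sketch and do not reflect the actual LeddarVision API.

```python
# Minimal sketch of an environmental model: an occupancy grid plus a list of
# tracked objects with localization, orientation and motion vector.
from dataclasses import dataclass
import numpy as np

@dataclass
class TrackedObject:
    object_id: int
    position: np.ndarray       # localization in the ego frame (x, y, z) [m]
    orientation: float         # heading [rad]
    motion_vector: np.ndarray  # velocity (vx, vy, vz) [m/s]
    size: np.ndarray           # bounding box (length, width, height) [m]
    label: str                 # e.g. "vehicle", "pedestrian", "unknown"

@dataclass
class EnvironmentalModel:
    timestamp: float           # frame time [s]
    occupancy_grid: np.ndarray # (rows, cols), cell occupancy probability 0..1
    grid_resolution: float     # metres per cell
    objects: list              # list of TrackedObject

# Example: an empty 80 m x 80 m grid at 0.2 m resolution with one tracked vehicle
model = EnvironmentalModel(
    timestamp=0.0,
    occupancy_grid=np.zeros((400, 400)),
    grid_resolution=0.2,
    objects=[TrackedObject(0, np.array([20.0, -1.5, 0.0]), 0.02,
                           np.array([12.0, 0.0, 0.0]),
                           np.array([4.5, 1.8, 1.5]), "vehicle")],
)
```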

Featured Products

The newly released LeddarVision Front-View (LVF) family of automotive software products addresses the challenges that Tier 1 and Tier 2 suppliers and OEMs struggle with when developing Level 2/2+ ADAS applications, such as solving safety issues and finding scalable fusion and perception software that offers high performance at a low cost.

The LVF-E and LVF-H are two distinct, comprehensive AI-based low-level fusion and perception software stacks that optimally combine sensor modalities for Level 2/2+ ADAS applications, achieving a 5-star NCAP 2025/GSR 2022 rating.

  • LVF-E LeddarVision Front – Entry
    • For customers seeking to develop entry-level ADAS safety and highway assistance L2/L2+ applications
    • LVF-E is a comprehensive front-view fusion and perception stack for entry-level ADAS L2/L2+ highway assist and 5-star NCAP 2025/GSR 2022. LVF-E pushes the performance envelope, doubling the effective range of the sensors and enabling, for the first time, a solution with only a single 1- to 2-megapixel 120-degree front camera and two short-range front corner radars in a 1V2R configuration. Low-cost sensing, together with efficient implementation on Texas Instruments’ TDA4VM-Q1, achieves the lowest system cost for L2/L2+ entry-level ADAS. Production samples available!

      Read the Press Release

      LVF-E Product Page

  • LVF-H LeddarVision Front – High
    • For customers seeking to develop premium ADAS safety and highway assistance L2/L2+ applications
    • With the sensor configuration extended to 1V5R, based on a single 3-megapixel 120-degree camera, a single front medium-range radar and four short-range corner radars, the LVF-H stack extends perception support to highway assist applications, including 160 km/h adaptive cruise control, a 200-meter range and semi-automated lane change. It also enhances NCAP 2025 support for overtaking/reverse/dooring scenarios. Furthermore, with an efficient implementation on the Orin platform, low-cost sensing delivers an economical front-view L2/L2+ premium ADAS solution.

      Read the Press Release

      LVF-H Product Page

  • LVS-2+ LeddarVision Surround-View
    • For premium ADAS L2/L2+ highway assist and 5-star NCAP 2025/GSR 2022 applications
    • The newly launched LVS-2+ is a comprehensive fusion and perception software stack supporting premium surround-view L2/L2+ ADAS highway assistance and 5-star NCAP 2025/GSR 2022 safety applications. Based on the LeddarVision architecture, LVS-2+ efficiently extends the LVF front-view product family’s 1VxR sensor configuration to a 5V5R configuration, enhancing support for TJA and HWA applications and enabling automated lane changes, overtaking and extended-speed-range adaptive cruise control (ACC).

      Read the Press Release

      LVS-2+ Product Page

  • LVP-H LeddarVision Parking
    • For automated parking and parking assist
    • LVP-H is a comprehensive fusion and perception software stack supporting premium ADAS L2/L2+ automated parking and parking assist applications, including intelligent parking assist (IPA), remote parking assist (RPA) and maneuver assist (MA). LVP-H raises the probability of valid parking detection to over 95% with low false detections in challenging ODDs and environments, and provides superior dynamic and static object detection for enhanced safety, including support for advanced NHTSA IPA safety scenarios. LVP-H utilizes a 4V4R sensor configuration with four fish-eye cameras (190° FoV, 1.3 Mpx resolution) and four short-range corner radars, with support for extension with up to 12 ultrasonic sensors.

      Read the Press Release

      LVP-H Product Page

Related Information

Learn About the Fundamentals of Sensor Fusion and Perception

E-Book – This complimentary e-book explains the main features and components of the LeddarVision sensor fusion and perception solution for automotive and mobility ADAS and autonomous driving.

Read the E-Book

Sensor Fusion and Perception for NCAP

White Paper – This white paper introduces new car assessment programs (NCAP), the role they play in enabling road safety and the various NCAP programs across the world, with a specific focus on how the U.S. and Europe have embraced technology in their new car assessment programs and the growing importance of advanced driver assistance systems (ADAS).

Read the White Paper

Benefits of Low-Level Sensor Fusion for ADAS

White Paper – This document explains the principles, configurations and workings of sensor fusion, demonstrates a practical application of sensor fusion in ADAS through attitude estimation and concludes by presenting the commercially available solution on the market today.

Download the White Paper