LeddarVision Software = LeddarSense + VayaDrive

The LeddarVision software platform combines AI, computer vision, and deep neural networks with computational efficiency to scale up the performance of the AV sensors and hardware essential for planning the driving path.

Raw Data, Fused to Perfection

This state-of-the-art solution detects the various objects in a scene, including vehicles, pedestrians, bicycles, drivable road, obstacles, signs, lanes, lane lines, and more. LeddarVision also detects very small obstacles on the road with better detection rates and fewer false alarms than legacy “object fusion” solutions. Unclassified obstacles are detected as well, providing an additional layer of safety to the vehicle.

  • Comprehensive, open sensor fusion and perception platform
  • Combines data from LiDAR, radar, and camera
  • Provides LiDAR performance in Level 2 applications using only camera + radar
  • Improves Level 3+ LiDAR performance and cost
  • Most accurate environmental model
  • Deep tech raw data fusion with upsampling
  • Size, shape, and velocity information for every surrounding object
  • Remarkably accurate and reliable detection and classification

  • LeddarVision excels at detecting even unknown objects that are absent from the training dataset.
  • Our novel approach comes with inherent functional safety, detecting objects and dangers even during a sensor malfunction.

Safer Autonomous Driving With Better Detection

Precise Object Detection

No single type of sensor is enough to detect objects reliably. Cameras don’t perceive depth, while distance sensors such as LiDARs and radars offer very low resolution. LeddarTech has just the solution: raw data fusion with upsampling.
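As a rough sketch of the idea only, and assuming the distance measurements have already been projected into the camera frame, the Python fragment below fills every camera pixel with a depth value. The nearest-neighbor fill is a simple stand-in for the proprietary upsampling, not LeddarTech’s algorithm:

```python
import numpy as np
from scipy.interpolate import griddata

def upsample_sparse_depth(sparse_depth: np.ndarray) -> np.ndarray:
    """Densify a sparse depth image: pixels without a LiDAR/radar return
    are marked 0 and filled from the nearest measured neighbor.
    Assumes at least one valid return is present."""
    points = np.argwhere(sparse_depth > 0)          # (row, col) of real returns
    values = sparse_depth[points[:, 0], points[:, 1]]
    grid_r, grid_c = np.mgrid[0:sparse_depth.shape[0], 0:sparse_depth.shape[1]]
    # Nearest-neighbor fill as a placeholder; a production upsampler would be
    # edge-aware or learned so that depth edges follow object boundaries.
    return griddata(points, values, (grid_r, grid_c), method="nearest")
```

A low-resolution LiDAR covers only a small fraction of a camera’s pixels, so a densification step of this kind is what lets inexpensive distance sensors support per-pixel perception.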

  • Detection of small objects absent from training sets
  • Depth data assigned to every pixel in the camera picture
  • Accurate shape definition of vehicles, humans, and any other object

Best AV Perception at Low System Cost

This cost-effective, fusion-based solution with upsampling has low sensor and computational requirements. Low-cost, low-resolution depth sensors suffice to deliver highly reliable environmental perception.

Unmatched Environmental Perception Technology

Our raw data sensor fusion with upsampling generates the most complete 3D environmental model, including an occupancy grid, a list of classified static and dynamic objects with their velocities and motion vectors, object tracking, and more.

Creating the Most Precise Environmental Model

To create a highly accurate environmental model for safe and reliable autonomous driving, multiple processes must work in sync. Below is a high-level view of our environmental perception software framework. The process starts with the raw data received directly from the vehicle sensors via a software API and ends with complete environmental model data that is passed to the AV driving software module.
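To make the flow concrete, here is a hypothetical Python skeleton of the three stages described below. Every name and signature is an illustrative assumption, not LeddarTech’s actual API:

```python
import numpy as np

def fuse_and_upsample(camera_rgb, lidar_points, radar_targets):
    """Stage 1: calibrate, synchronize, and fuse raw data into a dense RGBD model."""
    h, w, _ = camera_rgb.shape
    depth = np.zeros((h, w, 1))                  # placeholder dense depth channel
    return np.concatenate([camera_rgb, depth], axis=2)

def detect_and_track(rgbd):
    """Stage 2: classified (DNN) and classification-free detection, plus tracking."""
    return []                                    # placeholder tracked-object list

def build_environmental_model(rgbd, tracked_objects):
    """Stage 3: package the occupancy grid and object list for the driving module."""
    return {"occupancy_grid": np.zeros((200, 200), dtype=np.uint8),
            "objects": tracked_objects}

def perception_pipeline(camera_rgb, lidar_points, radar_targets):
    rgbd = fuse_and_upsample(camera_rgb, lidar_points, radar_targets)
    return build_environmental_model(rgbd, detect_and_track(rgbd))
```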


Raw data fusion and upsampling algorithms construct a high-definition 3D RGBD model

The calibration, unrolling, and matching module receives the raw sensor data, then synchronizes and fuses it into a unified 3D model. Upsampling increases the effective resolution of the distance sensors, resulting in a dense RGBD model in which each pixel contains both color and depth information. Localization and motion tracking help determine the vehicle’s position and velocity.
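For illustration, here is a minimal sketch of the kind of geometric alignment such a module performs, assuming a simple pinhole camera model and known calibration (the actual calibration, unrolling, and matching algorithms are proprietary):

```python
import numpy as np

def project_lidar_to_camera(points_xyz, K, T_cam_lidar, image_shape):
    """Build a sparse depth image aligned with the camera pixels by projecting
    LiDAR points through an assumed pinhole model.
    K: 3x3 camera intrinsics; T_cam_lidar: 4x4 extrinsic calibration."""
    h, w = image_shape
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]   # LiDAR -> camera coordinates
    pts_cam = pts_cam[pts_cam[:, 2] > 0]         # keep points in front of camera
    uvw = (K @ pts_cam.T).T                      # perspective projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    sparse_depth = np.zeros((h, w))
    sparse_depth[v[keep], u[keep]] = pts_cam[keep, 2]
    return sparse_depth
```

The resulting sparse depth image is what an upsampling step, like the sketch shown earlier, densifies into the full RGBD model.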

Applying context-aware algorithms to the 3D RGBD model to achieve perception

Frame-based object detection and segmentation of the road scene covers obstacles, vehicles, pedestrians, bicycles, motorcycles, lane borders, available free space, and more. Detection by classification is performed with DNN-based algorithms that require training. In parallel, detection without classification is performed by a separate set of algorithms, enabling the detection of unexpected obstacles. Multiframe object tracking includes 3D modeling, motion tracking, and sizing of each object.
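A toy sketch of how the two detection paths could be combined, with an assumed height-above-ground map standing in for the classification-free detector (threshold and field names are illustrative, not LeddarTech’s implementation):

```python
import numpy as np

def detect_without_classification(height_above_ground, min_height_m=0.15):
    """Classification-free path: flag anything protruding above the estimated
    road surface, so obstacles absent from the training set are still caught."""
    return height_above_ground > min_height_m

def merge_detections(dnn_detections, obstacle_mask):
    """Union of both paths: DNN-classified objects plus generic obstacles."""
    merged = list(dnn_detections)
    if obstacle_mask.any():
        merged.append({"class": "unclassified_obstacle", "mask": obstacle_mask})
    return merged
```

Running both paths in parallel is what gives the redundancy described above: an object the DNN has never seen can still trigger the geometric detector.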

Creation of the 3D environmental model

The resulting environmental model data is accessed via our software API and includes an occupancy grid and a list of parameters for every tracked object: localization, orientation, motion vector, and more.
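As a hypothetical illustration of what such output could look like, with field names inferred from the parameters listed above (not LeddarTech’s actual data structures):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TrackedObject:
    # Field names are assumptions based on the parameters listed above.
    object_id: int
    object_class: str             # e.g. "vehicle", "pedestrian", "unclassified"
    position_m: tuple             # localization in vehicle coordinates (x, y, z)
    orientation_rad: float        # heading of the object
    velocity_mps: tuple           # motion vector (vx, vy)
    size_m: tuple                 # bounding dimensions (length, width, height)

@dataclass
class EnvironmentalModel:
    # The payload an AV driving module might read from the perception API.
    occupancy_grid: np.ndarray = field(
        default_factory=lambda: np.zeros((200, 200), dtype=np.uint8))
    tracked_objects: list = field(default_factory=list)
```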

Related Content

VayaVision Is Now a LeddarTech Company

LeddarTech’s open platform, based on its full-waveform digital technology combined with VayaVision raw data sensor fusion and perception software, will deliver the most accurate environmental model, enabling the volume deployment of cost-efficient ADAS and AD applications.

Read the press release

Environmental Perception for Safer and More Reliable Autonomous Driving

VIDEO – Our leading environmental perception solution provides vehicles with crucial information on the dynamically changing driving environment for safer and more reliable autonomous driving. The software solution encompasses state-of-the-art raw data fusion with upsampling, AI, and computer vision.

Watch on YouTube

Environmental Perception in Rainy Conditions

VIDEO – Detection and sensing performance in adverse weather conditions has always been a challenge for ADAS and AD applications. See our software perform in the rain in this short but telling video teaser!

Watch on YouTube