The fractal complexity problem
A simple flow chart for ADAS order of operations starts with sensor data acquisition, followed by an initial object detection phase called perception, enabled by AI and applied to raw sensor data. Then comes fusion of the aggregate perception data, which is run through another AI model that validates the environment and plans for vehicle action. It concludes with the execution of a vehicle command.
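In code form, that loop might look something like the sketch below. The object and method names are illustrative placeholders, not any automaker's actual software.

```python
# Minimal sketch of the ADAS order of operations described above.
# All class and method names are illustrative placeholders.

def run_adas_cycle(sensors, perception_models, fusion_planner, vehicle):
    # 1. Sensor data acquisition
    raw_frames = {name: sensor.read() for name, sensor in sensors.items()}

    # 2. Perception: an AI model per sensor detects objects in the raw data
    detections = {name: perception_models[name].detect(frame)
                  for name, frame in raw_frames.items()}

    # 3. Fusion + planning: a second model validates the environment
    #    and plans the vehicle's next action
    plan = fusion_planner.fuse_and_plan(detections)

    # 4. Execution of a vehicle command (steer, brake, accelerate)
    vehicle.execute(plan.to_command())
```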
The vital step in that decision-making process is sensor fusion, combining perception data into an environment of accurately identified objects that the vehicle can safely navigate. ADAS is only as good as its sensor fusion performance, which boils down to the capability of the artificial intelligence models used to identify objects and deal with them. With AI still in its infancy, the science of sensor fusion is also in its early stages.
“Sketching out the pieces of an early sensor fusion model is a known process,” a Rivian spokesperson told SAE Media. “The challenge is that each of those pieces is fractally complex and requires high reliability and scale. Continuing to chase down those subsequent nine-tenths of reliability is orders of magnitude more difficult.”
A single camera offers various tunable parameters that hint at this fractal complexity: image resolution in megapixels, focal length, field of view, frame rate, and dynamic range. Each decision alters the cost/benefit matrix of data transfer and processing requirements. And each parameter must be tuned for the various cameras around the car; adjusting camera height, for example, changes the tuning choices. The sum of all these choices must then be tuned to work with an AI perception model—for instance, how many megapixels and how many frames per second in a forward-facing camera are ideal for a given model to accurately identify a person in an image, at what speed, and at what distance?
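A back-of-the-envelope calculation shows how quickly those parameter choices compound; the resolution, frame rate, and bit depth below are assumed values for illustration, not the specs of any production camera.

```python
# Illustrative camera bandwidth estimate (all values are assumptions)
megapixels = 8         # image resolution
frame_rate = 30        # frames per second
bits_per_pixel = 12    # raw sensor bit depth

bits_per_second = megapixels * 1e6 * bits_per_pixel * frame_rate
print(f"{bits_per_second / 1e9:.1f} Gbit/s per camera")  # ~2.9 Gbit/s

# Doubling resolution or frame rate doubles what the perception model
# must ingest -- and that multiplies across every camera on the vehicle.
```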
Or take the considerations necessary for lidar, such as the Luminar unit Volvo uses on the new EX90. Lidar output is called a point cloud, a Seurat-like picture of an environment made up of points. Matt Weed, Luminar’s senior director of product strategy, told SAE Media that a sensor fusion design needs to decide on the kind of lidar output. A three-dimensional point cloud requires more compute than a two-dimensional array that represents three dimensions. With the latter, designers need to decide how to represent the 3D scene in 2D. “The common one right now is a top-down projection, like a bird’s-eye view,” Weed said. “You have things called pillars, where you have attribute height and gap and under-drivability, but not in a highly resolved way.” The upside to this is “it’s not blowing up your [processing] computer like crazy, like a full three-dimensional model, but it’s still going to be better than what you get in a camera.” These calibrations of sensor data against AI identification must be made for every type of sensor on a vehicle.
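A rough sketch of the pillar idea Weed describes, assuming a generic (N, 3) point cloud and arbitrary grid dimensions: the 3D points are flattened into a top-down grid whose cells keep only coarse height attributes instead of every point.

```python
import numpy as np

def pillarize(points, cell_size=0.5, x_range=(0.0, 100.0), y_range=(-50.0, 50.0)):
    """Collapse an (N, 3) lidar point cloud into a bird's-eye-view grid.

    Each cell ("pillar") keeps only the highest and lowest point height --
    a coarse stand-in for attributes like object height, gap, and
    under-drivability, at a fraction of the compute of a full 3D model.
    """
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)
    top = np.full((nx, ny), -np.inf)
    bottom = np.full((nx, ny), np.inf)

    ix = ((points[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell_size).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)

    for x, y, z in zip(ix[keep], iy[keep], points[keep, 2]):
        top[x, y] = max(top[x, y], z)
        bottom[x, y] = min(bottom[x, y], z)

    return top, bottom  # two 2D arrays instead of a full 3D point cloud
```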
Fusion: late or early?
At the other end of fractal complexity lies the question of where sensor fusion occurs in ADAS operations. The two broad approaches are late fusion, also known as object-level fusion, and early fusion, also known as raw data fusion or low-level fusion. The flow chart described above is considered late fusion, the approach automakers initially settled on. This technique puts perception work on the sensor itself, filtering raw data at the sensor to identify objects in the environment, then sending those object-level perceptions to the central compute for fusion. There, the AI model considers the fused perceptions for its world-building, validation, and vehicle reaction planning. “This approach is simpler because you push much of the hard work to third-party components,” Rivian said. “But [it’s] limited because the outcome is mostly as good as your least-capable sensor… [and] objects must be detected by more than one sensor to be considered valid.” If the camera’s perception data indicates an object ahead that the radar hasn’t registered, the final AI model has no raw sensor data it could use to resolve the discrepancy itself, with potentially deadly consequences.
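The limitation Rivian describes is visible even in a toy version of object-level fusion: each feed arrives already filtered into detections, and the fusion step trusts only objects confirmed by more than one sensor. The coordinates and threshold below are invented for illustration.

```python
# Toy object-level (late) fusion. Inputs are per-sensor detections, not raw
# data; all values are invented for illustration.

camera_objects = [{"type": "pedestrian", "x": 42.0, "y": 1.5}]
radar_objects = [{"type": "vehicle", "x": 65.0, "y": 0.2}]

def cross_validated(objects_a, objects_b, max_gap=2.0):
    """Keep only objects that both sensor feeds place in roughly the same spot."""
    confirmed = []
    for a in objects_a:
        for b in objects_b:
            if abs(a["x"] - b["x"]) < max_gap and abs(a["y"] - b["y"]) < max_gap:
                confirmed.append(a)
    return confirmed

# The pedestrian seen only by the camera never reaches the planner,
# and the planner has no raw data with which to second-guess that.
print(cross_validated(camera_objects, radar_objects))  # -> []
```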
Today, more companies are interested in early fusion, including Rivian, Luminar and ADAS software developer LeddarTech. LeddarTech chief technology officer Pierre Olivier told SAE Media that early fusion allows ADAS to use lower-cost sensors without high-performance compute power. Because “every step of processing data is a filter,” Olivier said, utilizing the full spectrum of raw data maximizes output quality. Rivian said this provides “richer and more accurate representations.” Luminar’s Weed said this method is standard for lidar units, because every lidar maker is “trying to keep power budget down in the sensor, which has to live at the extremity of the vehicle and deal with heat.”
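Schematically, an early-fusion front end hands raw (or lightly pre-processed) sensor data to a single jointly trained model rather than exchanging object lists. The tensor shapes below and the perception model itself are placeholders, not any supplier's architecture.

```python
import numpy as np

# Schematic early (raw-data) fusion: raw sensor frames, not object lists,
# are stacked into one input for a single perception model.
camera_frame = np.random.rand(720, 1280, 3)  # raw RGB image
lidar_depth = np.random.rand(720, 1280, 1)   # point cloud projected onto the image grid
radar_map = np.random.rand(720, 1280, 1)     # radar returns rasterized to the same grid

fused_input = np.concatenate([camera_frame, lidar_depth, radar_map], axis=-1)

# One model sees everything every sensor captured, rather than only
# what each sensor's own on-board filter chose to pass along:
# detections, predicted_paths = perception_model(fused_input)
```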
“The [early fusion] perception model can be jointly trained with behavioral agent prediction, where we can not only identify and classify objects but predict paths and likely actions of other vehicles and road users,” Rivian said. “Modern AI is incredibly powerful, but it is a tool that requires constant early decision-making, with a view towards the future.”
Raw data fusion means far more data to process and requires training AI perception models to filter out the kinds of noise each sensor type tends to introduce. LeddarTech’s open-platform software toolkit for automakers can be integrated into OEM ADAS systems; the company claims it offers high decision reliability on lean compute packages thanks to the quality of its perception model.
A new approach called very early fusion is now under consideration. This strategy “incorporates tuning sensors to work together to reduce data volume almost at the sensor,” Weed said. “You’re actually informing the way the sensors are capturing the environment based on each other.” Camera output accounts for the bulk of sensor data, but much of it is irrelevant. Lidar point cloud data could be used to strip away unnecessary image data before processing. “You don’t need millions of points of the sky or the road right in front of you based on the lidar data, where you see that there is free space,” Weed said.
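A toy version of that idea: use the lidar's free-space estimate to discard image regions before the heavy perception work runs. The image size, the free-space mask, and the resulting numbers are invented for illustration.

```python
import numpy as np

# Toy "very early fusion": lidar free-space decides which camera pixels
# are worth processing at all. Shapes and the mask are illustrative.
image = np.random.rand(720, 1280, 3)            # raw camera frame
free_space = np.zeros((720, 1280), dtype=bool)  # True where lidar reports empty space
free_space[:200, :] = True                      # e.g. the sky band flagged as free

relevant_pixels = image[~free_space]            # drop pixels the lidar says are empty
kept = relevant_pixels.shape[0] / free_space.size
print(f"Pixels kept for perception: {kept:.0%}")  # ~72% in this toy mask
```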
Improved compute power
The compute module is another area rife with development potential. The explosion in vehicle software has invited major chip makers into vehicle design as OEMs reduce the number of ECUs in a car in favor of powerful central brains that run all vehicle functions. Rivian and Volvo use Nvidia’s Drive Orin chip, and Nvidia offers its own autonomy platform, Hyperion, aimed above SAE Level 2, which uses the same Luminar Iris lidar found on Volvo’s EX90. Intel, already a force in in-car chips, bought ADAS maker Mobileye — which Volkswagen uses — in 2017. And Qualcomm has lately begun doing more to promote its Snapdragon Ride Platform.
Computing demands can quickly escalate when dealing with heaps of real-time raw data, predictive AI, neural networks, and large language models. Even before considering the rigors of a vehicle’s duty cycle, vehicles impose constraints on space, energy draw, and heat. LeddarTech’s Olivier mentioned fuel economy mandates as another hot button, noting that energy draw “in a high-power ECU or GPU, or even some of the robotaxis, could be a few kilowatts.” An ADAS compute package making a constant two-kilowatt draw on the engine equals nearly three horsepower, an additional drain automakers want to avoid, especially with the resurgence of hybrid and plug-in hybrid powertrains.
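The arithmetic behind that horsepower figure is straightforward.

```python
# Converting a constant 2 kW electrical draw into engine horsepower
draw_kw = 2.0
hp = draw_kw * 1000 / 745.7   # 1 hp ≈ 745.7 W
print(f"{hp:.1f} hp")         # ~2.7 hp of permanent parasitic load
```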
Established chip makers already offer powerful hardware, but the need to extract efficiencies at every step of the ADAS chain has compelled new investments in dedicated hardware, including system-on-a-chip (SOC) and AI accelerator designs. Luminar designs the silicon and non-silicon semiconductors for its lidar units. LeddarTech counts Texas Instruments among its partners on chip designs tailored to LeddarTech’s AI models. And Rivian developed the SOC for its newest R1 models in-house, an achievement that shrank the vehicle’s total ECU count from 17 to seven while increasing processing speed from 25 trillion operations per second (TOPS) to 250 TOPS.
“Automotive is throwing way bigger computers than I ever thought they would at consumers, but they’re still struggling,” Weed said.
Finally, ADAS advancements must conform to price pressures. There’s a reason lidar units are, for the moment, limited to luxury brands like Mercedes and Lexus and to Volvo’s flagship battery-electric SUV. “Tesla’s sort of shown that the upper bound for what customers are willing to pay is about $10,000,” said LeddarTech’s Olivier. “I think that there was a study a few years ago which said that most people were willing to pay about $3,000 for some form of autonomous functionality for [a] privately-owned vehicle. So we’re working on delivering that use case.”
When it comes to SAE Level 5 fully automated vehicles, the transportation dream ADAS technology is building towards, Olivier said don’t hold your breath. “[They’re] probably ten to 15 years out.”