Cutting Costs, Boosting Safety: The Game-Changing Impact of LVS-2+ Surround Perception on ADAS
In the LeddarTech Lab podcast, Aviv Goner, Senior Program Manager at LeddarTech, discusses the importance and challenges of surround-view perception in ADAS. He highlights the current market's limitations, mainly the high costs restricting this technology to luxury vehicles. LeddarTech's approach, using a 5V5R configuration of five cameras and five radars, aims to make this technology accessible for mass-market vehicles. This approach not only reduces costs but also meets the high safety and performance standards crucial for ADAS functionality. Goner emphasizes the system's ability to handle complex scenarios and reduce false alarms, enhancing safety and reliability. Positive industry feedback and real-world demonstrations showcase the system's effectiveness and its potential to reshape market expectations.
Podcast Transcript
Hello, and welcome to the LeddarTech Lab podcast, brought to you by LeddarTech, where we talk all things sensor fusion, ADAS, AD, perception, and mobility in the automotive industry. And I'm excited about today's conversation, because we're talking about cutting costs and boosting safety: the game-changing impact of LVS-2+ surround perception on ADAS. And I have a great guest for that conversation: Aviv Goner, Senior Program Manager at LeddarTech. Aviv, thank you so much for being here.
Aviv: Thank you for having me.
Michelle: I’m excited because we have some really interesting things to talk about. So I want to start here. Let’s start off by getting a little framework about surround view perception. What is it? Why is it so important?
Aviv: Well, surround view refers to a 360-degree image of the vehicle's surroundings, which is essentially the eyes of ADAS. It's everything that happens mostly to the front, but also to the sides and to the back, so the ADAS function knows about potential risks and where to go: crossing vehicles, vehicles overtaking from the rear, and so on. Everything that goes on around the vehicle. That's surround view.
Michelle: So this is a big question. You were talking about it. It sounds amazing. We’ll get into more details there, but is it in all vehicles today? And if not, where do you see things going in the future?
Aviv: Well, definitely not. When it comes to passenger vehicles, there are surround functions and surround-view sensor suites in the luxury and premium vehicle sectors, but for mass-production passenger vehicles this is actually rare, given the high cost of the hardware configuration required to cover the entire 360-degree sphere around the vehicle. And again, as we said previously, this is the paradigm we're trying to break: that you actually need a lot of equipment on the vehicle to cover the entire 360 degrees. This is where we are striving to have a cost-effective solution, based on 5V5R, five cameras and five radars, that covers the entire 360-degree sphere and enables the reliable, high-performance perception that advanced surround ADAS functions require. That is why LVS was developed to begin with: to make surround-view ADAS functions accessible and available not just to the premium luxury market, but to the actual mass passenger market.
Michelle: You alluded to cost, and of course cost and performance are paramount and are key challenges for ADAS developers at automotive OEMs and Tier 1s. So how does LeddarVision surround view compare against other solutions in the market right now?
Aviv: Yeah, so first of all, we start with the number of sensors and the nature of the sensors, and that all reflects on cost and complexity for the OEMs and the Tier 1s. As we said, we base our solution on a 5V5R configuration, which is only five cameras, four two-to-three-megapixel fisheye cameras surrounding the vehicle and one non-fisheye camera to the front, plus five radars covering the four corners and the front. Today, most of the solutions proposed in the market use between eight and twelve cameras. Some of them have non-fisheye cameras on the sides, rotated a little toward the rear, in order to capture semi-occluded vehicles in adjacent lanes, for example for automated lane changes. And this is something we want to reduce, to eliminate. So I think the 5V5R offering is quite distinct from what the industry is offering today. Of course, we're using only radars and cameras, which is another topic; we don't use LiDAR, which is a higher-cost sensor. And 5V5R is really a lean sensor suite that can do several things.
One is, again, compliance with safety regulations, NCAP 2025 and GSR. That requires long enough object-detection distances for the NCAP scenarios: to the sides and door side, to the rear for blind-spot monitoring, and of course to the front for cut-in or any collision-relevant situations at high velocities.
Second, we go beyond 200 meters for object detection, which is a must in order to support an increased ego-vehicle velocity: allowed velocities of 160 kilometers per hour rather than maybe 100 or 130 in lower-cost sensor-set configurations. That also reflects on processing power, because if you have a lower density of cameras, mainly, then the requirement on the computation platform drops, and you can allow yourself a reasonable, cost-effective processing platform rather than a huge, power-consuming, high-cost platform because you have 10 or 11 cameras. I think that's a major differentiation between us and the competitors.
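To make the compute argument concrete, here is a rough back-of-envelope sketch. The camera counts and resolutions follow the figures mentioned in the conversation; the shared frame rate and the 3 MP resolution assumed for the denser suite are illustrative assumptions, not published specifications.

```python
# Back-of-envelope pixel-throughput comparison between a lean 5-camera
# suite and a denser 11-camera suite. Resolutions and frame rate are
# illustrative assumptions, not published LeddarTech specifications.

FPS = 30  # assumed common frame rate for all cameras

def pixel_rate(cameras_mp):
    """Total pixels per second the perception stack must ingest."""
    return sum(mp * 1e6 for mp in cameras_mp) * FPS

# 5V5R-style suite: four 2 MP fisheye cameras + one 3 MP front camera.
lean_suite = [2, 2, 2, 2, 3]

# Denser suite often seen in the market: eleven cameras, assumed 3 MP each.
dense_suite = [3] * 11

lean = pixel_rate(lean_suite)
dense = pixel_rate(dense_suite)

print(f"Lean suite : {lean / 1e9:.2f} Gpixel/s")
print(f"Dense suite: {dense / 1e9:.2f} Gpixel/s")
print(f"Dense/lean ratio: {dense / lean:.1f}x")
```

Even under these simple assumptions, the denser suite pushes roughly three times as many pixels per second through the perception stack, which is the kind of gap that separates a cost-effective processing platform from a high-cost, power-hungry one.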
And the key thing is how you actually use those camera images, those pixels you get out of the sensors. Low-level fusion, which I think was discussed previously, is about the efficient use of even noisy, jittery, partial information from the sensors to come up with a combined, fused map, and then an AI mechanism that is reliable in the outputs it gives. If you do that with the required modeling accuracy, the required ranges, and a false-positive rate low enough even for such high vehicle speeds, then you can comply with regulation, and you have a solid, user-oriented ADAS solution with really low sensor costs.
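As a minimal sketch of the low-level-fusion idea described here: instead of fusing per-sensor object lists, raw-ish detections from every sensor are accumulated into one shared evidence map, and decisions are taken on the fused map. The grid size, sensor weights, and threshold below are illustrative assumptions, not LeddarTech's actual algorithm.

```python
import numpy as np

# Minimal low-level-fusion sketch: accumulate raw-ish detections from all
# sensors into one shared bird's-eye-view evidence grid, then decide on the
# fused map. Grid size, weights, and threshold are illustrative assumptions.

GRID = (100, 100)   # 2D cells around the ego vehicle
THRESHOLD = 1.0     # evidence needed before declaring a cell occupied

def fuse(detections_per_sensor, weights):
    """Sum weighted evidence from all sensors into one grid."""
    grid = np.zeros(GRID)
    for sensor, detections in detections_per_sensor.items():
        w = weights[sensor]
        for (x, y, confidence) in detections:
            grid[x, y] += w * confidence  # even weak, noisy hits contribute
    return grid

# Noisy, partial inputs: the camera barely sees what the radar sees.
inputs = {
    "front_camera": [(40, 50, 0.4)],                 # faint, partly occluded
    "front_radar":  [(40, 50, 0.7), (10, 5, 0.2)],   # real target + clutter
}
weights = {"front_camera": 1.0, "front_radar": 1.0}

fused = fuse(inputs, weights)
occupied = np.argwhere(fused >= THRESHOLD)
print("Occupied cells:", occupied.tolist())  # [[40, 50]]
```

Note how the faint camera hit and the noisy radar hit, neither convincing alone, combine to cross the threshold, while the isolated radar clutter does not. That is the intuition behind exploiting even partial sensor information while keeping false positives down.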
Michelle: And sometimes less is more here; we're talking about cutting costs by eliminating the need for so many cameras. But of course, the level of performance needs to be extremely high, because, once again, safety is paramount. Without that, we have nothing. So let's talk about performance, because it's so important. Dive a little deeper, if you can, into the performance levels here.
Aviv: Yeah, so the goal, as you said, is to have not the minimal performance, but everything required for a solid, safe solution at high vehicle velocities. That starts with object-detection ranges, and not only for the closest in-path vehicle but, in some cases, for an occluded or partially occluded vehicle, which could be a potential collision-relevant event that would otherwise drive an undesired emergency braking or some other safety maneuver we don't want, both for safety reasons and for user comfort. So you have to accurately model these objects and reliably detect them long enough before you actually get there, to see those potential cut-in, cut-out, and overtaking scenarios. So 200 meters and beyond for object detection, done reliably, with really good lane assignment and modeling accuracy, is one thing that I think is unique. Second is rear object detection for high-speed overtaking or cut-in, cut-out scenarios: if you need automated lane change, you have to know what's going on behind you at a long enough range so you can avoid lane changes that would not be safe. And the third thing is VRUs: the ability to see vulnerable road users within the danger zone who might then be crossing or entering the road, because any NCAP scenario will produce safety-related, risky situations that you need to avoid. So that means VRU detection really all around the car. And also road modeling and lane modeling, again at a long enough range, which allows you solid, safe lane keep assist and ELK, which covers overtaking vehicles on a rural road where there's no barrier between the opposing directions of traffic. All of this lets you have a decent ACC, collision avoidance, and collision warnings, the ADAS functionalities required for L2+ surround functionality.
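A rough time-budget calculation shows why detection range has to grow with allowed ego speed. The speeds and the 200-meter range follow the conversation; the 150-meter comparison point is an illustrative assumption.

```python
# Why detection range scales with allowed ego speed: a rough time-budget
# calculation. The speeds and 200 m range follow the conversation; the
# 150 m comparison point is an illustrative assumption.

def time_budget(range_m, speed_kmh):
    """Seconds between first detection and reaching a stationary object."""
    speed_ms = speed_kmh / 3.6
    return range_m / speed_ms

for speed in (100, 130, 160):
    for rng in (150, 200):
        print(f"{speed:3d} km/h, {rng} m range -> "
              f"{time_budget(rng, speed):.1f} s to react")
```

At 160 km/h, a 200-meter range leaves about 4.5 seconds between first detection and reaching a stationary object, versus more than 7 seconds at 100 km/h, which is why shorter-range sensor sets cap the allowed velocity.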
Michelle: Let me ask as a follow-up to that about corner cases and false alarms. Any insight there?
Aviv: Yeah, so that's really a good question, because even if they're not in the core of the performance envelope, these corner cases happen, and you have to cope with them to have a solid, safe solution, right? For example, getting in and out of tunnels, or very sudden insertions of VRUs onto the road that you have to see early enough. On false alarms: object-level fusion solutions that are based on radars have relatively many false alarms produced by multi-reflections from guardrails and other vehicles, especially in dense traffic situations. Low-level fusion can reduce those false alarms, and then you can actually be compliant with the required false alarm rate, which sometimes goes to 10⁻⁵ or 10⁻⁶ false alarms per hour, similar to what you would see with a human driver. There are also corner cases that have to do with limited or degraded sensor inputs, such as dirty lenses or even a failure of a specific sensor within the suite. With object-level fusion, if there's a significant degradation in the integrity of one sensor's signal, that will often create a real blind sector around the vehicle, which is not safe, and the ADAS function would just have to disengage and give control back to the driver. Low-level fusion allows us to keep functionality around the vehicle, even at degraded performance, because we are not fusing objects to objects. We take the entire set of information from the sensors, even if it's degraded, produce an enriched map, and take decisions on that map. So even if we have a blurred or dirty area in one of the cameras, just as an example, that's not going to be a blind sector. These corner cases don't happen very often, but once they do, I think it's the more robust algorithmic approach that allows us to keep this minimum safe functionality.
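As a small sketch of that degradation behavior, under illustrative assumptions about sensors, sectors, and confidences: with object-level fusion a failed sensor would leave its sector blind, while low-level fusion keeps whatever evidence the remaining sensors still contribute.

```python
# Sketch of the degraded-sensor contrast: with object-level fusion a failed
# sensor leaves a blind sector; with low-level fusion the remaining sensors'
# raw evidence still covers that sector, just more weakly. Sensor names,
# sectors, and confidences are illustrative assumptions.

sector_evidence = {
    # sector -> {sensor: confidence of its raw measurements in that sector}
    "front_left": {"left_fisheye": 0.6, "front_left_radar": 0.5},
}

def low_level_coverage(evidence, failed):
    """Remaining fused evidence per sector after dropping failed sensors."""
    return {
        sector: sum(c for s, c in per_sensor.items() if s not in failed)
        for sector, per_sensor in evidence.items()
    }

print(low_level_coverage(sector_evidence, failed=set()))
print(low_level_coverage(sector_evidence, failed={"left_fisheye"}))
# The sector keeps non-zero (degraded) coverage from the radar instead of
# going blind, so the ADAS function can stay engaged at reduced performance.
```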
Michelle: So the big question, the proof is in the pudding, so to speak. What has the response been from customers? And then maybe more importantly, how can customers see LVS-2+ in action?
Aviv: The OEMs and Tier 1s we are engaged with, when we actually show them the functionality and the performance, get really excited by the fact that this can be achieved with five cameras, two megapixel, or three megapixel to the front. There was a sort of mindset that this isn't achievable, that you have to have those 10 or 12 cameras, sometimes non-fisheye, to cover the back, and that, again, draws higher computation power, higher power consumption, higher costs. So once they see it, we get a lot of good feedback, and they acknowledge that this is complete enough, mature enough, and high performance. And again, proof is in the pudding, so you have to actually show it. We have a small fleet of vehicles with LVS installed and running on them in real time. We also work very hard to quantify, with very clear metrics and KPIs, what this LVS configuration and solution is doing in corner cases, along with statistically valid calculations and measurements of recall, false positives, and modeling accuracy, which is not something that can always be seen in a live demonstration, right?
We were talking about corner cases, so we have to see whether these corner cases are really addressed in a high-performance manner, the way we define it. It's all happening; it's there. We showed it at CES, the LVS-2+ with 5V5R, running in real life on roads in Vegas, and the other vehicles we have demonstrate it as well. So we get a lot of traction from customers.
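For a sense of the kind of KPI bookkeeping mentioned here, a minimal sketch of computing recall and false positives per hour from labeled drive logs; all counts below are made-up illustrative numbers, not measured results.

```python
# Minimal KPI sketch: recall and false positives per hour from labeled
# detection logs. All counts are made-up illustrative numbers.

true_objects = 10_000    # ground-truth objects in the evaluation drives
detected_true = 9_950    # of those, how many the system detected
false_positives = 2      # detections with no matching ground truth
drive_hours = 500        # total logged driving time

recall = detected_true / true_objects
fp_per_hour = false_positives / drive_hours

print(f"Recall: {recall:.4f}")
print(f"False positives per hour: {fp_per_hour:.3g}")
```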
Michelle: Aviv Goner, Senior Program Manager at LeddarTech. A pleasure talking with you, Aviv, and exciting to hear that this is actually happening in reality now and that you're getting the word out. We talked about the proof being in the pudding, and it's the third time we're bringing it up, but when people see it, they can believe it. And if you go to the website, LeddarTech.com, you can watch videos to see more about what Aviv has been explaining.
Thank you so much for being here, Aviv.
Aviv: Thanks, Michelle. My pleasure.
Michelle: And I want to thank all of you for listening and tuning in to the LeddarTech Lab podcast. If you enjoyed this engaging conversation and would like to hear more like it, you can subscribe to the LeddarTech Lab podcast on your favorite podcast player. And of course, visit LeddarTech.com to learn more about what we've talked about and so much more. Thanks again for joining us. I'm your host, Michelle Dawn Mooney, and we hope to connect with you on another podcast soon.