Cars are getting better at seeing their surroundings—but can they understand them?
That’s one question techies are asking at this year’s Elevate Festival in Toronto. It’s a problem that innovators want to solve as driver-assistance technology and even driverless cars become increasingly common on North American streets.
“They have to understand where they actually are. Are they on a driveway right now? On the freeway? Are they in the middle of Toronto in rush hour?” said Abigail Coholic, senior director of channel partnerships at Ecopia AI, a Toronto-based geospatial AI company.
Consider this admittedly simplistic analogy: My dog, Bolt, would give a sort of bewildered half bark whenever he heard a doorbell ring on a TV show. What he heard (the doorbell) didn’t match what he saw (nothing), so he braced for the worst-case scenario, but his confusion meant he didn’t act decisively. He didn’t understand the context of the doorbell in the TV storyline.
Driver-assistance and autonomous technology are much more complex, but scientists want to train vehicles to truly understand their surroundings. For instance, a camera might see a high-definition image of a vehicle on a billboard, while a radar or LiDAR doesn’t sense any vehicles. Or vice versa: rain or darkness could obscure a camera’s view, while another sensor picks up an obstacle ahead.
“We use our eyes; we understand our environment,” said Coholic. “When it comes to autonomous vehicles, they’re relying on really strong input data.”
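For the technically inclined, here’s a toy sketch of why sensors processed in isolation can leave a vehicle hesitating. All names and thresholds are hypothetical, not any company’s actual pipeline; it simply shows a planner reconciling per-sensor detections after the fact, as in the billboard scenario above.

```python
# Toy illustration: each sensor reports its own detections, and a naive
# "late fusion" rule has to reconcile disagreements after the fact.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # which sensor produced this detection
    label: str         # e.g. "vehicle"
    confidence: float  # 0.0 to 1.0

def reconcile(camera_dets: list[Detection], lidar_dets: list[Detection]) -> str:
    """Only act decisively when both sensors agree on an obstacle."""
    camera_sees = any(d.label == "vehicle" and d.confidence > 0.5 for d in camera_dets)
    lidar_sees = any(d.label == "vehicle" and d.confidence > 0.5 for d in lidar_dets)

    if camera_sees and lidar_sees:
        return "brake"      # both sensors agree: a real obstacle
    if camera_sees or lidar_sees:
        return "hesitate"   # sensors disagree, e.g. a car on a billboard
    return "proceed"        # neither sensor sees anything

# The billboard scenario: the camera "sees" a vehicle, the LiDAR does not.
camera = [Detection("camera", "vehicle", 0.9)]  # crisp image of a billboard car
lidar = []                                      # no 3D return from the billboard

print(reconcile(camera, lidar))  # -> "hesitate"
```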
- Seeing the big picture: Some existing AVs might hesitate when they get contradictory inputs. Quebec-based LeddarTech fuses data from different sensors into a single 3D model, instead of processing each sensor’s perceptions separately. Reza Rashidi Far, Principal LiDAR & AI System Product Engineer, presented at this week’s Canadian Manufacturing Technology Show.
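For contrast with the sketch above, here is a rough illustration of the general idea of fusing raw sensor data into one 3D representation before any decisions are made. It is not LeddarTech’s actual software; the projection matrix and data are made up for demonstration.

```python
# Rough sketch of "early" sensor fusion: attach camera appearance to LiDAR
# geometry so a single detector can reason over one unified 3D model.

import numpy as np

def fuse_point_cloud_with_camera(points_3d: np.ndarray,
                                 camera_image: np.ndarray,
                                 projection: np.ndarray) -> np.ndarray:
    """Return one fused representation per point: (x, y, z, camera intensity)."""
    # Project 3D points into the camera's image plane (homogeneous coordinates).
    homogeneous = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])
    pixels = homogeneous @ projection.T         # shape (N, 3)
    pixels = pixels[:, :2] / pixels[:, 2:3]     # perspective divide -> (u, v)

    # Sample the camera image at each projected location (clamped to bounds).
    h, w = camera_image.shape
    u = np.clip(pixels[:, 0].astype(int), 0, w - 1)
    v = np.clip(pixels[:, 1].astype(int), 0, h - 1)
    intensity = camera_image[v, u]

    # One unified model: geometry from LiDAR, appearance from the camera.
    return np.column_stack([points_3d, intensity])

# Tiny synthetic example: 3 LiDAR points, an 8x8 grayscale image, and a
# placeholder projection matrix (real systems use calibrated camera parameters).
points = np.array([[1.0, 2.0, 5.0], [0.5, 1.0, 4.0], [2.0, 0.0, 6.0]])
image = np.random.rand(8, 8)
P = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0]])

fused = fuse_point_cloud_with_camera(points, image, P)
print(fused.shape)  # (3, 4): x, y, z, plus a camera intensity for each point
```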