Sensors get smarter

AutoSens, held last month at the world-famous AutoWorld museum in Brussels, Belgium, brought industry leaders together to assess the most recent developments in the advanced driver assistance systems (ADAS) market.


It’s a market expected to be worth upwards of $67 billion by 2025, driven in no small part by rising levels of innovation, but also by a growing number of initiatives that are accelerating vehicle automation and self-driving cars.



Sensors are becoming increasingly intelligent and capable and, as a result, design engineers are able to add more perception capability and functionality to fewer devices.


However, because self-driving vehicles are likely to be held to a much higher standard of safety, the incremental pace of innovation in the technology needed to support autonomous driving suggests that full autonomy is still a long way off.


The hype around autonomous vehicles is beginning to tail off as engineers and scientists become more realistic about what the development of Level 4 and 5 vehicles will actually mean - significant challenges remain. Claims that we would be seeing fleets of autonomous vehicles, or robo-taxis, on our roads by 2020 have certainly proved wide of the mark.


But, despite that, progress is being made in this space, with ongoing research into sensors, computer vision and safety.


One of the most exciting announcements to come out of last month’s event was made by CEVA, a licensor of wireless connectivity and smart sensing technologies.


The company unveiled the NeuPro-S, a second-generation AI processor architecture that’s been designed for deep neural network inferencing at the edge.


In conjunction with the NeuPro-S, CEVA also introduced the CDNN-Invite API, a deep neural network compiler technology that supports heterogeneous co-processing of NeuPro-S cores together with custom neural network engines within a unified, neural-network-optimising run-time firmware.


“The NeuPro-S, along with CDNN-Invite API, is suitable for vision-based devices with the need for edge AI processing, in particular autonomous cars,” explained Yair Siegel, the company’s Senior Director Customer Marketing and AI Strategy.


“The NeuPro-S looks to process neural networks for segmentation, detection and the classification of objects. We have been able to include system-aware enhancements that are able to deliver significant performance improvements.”


According to Siegel, these improvements include: “Support for multi-level memory systems to reduce costly transfers with external SDRAM, multiple weight compression options and heterogeneous scalability that enables various combinations of CEVA-XM6 vision DSPs, NeuPro-S cores and custom AI engines in a single, unified architecture.”


As a result, the NeuPro-S is able to deliver, on average, 50% higher performance, 40% lower memory bandwidth and 30% lower power consumption than was the case with CEVA’s first-generation AI processor, said Siegel.
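To give a sense of what one of those enhancements, weight compression, buys in practice, the sketch below quantises a toy block of 32-bit weights to 8-bit integers and compares the storage required. This is a generic illustration of the principle, assuming simple symmetric quantisation, and is not CEVA’s actual compression scheme.

```python
# Generic illustration of weight compression via 8-bit quantisation; not CEVA's
# scheme, just a common way to cut weight storage and the bandwidth needed to
# move weights between external memory and the processor.
import numpy as np

weights = np.random.randn(1024, 1024).astype(np.float32)  # toy 1M-parameter layer

# Symmetric linear quantisation to int8: one scale factor for the whole tensor.
scale = np.abs(weights).max() / 127.0
q_weights = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)

print(f"float32 weights: {weights.nbytes / 1e6:.1f} MB")   # ~4.2 MB
print(f"int8 weights:    {q_weights.nbytes / 1e6:.1f} MB") # ~1.0 MB, 4x less to move
```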


Addressing the growing diversity of application-specific neural networks and processors that are now available, the CDNN-Invite API will allow users to incorporate their own neural network engines into the CDNN framework so that it can holistically optimise and enhance networks and layers to take advantage of the performance of CEVA’s XM6 vision DSP, NeuPro-S and custom neural network processors.
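To make the idea of heterogeneous co-processing more concrete, the sketch below shows in plain Python how a compiler-style scheduler might assign each layer of a network to the engine best suited to run it. It is purely illustrative: every class and function name is hypothetical and none of it reflects the actual CDNN-Invite API.

```python
# Purely illustrative sketch of heterogeneous layer dispatch across several
# inference engines. All names are hypothetical; this is not the CDNN-Invite API.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    kind: str  # e.g. "conv", "custom_op", "softmax"

class Engine:
    def __init__(self, name, supported_kinds):
        self.name = name
        self.supported_kinds = set(supported_kinds)

    def supports(self, layer: Layer) -> bool:
        return layer.kind in self.supported_kinds

def schedule(layers, engines):
    """Assign each layer to the first engine that supports it, falling back to
    the last (general-purpose) engine in the list."""
    return [
        (layer.name, next((e for e in engines if e.supports(layer)), engines[-1]).name)
        for layer in layers
    ]

# An NPU-style core for standard CNN layers, a custom accelerator for a
# proprietary operator, and a vision DSP as the general-purpose fallback.
engines = [
    Engine("npu_core", {"conv", "pool", "fc"}),
    Engine("custom_engine", {"custom_op"}),
    Engine("vision_dsp", {"conv", "pool", "fc", "softmax", "custom_op"}),
]
network = [Layer("conv1", "conv"), Layer("my_op", "custom_op"), Layer("out", "softmax")]

for layer_name, engine_name in schedule(network, engines):
    print(f"{layer_name} -> {engine_name}")
```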

According to Siegel, the CDNN-Invite API is already being adopted by customers who are working closely with CEVA engineers to deploy it in commercial products.


Cocoon LiDAR
An interesting use of autonomous vehicle technology is in the development of geo-fenced vehicles, which have a more limited range and set of capabilities.


“With urban populations projected to soar by 2055 and the number of vehicles on our roads expected to double, the stress on infrastructure can only get worse,” said Vincent Racine, Product Line Manager at LeddarTech.


“We’re facing growing congestion, increased emissions and a real hit to our productivity, if we find ourselves stuck on congested roads.

“In response, we’re seeing demand growing for autonomous shuttles that will operate on geo-fenced routes – in fact some research reports suggest that as many as 2 million of these shuttles could be in use by 2025, moving 4-15 people along predetermined pathways running up to 50km.


“Sensors will be an important component in these vehicles, as they will have to navigate through congested areas and take account of pedestrians, cyclists and animals, all of whose movements can be hard to predict.”


To address this, LeddarTech has developed the Leddar Pixell, a cocoon LiDAR designed for these types of geo-fenced autonomous vehicles.

“This 3D solid-state LiDAR cocoon solution has been specifically designed for autonomous vehicles such as shuttles and robot-taxis, as well as commercial and delivery vehicles and looks to provide enhanced detection and robustness,” explained Racine.


“It provides highly dependable detection of obstacles in the vehicle’s surroundings and is suitable for perception platforms that are being developed to ensure the safety and protection of passengers and vulnerable road users.”

The solution has already been adopted by over a dozen leading autonomous vehicle providers in both North America and Europe.

“Crucially, the Pixell is able to compensate for the limitations of mechanical scanning LiDAR used for geo-positioning, which has the tendency to generate blind areas that can reach several metres in some cases. There are no dead zones or blind spots with this solution,” Racine pointed out.


The sensor is able to provide a highly efficient detection solution to cover critical blind spots by using technology embedded in the company’s LCA2 LeddarEngine, which consists of a highly integrated SoC and digital signal processing software.
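As a rough illustration of what a sensor ‘cocoon’ means in practice, the sketch below checks whether a set of sensor modules, each defined by a mounting azimuth and a horizontal field of view, covers the full 360 degrees around a vehicle and reports any remaining blind zone. The mounting angles and fields of view are invented for the example and are not Leddar Pixell specifications.

```python
# Illustrative check of 360-degree "cocoon" coverage from multiple solid-state
# sensor modules; mounting angles and fields of view are hypothetical, not
# Leddar Pixell specifications.

def coverage_gaps(sensors, step_deg=1.0):
    """Return azimuth intervals (degrees) not seen by any sensor.
    Each sensor is (mount_azimuth_deg, horizontal_fov_deg)."""
    gaps, in_gap, gap_start = [], False, 0.0
    angle = 0.0
    while angle < 360.0:
        covered = any(
            abs((angle - az + 180.0) % 360.0 - 180.0) <= fov / 2.0
            for az, fov in sensors
        )
        if not covered and not in_gap:
            in_gap, gap_start = True, angle
        elif covered and in_gap:
            in_gap = False
            gaps.append((gap_start, angle))
        angle += step_deg
    if in_gap:
        gaps.append((gap_start, 360.0))
    return gaps

# Three wide-FoV modules on the front and sides leave an uncovered arc at the rear.
modules = [(0.0, 180.0), (90.0, 120.0), (270.0, 120.0)]
print(coverage_gaps(modules))  # [(151.0, 210.0)] -> a rear blind zone to fill
```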


Situational awareness
While technology can help to provide better situational awareness, whether that means seeing things, perceiving them or linking them to a user’s location, there’s still a lot of development required in this space.


One of the companies looking to address this is Outsight, which has developed a 3D Semantic Camera that it describes as a “revolutionary kind of sensor that brings full situation awareness to smart machines.” According to the company’s President and Co-Founder, Raul Bravo, “It’s a sensor that combines software and hardware which supports remote material identification with comprehensive real-time 3D data processing.


“This technology provides greater accuracy, more efficiently, enabling systems to perceive, understand and ultimately interact with their surroundings in real time,” Bravo explained.


“Mobility is evolving rapidly and our 3D Semantic Camera will be able to bring full situation awareness and new levels of safety and reliability to the man-controlled machines that you see in Level 1-3 ADAS (Advanced Driver Assistance Systems), but it will also help to accelerate the emergence of fully automated smart machines associated with Level 4-5 self-driving cars, robots and drones.


“This technology is the first to provide full situational awareness in a single device and this has been made possible through the development of a low-power, long-range and eye-safe broadband laser that allows for material composition to be identified through active hyperspectral analysis.


“Combined with its 3D SLAM on Chip capability (Simultaneous Localisation and Mapping), this technology can deliver reality in real-time,” claimed Bravo.


The camera provides actionable information and object classification through its on-board SoC, but does not rely on “machine learning”. As a result, power consumption is lower, as is the bandwidth required.


“Our approach eliminates the need for massive data sets for training and the guesswork is eliminated by actually ‘measuring’ the objects. Being able to determine the material of an object adds a new level of confidence to determine what the camera is actually seeing,” said Bravo.
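The sketch below illustrates the general idea of classifying objects from directly measured quantities, material plus simple geometry and motion cues, rather than from a trained network. The material labels and thresholds are invented for illustration and do not represent Outsight’s actual pipeline.

```python
# Hypothetical rule-based fusion of a directly measured material class with
# simple geometry and motion cues; labels and thresholds are invented for
# illustration and are not Outsight's actual pipeline.

def classify(material: str, height_m: float, speed_mps: float) -> str:
    """Combine measured material with object size and speed to label an object."""
    if material == "skin_or_fabric" and height_m < 2.2:
        return "pedestrian"
    if material == "metal" and speed_mps > 1.0:
        return "vehicle"
    if material == "vegetation":
        return "static_background"
    return "unknown"

print(classify("skin_or_fabric", 1.7, 1.2))  # pedestrian
print(classify("metal", 1.5, 8.0))           # vehicle
```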


The sensor is not only able to see and measure, but also to comprehend the world around it: it provides the position, size and full velocity of all moving objects in its surroundings, supplying information for path planning and decision making, as well as information about road conditions.
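As a minimal example of how full velocity information can feed path planning, the sketch below estimates when a tracked object comes closest to the ego vehicle from its relative position and velocity; the inputs are hypothetical and not taken from any published Outsight output format.

```python
# Minimal sketch: with full relative velocity reported per object, a planner can
# estimate the time of closest approach directly. Inputs are hypothetical and
# not taken from any published Outsight output format.

def time_to_closest_approach(rel_pos, rel_vel):
    """rel_pos, rel_vel: object position (m) and velocity (m/s) relative to the ego vehicle."""
    px, py = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    if v2 < 1e-9:
        return float("inf")  # effectively stationary relative to us
    t = -(px * vx + py * vy) / v2  # time at which separation is minimised
    return max(t, 0.0)

t = time_to_closest_approach((20.0, -2.0), (-5.0, 0.2))
print(f"closest approach in {t:.1f} s")  # ~4.0 s for these example numbers
```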


These examples demonstrate that the sensor technology needed to support autonomous vehicles is undergoing profound change and, crucially, is helping to reduce the overall cost of deployment as capabilities are enhanced.

