AutoSens, which took place last month at the world-famous AutoWorld Museum in Brussels, Belgium, brought together industry leaders to examine and evaluate the latest developments in the advanced driver assistance systems (ADAS) market.
This market is expected to exceed $67 billion by 2025, driven not only by increased innovation but also by a growing number of initiatives accelerating vehicle automation and self-driving cars.
Sensors are becoming increasingly intelligent, and as their intelligence and performance grow, designers can deliver greater perception and capability with fewer devices.
However, since self-driving vehicles are likely to be held to a much higher standard of safety than human drivers, the scale of innovation still required in the supporting technology suggests that full autonomy will take a long time to achieve.
The hype surrounding autonomous vehicles is gradually subsiding as engineers and scientists become more realistic about what developing Level 4 and 5 vehicles will actually entail: very significant challenges lie ahead. Claims that we would see fleets of autonomous vehicles or robotic taxis on our roads by 2020 have undoubtedly proved wide of the mark.
Nevertheless, progress is being made in sensing, image processing and safety.
One of the most exciting announcements at last month's event came from CEVA, a licensor of wireless connectivity and smart sensing technologies.
The company introduced the NeuPro-S, a second-generation AI processor architecture designed for edge inference of neural networks.
Alongside the NeuPro-S, CEVA also introduced the CDNN Invite API, a deep neural network compiler technology that supports heterogeneous co-processing between NeuPro-S cores and custom neural network engines in a single, optimized firmware.
"The NeuPro-S, along with the CDNN Invite API, is ideal for vision-based devices that require edge AI processing, especially autonomous cars," said Yair Siegel, senior director of customer marketing and AI strategy at the company.
"The NeuPro-S is designed to process neural networks for segmentation, recognition and classification of objects. We've included system-level enhancements that can significantly improve performance."
These enhancements include "multi-level memory support to reduce costly external SDRAM transfers, multiple weight-compression options, and heterogeneous scalability, allowing various combinations of CEVA-XM6 vision DSPs, NeuPro-S cores and custom AI engines in a single, unified architecture."
The result, Siegel said, is that the NeuPro-S delivers on average 50% higher performance, requires 40% less memory bandwidth and consumes 30% less power than CEVA's first-generation AI processor.
With the CDNN Invite API, users can integrate their own neural network engines into the CDNN framework, streamlining support for the growing diversity of application-specific neural networks and processors now available, and balancing networks and layers across CEVA's XM6 vision DSP, NeuPro-S cores and custom neural network processors for optimal performance.
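The article describes the CDNN Invite API only at a high level, but the plugin pattern it suggests, where a registered custom engine takes over the layer types it supports while everything else falls back to a default processor, can be sketched generically. All names below (`Engine`, `Runtime`, `register_engine`, the layer labels) are hypothetical illustrations, not CEVA identifiers.

```python
# Hypothetical sketch of a plugin-style inference runtime: a custom engine
# claims the layer types it supports; remaining layers fall back to a
# default engine. Purely illustrative -- not the CDNN API.

class Engine:
    """Base class: an engine declares which layer types it can run."""
    name = "base"
    supported = set()

    def run(self, layer, tensor):
        raise NotImplementedError

class DefaultEngine(Engine):
    name = "vision-dsp"
    supported = {"conv", "relu", "pool", "softmax"}

    def run(self, layer, tensor):
        return f"{tensor}->{layer}@{self.name}"

class CustomEngine(Engine):
    name = "custom-npu"
    supported = {"conv"}  # e.g. a dedicated convolution accelerator

    def run(self, layer, tensor):
        return f"{tensor}->{layer}@{self.name}"

class Runtime:
    def __init__(self, fallback):
        self.engines = [fallback]

    def register_engine(self, engine):
        # Registered engines take priority over the fallback.
        self.engines.insert(0, engine)

    def dispatch(self, layer):
        for engine in self.engines:
            if layer in engine.supported:
                return engine
        raise ValueError(f"no engine supports layer type {layer!r}")

    def infer(self, layers, tensor):
        for layer in layers:
            tensor = self.dispatch(layer).run(layer, tensor)
        return tensor

rt = Runtime(fallback=DefaultEngine())
rt.register_engine(CustomEngine())
print(rt.infer(["conv", "relu", "softmax"], "img"))
# conv runs on the custom engine; relu and softmax fall back to the DSP
```

In a real compiler toolchain the graph would be partitioned offline rather than dispatched layer by layer at runtime; the sketch only shows the load-balancing idea.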
According to Siegel, the CDNN Invite API is already being adopted by customers, who are working closely with CEVA engineers to deploy it in commercial products.
Cocoon LiDAR
An interesting application of autonomous vehicle technology is in geofenced vehicles, which operate over a limited range with a more restricted set of capabilities.
"Given the projected population growth in cities by 2055 and the expected doubling of vehicles on our roads, the strain on infrastructure is likely to keep worsening," said Vincent Racine, product line manager at LeddarTech.
"We face increasing congestion, increased emissions and a real loss of productivity as we sit on congested roads. As a result, demand for autonomous shuttles operating on geofenced routes is growing. Some research reports estimate that 2 million such shuttles could be in operation by 2025, each carrying 4 to 15 passengers on fixed routes of up to 50 km."
"Sensors will be an important component in these vehicles as they navigate congested areas and must consider pedestrians, cyclists and animals whose movements are difficult to predict."
To address this need, LeddarTech has developed the Leddar Pixell, a cocoon LiDAR for this type of geofenced autonomous vehicle.
"This solid-state 3D LiDAR cocoon solution is designed specifically for autonomous vehicles such as shuttles and robotic taxis, as well as utility and delivery vehicles, and provides improved detection and robustness," said Racine.
"It provides highly reliable detection of obstacles in the vehicle environment and is suitable for sensing platforms designed to ensure the safety and protection of passengers and vulnerable road users."
The solution has already been adopted by over a dozen leading autonomous vehicle suppliers in North America and Europe.
"Crucially, the Pixell compensates for the limitations of mechanical scanning LiDARs used for geopositioning, which tend to create blind zones that can extend several meters in some cases. There are no dead zones or blind spots with this solution," emphasized Racine.
The sensor provides a highly efficient detection solution covering critical blind spots, using technology integrated into the company's LCA2 LeddarEngine, which consists of a highly integrated SoC and digital signal processing software.
Situational awareness
While technology can help create better situational awareness, whether that means seeing objects, perceiving them, or relating them to the user's position, much development is still needed in this area.
One company working on this is Outsight, which has developed a 3D semantic camera described as "a revolutionary type of sensor that brings full situational awareness to intelligent machines". According to Raul Bravo, president and co-founder of the company, "It's a sensor that combines software and hardware, supporting remote material identification with real-time 3D data processing."
"This technology provides greater accuracy and efficiency so that systems can discover, understand, and ultimately interact with their environment in real time," said Bravo.
"Mobility is evolving rapidly, and our 3D semantic camera will give the human-controlled machines you see in Level 1-3 ADAS (Advanced Driver Assistance Systems) complete situational awareness and a new level of safety and reliability. It will also help accelerate the emergence of the fully autonomous intelligent machines of Levels 4-5: self-driving cars, robots and drones.
"This technology is the first to offer complete situational awareness in a single device. This was made possible by the development of a low-power, long-range, eye-safe broadband laser capable of identifying material composition through active hyperspectral analysis.
"Combined with 3D Simultaneous Localization and Mapping (SLAM) on-chip capability, this technology can deliver a model of reality in real time," said Bravo.
The camera provides actionable information and object classification through its integrated SoC without relying on machine learning, which reduces both power consumption and required bandwidth.
"Our approach eliminates the need for massive amounts of training data, and the guesswork is removed by actually measuring objects. Being able to determine an object's material creates a new level of certainty about what the camera actually sees," Bravo said.
The sensor can not only see and measure the world, but also capture the position, size and full velocity of every moving object in its environment, providing information for path planning and decision-making as well as road information.
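A simple example of why a full velocity vector matters for path planning: with an object's position and velocity both expressed in the ego vehicle's frame, the time and distance of closest approach follow directly from minimizing |p + v·t|. The scenario values below are invented for illustration, not Outsight data.

```python
# Sketch: estimating time and distance of closest approach from an object's
# relative position and relative velocity in the ego vehicle's frame.

import math

def closest_approach(rel_pos, rel_vel):
    """Return (time_s, distance_m) of closest approach for an object at
    relative position rel_pos moving with relative velocity rel_vel."""
    px, py = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    if v2 == 0:                                # no relative motion
        return 0.0, math.hypot(px, py)
    t = max(0.0, -(px * vx + py * vy) / v2)    # t that minimises |p + v*t|
    d = math.hypot(px + vx * t, py + vy * t)
    return t, d

# Object 20 m ahead, closing at 10 m/s straight toward the ego vehicle.
t, d = closest_approach((20.0, 0.0), (-10.0, 0.0))
print(round(t, 2), round(d, 2))  # 2.0 0.0 -> collision course in 2 s
```

A planner that only knew the object's position would have to infer velocity across frames; reporting it directly, as the sensor described above does, removes that estimation step.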
These examples show that sensor technology supporting autonomous vehicles is changing fundamentally and, most importantly, helping to reduce overall deployment costs even as capabilities continue to improve.