Renesas Electronics Corporation, a supplier of advanced semiconductor solutions, and StradVision, a provider of deep learning-based vision processing technology for autonomous vehicles, announced the joint development of a deep learning-based object recognition solution for smart cameras used in next-generation advanced driver assistance system (ADAS) applications and in cameras for ADAS Level 2 and above.
To avoid hazards in urban areas, next-generation ADAS implementations require high-precision object recognition capable of detecting so-called vulnerable road users (VRUs) such as pedestrians and cyclists. At the same time, for mass-market mid-tier to entry-level vehicles, these systems must consume very low power. The new solution from Renesas and StradVision achieves both and is designed to accelerate the widespread adoption of ADAS.
StradVision’s deep learning–based object recognition software delivers high performance in recognizing vehicles, pedestrians, and lane markings. This high-precision recognition software has been optimized for the Renesas R-Car automotive system-on-chip (SoC) products R-Car V3H and R-Car V3M, which have an established track record in mass-produced vehicles. These R-Car devices incorporate a dedicated engine for deep learning processing called CNN-IP (Convolutional Neural Network Intellectual Property), enabling them to run StradVision’s SVNet automotive deep learning network at high speed with minimal power consumption. The object recognition solution resulting from this collaboration realizes deep learning–based object recognition while maintaining low power consumption, making it suitable for use in mass-produced vehicles and encouraging broader ADAS adoption.
Key features of the deep learning-based object recognition solution:
(1) Solution supports early evaluation to mass production
StradVision’s SVNet deep learning software is a powerful AI perception solution for the mass production of ADAS systems. It is highly regarded for its recognition precision in low-light environments and its ability to deal with occlusion when objects are partially hidden by other objects. The basic software package for the R-Car V3H performs simultaneous vehicle, pedestrian, and lane recognition, processing image data at a rate of 25 frames per second and enabling swift evaluation and proof-of-concept (POC) development. Building on these capabilities, developers who wish to customize the software by adding signs, road markings, and other objects as recognition targets can draw on StradVision’s support for deep learning-based object recognition, which covers every step from training through embedding the software in mass-produced vehicles.
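The 25 frames-per-second figure implies a fixed per-frame processing budget that any pipeline built on the basic package must fit within. A minimal sketch of that budget and of running the three recognition tasks on one frame (all function and task names here are illustrative, not part of SVNet's actual API):

```python
# Hypothetical sketch: 25 fps leaves a 40 ms budget per frame in which
# vehicle, pedestrian, and lane recognition must all complete.
FRAME_RATE_FPS = 25
FRAME_BUDGET_MS = 1000 / FRAME_RATE_FPS  # 40.0 ms per frame

def process_frame(frame, detectors):
    """Run each recognition task on a single frame.

    'detectors' is a hypothetical mapping of task name -> callable that
    returns a list of detections for that task; real deployments would
    dispatch these to the CNN-IP engine instead.
    """
    return {task: detect(frame) for task, detect in detectors.items()}
```

This is only an illustration of the timing constraint; the actual scheduling on the R-Car V3H is handled by the SVNet runtime.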
(2) R-Car V3H and R-Car V3M SoCs increase reliability for smart camera systems while reducing cost
In addition to the CNN-IP dedicated deep learning module, the Renesas R-Car V3H and R-Car V3M feature the IMP-X5 image recognition engine. Combining complex deep learning-based object recognition with highly verifiable, rule-based image recognition processing allows designers to build a robust system. In addition, the on-chip image signal processor (ISP) is designed to convert sensor signals for image rendering and recognition processing. This makes it possible to configure a system using inexpensive cameras without built-in ISPs, reducing the overall bill-of-materials (BOM) cost.
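The combined approach described above can be sketched as a two-stage pipeline: a deep learning detector proposes objects, and a simple hand-written rule stage (a stand-in for the kind of verifiable processing the IMP-X5 enables) filters each proposal before it is passed on. Every name below is a hypothetical illustration, not Renesas or StradVision API:

```python
def rule_based_check(det):
    """Example hand-written rule: reject boxes with implausible geometry.

    'det' is a dict with pixel corner coordinates x1, y1, x2, y2.
    """
    w = det["x2"] - det["x1"]
    h = det["y2"] - det["y1"]
    # Require positive size and a roughly plausible aspect ratio.
    return w > 0 and h > 0 and 0.2 < w / h < 5.0

def fuse(dnn_detections):
    """Keep only deep learning detections that also pass the rule stage."""
    return [d for d in dnn_detections if rule_based_check(d)]
```

The design benefit is that the rule stage is fully inspectable and testable, which complements the statistical nature of the learned detector.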
Renesas R-Car SoCs featuring the new joint deep learning solution, including software and development support from StradVision, are scheduled to be available to developers by early 2020.