Xilinx and Motovis have announced today that the two companies are collaborating on a solution that pairs the Xilinx Automotive (XA) Zynq system-on-chip (SoC) platform with Motovis’ convolutional neural network (CNN) IP for the automotive market, specifically for vehicle perception and control in forward camera systems. The solution builds on Xilinx’s corporate initiative to provide customers with robust platforms that enhance and speed development.
Forward camera systems are a critical element of advanced driver-assistance systems because they provide the advanced sensing capabilities required for safety-critical functions, including lane-keeping assistance (LKA), automatic emergency braking (AEB), and adaptive cruise control (ACC). The solution, which is available now, supports a range of parameters necessary for the European New Car Assessment Programme (Euro NCAP) 2022 requirements by using convolutional neural networks to achieve a cost-effective combination of low-latency image processing, flexibility, and scalability.
The forward camera solution scales across the 28nm and 16nm XA Zynq SoC families using Motovis’ CNN IP, which uniquely combines optimized hardware and software partitioning with customizable CNN-specific engines that host Motovis’ deep learning networks, resulting in a cost-effective offering at different performance levels and price points. The solution supports image resolutions up to eight megapixels. For the first time, OEMs and Tier-1 suppliers can layer their own feature algorithms on top of Motovis’ perception stack to differentiate and future-proof their designs.