Tier IV, a provider of open-source autonomous driving (AD) technology, has launched Edge.Auto, a new product with capabilities ranging from individual hardware components to fully integrated AD systems. The company presented the technology at CES 2024 with multiple live demonstrations, highlighting its integration with an array of third-party solutions.

Edge.Auto is a reference platform, based on Tier IV’s suite of AD technologies, designed to support the rapid development of AD systems. It combines hardware components, including sensors and computers, with supporting software tools. The hardware operates using open-source sensor drivers and software modules included in Autoware. The suite can be configured in multiple ways, from individual hardware components to fully integrated AD systems, depending on the application.

Additionally, each solution provided by Edge.Auto can be used in combination with Tier IV’s existing products, including Pilot.Auto, an Autoware-based autonomy platform, and Web.Auto, a cloud-native DevOps platform. This enables users to select, verify, and validate the necessary software and hardware units for the development of AD systems, allowing them to build bespoke AD products, solutions, and services more quickly.

Edge.Auto includes automotive cameras with a dynamic range equivalent to 120 dB and resolutions from 2.5 MP to 8.3 MP, including Tier IV’s forthcoming C3 camera, set to launch in 2024. The multiple lens options available for these cameras make them suitable for ADAS and autonomous driving, as well as surveillance and robotics. Adaptors for connecting automotive cameras via USB, and edge perception development kits for recognition applications, are also available.

Designed for developing advanced perception through sensor fusion, Tier IV’s sensor fusion system comes with open-source tools and comprehensive documentation, supporting tasks that typically require specialized expertise, such as sensor calibration and synchronization. Users can also choose sensors and ECUs to suit their specific application needs.
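To illustrate the role calibration plays in sensor fusion, the sketch below projects 3D LiDAR points into a camera image using an extrinsic transform and an intrinsic matrix. This is a generic, minimal example with hypothetical calibration values, not Tier IV's tooling; the function name and matrices are illustrative assumptions.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points (N, 3) into pixel coordinates using an
    extrinsic transform T_cam_lidar (4x4) and camera intrinsics K (3x3).
    Returns (M, 2) pixel coordinates for points in front of the camera."""
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Perspective projection through the intrinsic matrix.
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

# Hypothetical calibration: 1000 px focal length, 1920x1080 image.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # identity extrinsics: LiDAR and camera frames coincide
points = np.array([[0.0, 0.0, 10.0]])  # one point 10 m ahead
print(project_lidar_to_image(points, T, K))  # → [[960. 540.]]
```

An accurate extrinsic transform is exactly what camera-LiDAR calibration estimates; errors in it shift every projected point in the image.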

A GMSL2 to 10 Gigabit Ethernet conversion module leverages ADI’s GMSL2 (Gigabit Multimedia Serial Link generation 2) technology and can acquire perception data from up to eight cameras, converting these streams to 10 Gigabit Ethernet. Features such as synchronization with devices like LiDAR via the Precision Time Protocol (PTP), image data timestamping, and individual shutter timing controlled through the timing unit in the Field-Programmable Gate Array (FPGA) make the module suitable for many perception technologies critical to applications including industrial robotics. Set for broad market release in 2024, the module is positioned to extend system value across a wide range of autonomous end applications that require sophisticated perception processing.
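One reason hardware timestamping on a shared PTP clock matters is that downstream fusion must pair each camera frame with the LiDAR scan closest to it in time. A minimal sketch, under the assumption of sorted nanosecond timestamps from a common clock (the function name and tolerance are illustrative, not part of Tier IV's API):

```python
import bisect

def match_nearest_timestamp(camera_ts, lidar_ts, tolerance_ns=5_000_000):
    """Pair each camera frame timestamp with the nearest LiDAR scan
    timestamp, discarding pairs whose offset exceeds the tolerance
    (5 ms here). Both inputs are sorted lists of nanosecond times."""
    pairs = []
    for t_cam in camera_ts:
        i = bisect.bisect_left(lidar_ts, t_cam)
        # Candidates: the scans just before and just after the frame time.
        candidates = [lidar_ts[j] for j in (i - 1, i) if 0 <= j < len(lidar_ts)]
        t_lidar = min(candidates, key=lambda t: abs(t - t_cam))
        if abs(t_lidar - t_cam) <= tolerance_ns:
            pairs.append((t_cam, t_lidar))
    return pairs

# Synthetic example: a 30 fps camera against a 10 Hz LiDAR.
cam = [int(33.3e6 * n) for n in range(9)]
lidar = [int(100e6 * n) for n in range(3)]
print(match_nearest_timestamp(cam, lidar))
# → [(0, 0), (99900000, 100000000), (199800000, 200000000)]
```

Without synchronized timestamps, the offsets between streams are unknown and such matching becomes unreliable, which is why PTP synchronization and FPGA-controlled shutter timing are emphasized for fusion workloads.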

AI Pilot helps users develop their own Level 4 AD products, solutions, and services, offering a setup similar to the one Tier IV used in its own AD system, which secured Level 4 certification in Japan in October 2023.