JIDU concept robocar ROBO-01

JIDU, an EV company backed by Baidu and Geely, has unveiled its first concept production robocar, the ROBO-01. The robocar combines intelligent driving capabilities powered by Baidu with an intelligent cabin configuration, and through it JIDU hopes to usher in an era of intelligent cars powered by AI. The robocar was unveiled at JIDU's first ROBODAY, held in the XiRang metaverse, where a digital human car owner drove and interacted with it.

Highlights pointed out by the OEM include an active deformable structure design, a 3D borderless one-piece screen, zero-gravity seating, and dual high-performance computing chips. The ROBO-01's capabilities are based on a unique trainable functionality and JIDU's JET (JIDU Evolving Technology), both of which support a high-level autonomous driving solution with full redundancy, an SOA-based intelligent cabin, and a millisecond-level offline intelligent voice assistant.

Inside, the borderless, ultra-clear screen spans from the driver's seat to the front passenger seat to provide an immersive audiovisual experience. In addition, physical controls such as the door handles, shift lever, and left and right indicator stalks have been removed to further evolve and streamline the core HMI experience. The zero-gravity seats are lightweight and breathable, wrap around the occupant, and feature a swan-neck headrest with an adaptive adjustment function to ensure user comfort.

Further systems in the intelligent cabin include an offline voice assistant with millisecond-level response, a 3D human-machine co-driving map, and full-scene interaction inside and outside the car. The cockpit is enabled by Qualcomm's 4th Generation Snapdragon Automotive Cockpit Platform. Based on the firm's 8295 chip, it drives the 3D presentation on the central display, enabling efficient in-vehicle operations spanning navigation, entertainment, online office work, and more.

Inside and outside the ROBO-01, the cabin's millisecond-level intelligent voice response provides 100% scenario coverage, while the offline voice function operates independently of any online network signal. Its multi-modal interaction capabilities, such as visual perception, voice recognition, and lip capture, add convenience and bolster communication between the user and the car.
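JIDU has not published how this offline-first voice pipeline is built, but the general pattern behind the claim is straightforward: transcribe locally first so the response never waits on the network, and only use a cloud result if it arrives in time. The Python sketch below illustrates that pattern under those assumptions; recognize_on_device and recognize_in_cloud are hypothetical placeholders, not JIDU or Baidu APIs.

```python
# Minimal sketch of an offline-first voice pipeline with an optional cloud
# refinement step. All names here are hypothetical placeholders.
import time
from typing import Optional


def recognize_on_device(audio: bytes) -> str:
    """Placeholder for an embedded (offline) speech recognizer."""
    return "<on-device transcript>"


def recognize_in_cloud(audio: bytes, timeout_s: float = 0.2) -> Optional[str]:
    """Placeholder for a network recognizer; returns None if unreachable or slow."""
    return None  # pretend the network is unavailable


def handle_utterance(audio: bytes) -> str:
    # Always produce a local result first, so the response stays fast and
    # works without any network signal.
    start = time.perf_counter()
    transcript = recognize_on_device(audio)

    # Optionally replace it with a cloud result if one arrives in time.
    cloud = recognize_in_cloud(audio)
    if cloud is not None:
        transcript = cloud

    latency_ms = (time.perf_counter() - start) * 1000
    return f"{transcript} ({latency_ms:.1f} ms)"


if __name__ == "__main__":
    print(handle_utterance(b"\x00" * 320))
```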

A focal point of the ROBO-01 is its enhanced AI perception and expanded active service capabilities. To that end, it is equipped with a set of actively adjustable structures, including a collapsible LiDAR in the front hood, the ROBOWing active rear wing, a foldable U-shaped steering wheel, liftable satellite speakers, and an adaptive zero-gravity seat.

The collapsible LiDAR enhances the safety of the EV's intelligent driving system while strengthening the sensing capabilities of its advanced autonomous driving functions. The sensor can collapse before a crash occurs, with the AI detecting the impending incident.

The U-shaped steering wheel helps users access more of the information shown on the vehicle's screen. Likewise, JIDU's steer-by-wire technology allows the U-shaped steering wheel to be folded and hidden as needed, and enables a variable steering ratio when the vehicle is in autonomous driving mode.

Through biometric technologies, the ROBO-01 can recognize the user's emotions and interact with the outside world. Its robotized front-end design integrates interactive AI pixel headlights and a high-recognition-rate AI voice interaction system, which enables voice recognition outside the car for natural communication between humans, the vehicle, and the surrounding environment.

In working with Baidu, JIDU confirmed that Baidu Apollo's autonomous driving capabilities have been extensively applied to Baidu's own Robotaxi. In testing, 27 million kilometers (16.8 million miles) were driven autonomously, while a large Robotaxi road-test fleet conducted real-road tests in more than 30 cities across China to help the system operate in urban areas.

Further autonomous support comes from dual Nvidia Orin X chips and 31 external sensors. The sensor suite includes two LiDARs, five millimeter-wave radars, 12 ultrasonic radars, and 12 cameras. Based on JIDU's cabin-driving fusion technology architecture, dual systems provide a true redundancy solution for autonomous driving. The solution has been successfully tested and run on the JIDU SIMUCar, a software-integrated simulation vehicle, and the company says these tests verified the safety and stability of its autonomous driving system for mass production.
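The article does not describe how the dual compute systems are arbitrated, but a common pattern for this kind of redundancy is to prefer the primary compute path and fall back to the backup when the primary's output is missing, unhealthy, or stale. The sketch below illustrates that generic pattern only; PlanOutput and arbitrate are hypothetical names, not JIDU's actual software.

```python
# Minimal sketch of a dual-computer redundancy arbiter (illustrative only).
from dataclasses import dataclass
from typing import Optional


@dataclass
class PlanOutput:
    steering_deg: float
    speed_mps: float
    healthy: bool        # self-reported health of the compute path
    timestamp_s: float


def arbitrate(primary: Optional[PlanOutput],
              backup: Optional[PlanOutput],
              now_s: float,
              max_age_s: float = 0.1) -> Optional[PlanOutput]:
    """Prefer the primary path; fall back to the backup if the primary is
    missing, unhealthy, or stale. Return None to request a safe stop."""
    def usable(p: Optional[PlanOutput]) -> bool:
        return p is not None and p.healthy and (now_s - p.timestamp_s) <= max_age_s

    if usable(primary):
        return primary
    if usable(backup):
        return backup
    return None  # neither path is trustworthy: trigger a minimum-risk maneuver


if __name__ == "__main__":
    stale = PlanOutput(steering_deg=1.0, speed_mps=20.0, healthy=True, timestamp_s=0.0)
    fresh = PlanOutput(steering_deg=1.2, speed_mps=19.5, healthy=True, timestamp_s=9.95)
    print(arbitrate(primary=stale, backup=fresh, now_s=10.0))  # falls back to the backup
```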

Capable of point-to-point advanced autonomous driving, the system is also able to adapt to three main driving scenarios: high-speed roads, urban roads, and parking. It has been tested and verified to handle unprotected left turns, traffic-light recognition, obstacle avoidance, and freeway on/off-ramps. Users will be able to access these and other autonomous functions as soon as they start the vehicle.
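How the system switches between those three scenarios is not described, but a point-to-point trip is typically split into segments, each handled by the matching driving domain. The sketch below is a purely illustrative domain selector under that assumption; the names are hypothetical and this is not JIDU's implementation.

```python
# Illustrative routing of a trip segment to one of the three driving domains.
from enum import Enum, auto


class Domain(Enum):
    HIGHWAY = auto()
    URBAN = auto()
    PARKING = auto()


def select_domain(road_class: str, near_destination: bool) -> Domain:
    """Pick the driving domain for the current road segment."""
    if near_destination:
        return Domain.PARKING   # hand over to the parking stack
    if road_class in ("motorway", "expressway"):
        return Domain.HIGHWAY   # high-speed pilot with on/off-ramp handling
    return Domain.URBAN         # city stack: traffic lights, unprotected turns


if __name__ == "__main__":
    print(select_domain("motorway", near_destination=False))  # Domain.HIGHWAY
```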

JIDU plans to launch a limited version of its first production model in Fall 2022, which will closely resemble the ROBO-01. A production window and release date for the ROBO-01 itself have not yet been confirmed.