Wayve has launched LINGO-1, a vision-language-action model designed to enhance the learning and ‘explainability’ of its proprietary AI Driver technology.
A key motivator behind the development of LINGO-1 is to help Wayve gain deeper insight into the decision-making and reasoning of its AI models. These insights will not only help the company build safe driving intelligence for self-driving vehicles but also open up new capabilities that support the interpretability of the AI Driver itself.
Trained on real-world data from Wayve’s drivers, who narrate their experience as they drive, LINGO-1 can explain the reasoning behind its driving actions. The company says this language-based feature offers a new type of data that can be used to interpret, explain, and train new AI models.
In addition to commentary, LINGO-1 can respond to questions about a diverse range of driving scenarios – allowing Wayve, in turn, to use the feedback to improve the model. Here, answers to questions such as “Why did you slow down?” help the company evaluate the model’s scene comprehension and reasoning, making it easier to identify areas for improvement.
Using LINGO-1, Wayve also plans to incorporate further natural-language sources, such as the Highway Code and other safety-relevant content, to make retraining its AI models more efficient. By improving the raw intelligence of its AI Driver, the company expects to accelerate the technology’s learning process, enhance its accuracy, and increase its capacity to handle diverse driving tasks.