
Here’s what the next generation of car head-up display and AI looks like

Nuance shows off eye-tracking artificial intelligence at CES technology show


The next generation of vehicle head-up displays and voice assistants will see interfaces beamed onto the windshield, advanced artificial intelligence, and eye-tracking technology.

That is the prediction of Nuance, an artificial intelligence company which used the CES technology show in Las Vegas this week to present breakthrough driver-facing technologies.


This includes an Alexa-like assistant that can be spoken to in order to control various aspects of the car - not just setting the navigation and adjusting the music, but also operating the doors and windows, and even providing information on whatever you drive past.

With face- and eye-tracking hardware mounted behind the steering wheel, the Nuance system knows what you are looking at when you speak. For example, if you glance at the passenger-side window and say "Open that window a bit", the system will do exactly that.
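Nuance has not published how its system works, but the behavior described above amounts to fusing a spoken command with the driver's current gaze target. A minimal, purely illustrative sketch of that idea (all names here are assumptions, not Nuance's API) might look like:

```python
# Hypothetical sketch of gaze-plus-voice fusion, matching the demo's
# behavior. The target names and keyword matching are illustrative only.

def resolve_command(utterance: str, gaze_target: str) -> str:
    """Map a spoken command plus the current gaze target to an action."""
    text = utterance.lower()
    # Deictic words like "that" or "this" are resolved against whatever
    # the driver is looking at when they speak.
    if "window" in text and ("that" in text or "this" in text):
        if "open" in text:
            return f"open:{gaze_target}"
        if "close" in text:
            return f"close:{gaze_target}"
    return "no-op"

print(resolve_command("Open that window a bit", "passenger_window"))
# -> open:passenger_window
```

In a real system the gaze target would come from the eye-tracking hardware and the intent from a speech-understanding model, but the fusion step is conceptually this simple lookup.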


There is no need to use a hot word - like 'Alexa' or 'Hey Google' - as the system listens out for the driver's voice and reacts whenever they say something related to operating the vehicle.

The system also understands follow-up commands. For example, after the window has been partially opened, the driver can say "more" and it will open further. Say "Now close it" and the same window will close.
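Follow-up commands like these imply the assistant keeps a short dialogue context: the last target it acted on and the last action it took. A hedged sketch of that state-keeping, with entirely hypothetical names, could be:

```python
# Illustrative sketch of follow-up command handling: the assistant
# remembers the last target and action so "more" and "Now close it"
# resolve without re-stating the window. Names are hypothetical.

class DialogueContext:
    def __init__(self):
        self.last_target = None   # e.g. "passenger_window"
        self.last_action = None   # e.g. "open"

    def handle(self, utterance: str) -> str:
        text = utterance.lower().strip()
        if "window" in text:
            # In the demo the target comes from gaze; hard-coded here.
            self.last_target = "passenger_window"
            self.last_action = "open" if "open" in text else "close"
        elif text == "more" and self.last_target:
            # Repeat the previous action on the remembered target.
            return f"{self.last_action}:{self.last_target}:more"
        elif "close" in text and self.last_target:
            self.last_action = "close"
        return f"{self.last_action}:{self.last_target}"

ctx = DialogueContext()
ctx.handle("Open that window a bit")   # -> "open:passenger_window"
ctx.handle("more")                     # -> "open:passenger_window:more"
ctx.handle("Now close it")             # -> "close:passenger_window"
```

The point of the sketch is only that anaphora like "it" and "more" are cheap once the assistant tracks its most recent action and target.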

During a demonstration for GearBrain on the CES show floor, Nuance showed how its vision for a new head-up display includes an interface beamed across the windshield. Semi-transparent and located in the center of the windshield - off to the right of the driver's straight-ahead view - the interface is interacted with by glancing and speaking.

The interface is seen projected onto the windshield (GearBrain)

After asking the system to call a contact, it will show both of that person's phone numbers on the windshield. You can then glance at the one you want to call, say "that one" and the call will be made. Although we weren't able to try the system for ourselves, an interface designer at Nuance who demonstrated it said only a small movement of the eyes is required to tell the computer which option you want to select.

With regard to the voice assistant, you can ask it to do several things at once. For example, we were shown how saying: "Drive to the Empire State Building, send my ETA to Jack, and call Rebecca" worked just fine. The assistant checked which one of Jack's numbers to use, then did the same with Rebecca, with it responding quickly when the user said "that one".

You can also ask the assistant about buildings and landmarks you drive past. Asking "what's that building?" causes the system to quickly backtrack to work out what you were looking at just before you began speaking.
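This "backtracking" suggests the system keeps a short, time-stamped history of gaze targets and looks up what the driver was fixating just before speech began. A minimal sketch of that idea, assuming a simple ring buffer (all structures here are illustrative, not Nuance's implementation):

```python
# Sketch of the "backtrack" behavior: record recent gaze targets with
# timestamps, then look up the target fixated just before speech onset.

from collections import deque


class GazeHistory:
    def __init__(self, max_samples: int = 300):  # e.g. ~10 s at 30 Hz
        self.samples = deque(maxlen=max_samples)

    def record(self, timestamp: float, target: str) -> None:
        self.samples.append((timestamp, target))

    def target_before(self, speech_onset: float):
        """Return the most recent gaze target recorded before speech began."""
        for timestamp, target in reversed(self.samples):
            if timestamp <= speech_onset:
                return target
        return None


history = GazeHistory()
history.record(10.0, "road_ahead")
history.record(11.5, "empire_state_building")
history.record(12.4, "road_ahead")   # eyes back on the road while speaking
print(history.target_before(12.0))   # -> empire_state_building
```

Because the eyes have usually returned to the road by the time the question finishes, searching backwards from the moment speech started is what lets "what's that building?" land on the right landmark.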

Eye- and head-tracking system can be seen behind the steering wheel (GearBrain)

Apart from a couple of hiccups caused by the noise of Nuance's booth confusing the assistant, we were very impressed with what we saw. Not needing a hot word to activate the assistant is a nice touch, and - although we wonder about its legality - the head-up display system felt like a natural evolution of current technologies.

Nuance says technologies like these could be available from automakers in the next two to five years, depending on how quickly its customers want to bring the system to market.

