Smart speakers can be frustrating. They'll wake up when they hear their name uttered in casual conversation, or even when it's said on TV. Worse, they sometimes don't wake up at all, particularly in a noisy environment. If a smart speaker is playing music and you're trying to get it to stop, shouting is sometimes your only recourse. Forget asking it to then connect to another smart device to run a routine. As with all version 1.0s, however, smart speakers are about to get smarter. And Qualcomm believes its new chipset will play a significant role.
Starting this fall, and continuing through 2020, Qualcomm is rolling out its new QCS400 smart speaker chip series, which combines AI, more powerful processing and DSP in an attempt to push voice command capabilities closer to Star Trek's conversational human-to-machine voice interaction.
Qualcomm's new quad-core neural processing engine, and others likely to follow, is designed to address our biggest smart speaker voice-command complaints, and even to add some capabilities for which we've been longing.
What does this mean for our smart home?
Respond when called
Smart speakers will get smarter, responding correctly to their wake word (Google)
You can expect to see — and hear — smarter speakers that are less likely to respond to sound-alike wake words. Your speaker won't react unless it's spoken to directly.
No more shouting
Getting a smart speaker's attention from across the room won't be as difficult
Speakers will respond faster to your wake word, even in noisier environments or from longer distances. If you're playing music loudly, or are watching TV, your voice assistant will be able to hear a request even through the sound.
Devices like Amazon Echo and Google Assistant will be less likely to wake up if they hear their name on TV
Smart devices will better differentiate between your real-life voice and someone saying the wake word on TV. As smart speakers, and their names, become more common, this will be crucial so that devices don't react when characters interact with them on a show.
Having a conversation with a smart speaker will be easier, and likely chattier
Voice assistants will provide more conversational control. For example, just saying "[Wake word], it's cold in here" would prompt a "Would you like me to turn up the thermostat?" response.
Multiple wake words
Smart devices won't be pinned to just one wake word, but will start to answer to different ones (GearBrain)
AI systems will start to react to multiple wake words instead of forcing you to choose just one. Just as the names we call family and friends sometimes change, so too will we be able to address our smart devices by different names.
Talking to smart devices
We'll be able to talk to and operate smart devices directly, even without an internet connection (iStock)
Our smart homes will no longer need an internet connection for voice control of local devices; instead, devices will be able to communicate directly, without routing every command through the cloud.
When speakers are better at hearing their specific wake word, they'll likely be more efficient at using power (Amazon)
The new chips are also more power efficient, isolating the power-hungry task of continually listening for a wake word and thereby lengthening the life of battery-powered smart speakers.
Qualcomm adds that its chips are designed to "deliver evolved premium audio experiences throughout the home" for streaming audio, including Dolby Atmos and aptX, with lower latency (the time between asking the speaker to play a song and it actually playing your requested tunes). And, for maximum wireless device compatibility, the new chips will include built-in Wi-Fi, Bluetooth 5.1 and Zigbee radios.
We don't need cloud control
Perhaps most critically, Qualcomm's new chip series (there are actually four versions, the 403, 404, 405 and 407, each addressing different capabilities and devices) could free you entirely from dependence on the smart speaker and the need for an internet connection, and, therefore, remove the smart speaker itself as a middleman for local device control.
Right now, in most cases, if you want to use voice control to run your lights, thermostat, window shades, robot vacuum or locks, you need a smart speaker in the room where these devices reside. And when you utter a command, there's that pregnant pause of uncertainty as your smart speaker first transmits your voice command to the cloud.
From there, your request is then sent back to your device, a pregnant pause long enough that a fleeting "Did it hear me?" doubt flashes through your brain. That then often leads to a redundant and conflicting second command.
In an ideally run smart home, all your smart speakers need to be able to talk to each other, which means choosing from among the Amazon Alexa, Google Assistant, Apple HomeKit, Samsung Bixby, or other siloed ecosystems. Slowly but surely, these voice control requirements and limitations are likely to disappear, thanks to improvements like those Qualcomm promises with its new chipsets.
This next phase will eliminate the game of telephone smart speakers play to convey your commands to the appropriate devices, enabling direct speaker-to-device command transfer via your local Wi-Fi network, no internet connection needed.
Not only will local command eliminate that pregnant pause of doubt, but your smart products will continue to respond to voice commands even if your internet goes down. Better yet, one application of the QCS400 series eliminates the need for the smart speaker as controlling middleman entirely.
While there are a handful of devices, such as C by GE's SOL lighting products and the ecobee4 Smart Thermostat, that have Alexa built-in, with Qualcomm's new chip, a smart device can be individually voice-controlled without a smart speaker as command conduit, without fealty to a particular voice-control ecosystem, and, most importantly, without a connection to the internet.
This means that individual smart products can — and likely will — be built with their own individual voice command capabilities — no smart speaker or internet connection needed. While these improvements won't necessarily rise to "Star Trek"-like conversational voice command functionality, they'll get us closer to science fiction's anthropomorphic machine-human communication ideal.