Using assistants like Siri and Alexa is 'a return to the dark ages of 1970s computing,' study finds
GearBrain


'The majority of Alexa users did not know what skills were,' user interface survey found


Interacting with AI butlers like Siri, Alexa and the Google Assistant is "like a return to the dark ages of the 1970s," a detailed study into their user interfaces has concluded.

Not only would today's virtual assistants — like Siri, Alexa and the Google Assistant — fail to pass the Turing test, but several aspects of their user interface remain 'bad' or even 'terrible,' according to experts.


The study likened using smart speakers to interacting with early computers, saying: "The need to memorize cryptic commands, oppressive modes, confusing content, inflexible interactions - basically [they offer] an unpleasant user experience."

This not-too-surprising summary comes from an exhaustive study carried out by the Nielsen Norman Group, which specializes in user experience and how it can be improved.

To study the effectiveness of smart assistant user interfaces, Nielsen Norman recruited 17 participants from New York and San Francisco, all of whom were experienced with how Siri, Alexa and others work. The researchers then asked the participants to carry out a number of tests with the assistants, before interviewing them on their expectations and the quality of the assistants' responses.

Six user interface characteristics were analyzed: how well the user's voice was understood, the assistant's ability to understand natural language, how well it spoke back, its level of intelligence, its ability to perform supplementary actions which the user did not specifically ask for but which the AI thinks could be helpful, and how well it integrated with other devices and services.

These characteristics were titled:

  • Voice input
  • Natural language
  • Voice output
  • Intelligent interpretation
  • Agency
  • Integration

The assistants were deemed to have offered "good" voice input skills, apart from when used by non-native speakers with strong accents, who "had to repeat themselves often."

But for the other categories, there were problems. Here is a summary of how the assistants performed, using the study's own words:

  • Natural language - Bad - Can become much better, but hard to do
  • Voice output - Bad - Inherently limited usability, except for simple information
  • Intelligent interpretation - Bad - Can become much better, but extremely difficult to do
  • Agency - Bad - Can become much better
  • Integration - Terrible - Can become much better, but requires much grunt work

Participants in the study complained about having to speak to the assistants "like a robot," thinking carefully about how to phrase each command or question, rather than speaking as they would to another person. Some complained about assistants rushing to answer while they searched for the right word mid-utterance, even going so far as to tell Alexa it was being rude for talking over them.


More complaints were raised about assistants speaking too quickly, or repeating words unnecessarily in their answers to simple questions. One participant branded the Google Assistant "too chatty" when it provided extra information after being asked about a pharmacy's opening times.

Alexa's 'skill' system also received criticism, with users complaining about having to remember the exact name of a skill in order to launch it. Tellingly, the report claims: "The majority of the Alexa users did not know what skills were; some had encountered them before, installed one or two, and then completely forgotten about their existence."

Although the report gives assistants a comprehensive panning, it accepts that the future remains bright, especially if you consider today's smart assistants to be at a similar stage to mobile phones in the year 2000. "The promise [of truly intelligent assistants] does remain, and people already get some use out of their intelligent assistant," the report observed.

"But vast advances are required for this interaction style to support wider use with a high level of usability," it continued. "An analogy is to the way mobile devices developed: when we tested mobile usability in 2000, the results were abysmal. Yet, the promise of mobile information services was clear and many people already made heavy use of a particularly useful low-end service: person-to-person text messages."

Tough problems to solve

However, anyone expecting smart assistants to improve more quickly than mobile phones did between 2000 and now could be left disappointed. The report asks: "Will it take two decades to reach good AI usability? Some of the problems that need solving are so tough that this may even be an optimistic assessment."

Ultimately, the study concludes, a major problem with smart assistants is in the design of their user interface, which is often used without a display. "Although users project human-like qualities onto them, they have relatively low expectations for these assistants and reserve them for black-and-white, factual questions," the conclusion states. "Even though the main hurdle is probably better natural language and dialogue processing, many smaller scale issues could be fixed with more thoughtful design."

Check out The GearBrain, our smart home compatibility checker, to see other compatible products that work with Google Home, Home Mini and Amazon Alexa-enabled devices.
