
The forgotten killer metric of Assistant voice interfaces

For the last 8 years, we have gone rogue with data analysis to identify insights that the tech and social media platforms often don’t want you to focus on. The research series is called Datafication (www.datafication.com.au), and this year we have done a deep dive into the good, the bad and the ugly of voice interfaces.

For Datafication 2019 we asked Australian Smart Speaker users how many times they have to repeat themselves to their Assistants. We have called this metric ‘The Repeat Rate’, and it is a simple way to monitor the accuracy and training success of Natural Language Processing (NLP). As you can see from the graphic below, ‘The Repeat Rate’ is too high: on average, the consumers we surveyed had to say their command 2.1 times before being understood.
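For readers who want to apply the same metric to their own data, a minimal sketch of the calculation looks like the following. The survey numbers here are illustrative, not the actual Datafication 2019 responses; the only assumption is that each respondent reports how many times they had to say a command before the Assistant understood it.

```python
# Hypothetical survey responses: how many times each respondent had to say
# a command before the Assistant understood it (1 = understood first time).
# Illustrative values only, not the actual Datafication 2019 data.
attempts = [1, 3, 2, 1, 4, 2, 1, 3, 2, 2]

# 'The Repeat Rate': average number of times a command is spoken.
repeat_rate = sum(attempts) / len(attempts)

# Non-repeat rate: share of respondents understood on the first attempt.
non_repeat_pct = 100 * sum(1 for a in attempts if a == 1) / len(attempts)

print(f"Average attempts per command: {repeat_rate:.1f}")
print(f"Understood first time: {non_repeat_pct:.0f}%")
```

With these illustrative numbers the average works out to 2.1 attempts per command, matching the figure we report, with 30% of respondents understood first time.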

[Graphic: datafication2019_website_postContent_images002]

Interestingly, ‘The Repeat Rate’ varied dramatically based on the Assistant used. Google Assistant delivered the best result, with 35% of users never having to repeat a command, followed closely by Amazon Alexa at 34%. Siri fared much worse, with 49% of users requiring at least one repetition. The poorest performer in the data was Samsung Bixby, with 30% of people having to repeat a command two or more times.

[Graphic: datafication2019_website_postContent_images001]

‘The Repeat Rate’ is critical because it is a leading indicator of the tipping point for mainstream use of voice. When we reach that tipping point (in theory, when Assistants accurately understand the intent of the majority of commands first time), certain changes in the market will be triggered. Use cases for voice can then evolve beyond today’s simplistic ones (“Hey Google, what’s the weather?”) toward a world where the majority of search engine activity is voice based, where ecommerce using voice and screen explodes, and where IoT smart devices in the home are routinely controlled by voice.

We hope to publish ‘The Repeat Rate’ regularly to measure how well NLP in Assistants is being trained and improved, so we can set benchmarks to track the progress of voice interfaces in our lives.