Download: MP3 (41MB)
In the lead-up to Christmas, commuters around London are being bombarded with advertisements for voice-controlled home assistants – in particular the Amazon Echo. Today, we talk about the business model behind the device, our concerns about the pace of its sales, and the questions that the Echo raises about who benefits in the future of machine learning.
The Amazon Echo bears many similarities to its main competitor, the Google Home. Both respond to a ‘wake’ word – which for the Echo is set by default to ‘Alexa’, the name of a female-voiced assistant that acts as a search engine, can purchase items for you, add events to your calendar, and send commands to other ‘smart’ or connected devices.
After a couple of years as a popular item in the US, the Echo (and the newer, smaller Dot) arrived on the UK market last year, and Amazon is now pushing it with season-specific advertising. With the voice-controlled speaker recently made available for purchase in over 80 new countries around the world, including India, the number of Echos in homes is set to increase.
Amazon has built an open platform for developers: anyone can create a ‘skill’ for Alexa to be distributed via Amazon. As a result, Alexa now has over 15,000 ‘skills’, which range from turning on your central heating to finding the ‘perfect Christmas playlist’. And unlike in Apple’s ecosystem, where the software platform is inextricably tied to Apple’s own hardware, third-party hardware makers can harness the power of Amazon’s platform. As just a vessel for Amazon’s software, the hardware itself loses apparent value, and we wonder how many of these speakers will soon be discarded in favour of a new model.
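To give a flavour of what a ‘skill’ amounts to in practice: it is essentially a small web service that receives a JSON request from Alexa and returns a JSON reply containing the speech to say back. Below is a minimal, hedged sketch of such a handler, assuming the request/response shapes published in Amazon’s Alexa Skills Kit documentation; the intent name `TurnOnHeatingIntent` and the spoken phrases are hypothetical examples, not real Amazon identifiers.

```python
# Minimal sketch of an Alexa 'skill' request handler.
# Assumes the JSON request/response format documented in the
# Alexa Skills Kit; the intent name below is hypothetical.

def handle_request(event):
    """Route an incoming Alexa request to a spoken response."""
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        # User opened the skill without asking for anything specific
        speech = "Welcome. Try asking me to turn on the heating."
    elif request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "TurnOnHeatingIntent":  # hypothetical intent
            speech = "Turning on the central heating."
        else:
            speech = "Sorry, I don't know how to do that."
    else:
        speech = "Goodbye."
    # Reply in the plain-text speech format Alexa expects
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Example: simulate a user saying "turn on the heating"
event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "TurnOnHeatingIntent"}}}
print(handle_request(event)["response"]["outputSpeech"]["text"])
```

The point worth noticing is how little of this the developer controls: the voice capture, speech recognition, and intent matching all happen on Amazon’s servers before the request ever reaches the skill.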
Like the Amazon Kindle, Amazon’s voice-controlled assistants are suspiciously cheap. This rings alarm bells for several reasons, and not just because low prices tend to discourage repair. We’ve learned that, as a rule of thumb on the internet, when you’re not paying (much) for the product, you are the product.
Consumers of voice-controlled assistants are paying, in part, with their voices. For voice recognition to work across different intonations and accents, manufacturers need a huge database of voices and must employ expensive, time-consuming ‘deep learning’ techniques. But with Echos distributed all over the world, Amazon doesn’t have to go out and collect voices – Alexa does it for them.
Another, more troubling implication of this business model is that Amazon – and Google, whose assistant lives both on mobile devices and in the home – are collecting vast amounts of data from consumers. While you can configure these devices to some degree, to be effective they are always listening for their wake word.
What will this data be used for? Personal assistants grow ‘smarter’ as they collect information about their users – but it’s actually the companies that own the platforms that have the most to learn. While we may feel in control when we give commands such as ‘Alexa, add mince pies to my shopping list’, are we in fact relinquishing control with every tidbit of data on our interests, habits and personal lives that is sent up into the cloud? And beyond personal data, what will be the consequence of a handful of large companies holding so much information and using AI to learn about people’s behaviour, predicting their wants and desires?
We discuss some open source alternatives – including Mycroft and Mozilla’s Common Voice – with greater transparency and data protection. We’ll be closely following this topic as it develops: voice-controlled assistants are undoubtedly useful for many things, but we need to make sure that we are active participants in shaping the kind of future that we want to live in.
- BBC: Amazon Echos activated by TV comment
- Venturebeat: Echo released in more than 80 countries worldwide
- Restart Podcast Ep 18: Gendered gadgets
- Wired: Voice is the next big platform, unless you have an accent
- Mozilla: Project Common Voice
- Mycroft: Open source artificial intelligence for everyone