Until the release of Amazon’s Echo, aka Alexa, the big players had paid little attention to voice technologies. Since then, numerous other variants have appeared, but which are the best known, and which voice interface is the most suitable?

Today’s voice interfaces combine two components: transcription and natural language processing (NLP). A spoken sentence is first transcribed into text. This text is then analysed using artificial intelligence, a reaction is generated based on the analysis, and the reaction is converted back into audible speech via speech synthesis (see also part 1).
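In simplified terms, this pipeline could be sketched as follows (a minimal illustration in Python; the function names are placeholders and do not refer to any specific product):

def transcribe(audio: bytes) -> str:
    # Speech-to-text: turn the recorded utterance into plain text.
    ...

def analyse(text: str) -> dict:
    # NLP: derive the meaning or intent of the transcribed sentence.
    ...

def generate_response(intent: dict) -> str:
    # Business logic: decide what the interface should answer.
    ...

def synthesise(answer: str) -> bytes:
    # Text-to-speech: convert the textual answer back into audible speech.
    ...

def handle_utterance(audio: bytes) -> bytes:
    text = transcribe(audio)            # spoken sentence -> text
    intent = analyse(text)              # text -> meaning / intent
    answer = generate_response(intent)  # intent -> textual reaction
    return synthesise(answer)           # reaction -> speech output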

Different classifications

Conversational interfaces are differentiated by whether they use so-called knowledge domains or not. Knowledge domains are digital structures that map knowledge around a given subject area.

1) Conversational interfaces with knowledge domains 

Conversational interfaces with knowledge domains are not just about parsing phrases, but about understanding the actual meaning behind a sentence. Interfaces of this type are called smart assistants. Consider a sentence that is simple for us humans: “Reserve two seats at a two-star restaurant in Hamburg!” We understand it effortlessly because we know that restaurants can be awarded stars, that Hamburg is a city and that seats in a restaurant can be reserved. Without this prior knowledge, however, it is difficult to make sense of the sentence: ‘Two Stars’ could just as well be the name of a specific restaurant, it would be completely unclear what two seats are or how to reserve them, and it would not be apparent that a restaurant with certain characteristics is to be searched for in Hamburg. Smart assistants are expected to understand precisely these concepts and therefore require basic knowledge of the relevant domains, such as gastronomy, events, weather and travel.
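To make the idea of a knowledge domain a little more concrete, here is a deliberately tiny, purely illustrative sketch in Python; a real smart assistant of course holds far richer, interlinked structures:

GASTRONOMY_DOMAIN = {
    "restaurant": {
        "attributes": ["stars", "city", "seats"],  # restaurants are rated, located and bookable
        "actions": ["reserve"],                    # "reserve two seats" maps to this action
    },
    "cities": ["Hamburg", "Berlin", "Munich"],     # Hamburg is known to be a city
    "rating": {"unit": "stars", "range": (1, 3)},  # "two-star" is a rating, not a name
}

def interpret(utterance: str) -> dict:
    # Rough illustration: with the domain above, "two-star" can be resolved
    # to a rating and "Hamburg" to a city instead of to arbitrary names.
    meaning = {"action": None, "entity": "restaurant", "filters": {}}
    lowered = utterance.lower()
    if "reserve" in lowered:
        meaning["action"] = "reserve"
    for city in GASTRONOMY_DOMAIN["cities"]:
        if city.lower() in lowered:
            meaning["filters"]["city"] = city
    if "two-star" in lowered:
        meaning["filters"]["stars"] = 2
    return meaning

print(interpret("Reserve two seats at a two-star restaurant in Hamburg!"))
# {'action': 'reserve', 'entity': 'restaurant', 'filters': {'city': 'Hamburg', 'stars': 2}}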

2) Conversational interfaces without knowledge domains

Conversational interfaces without domain knowledge, such as Alexa, do not have this ability and use a different approach instead. For each possible dialogue, sentence structures are specified during implementation, and variable parts within them, so-called slots, are defined. A spoken sentence is then analysed and assigned to one of these sentence structures. The component that generates the response is subsequently informed of which sentence structure was recognised and which values the variable parts contained. The following sentence makes clear that no basic knowledge is required for this: ‘I would like to buy a red shirt’. The system does not need to know anything about clothes or colours, because it simply compares the phrase with the given phrases related to buying a shirt. For this purpose, the interface’s dialogue model defines a sentence structure with an ID called, for example, ‘shirt purchase’. It is then specified that this sentence structure may take the following forms: “I want to buy a <colour> shirt”, “I want to buy a shirt in the colour <colour>” and “I want to buy a shirt in <colour>”. This also defines a variable part (slot) named ‘colour’, for which the permitted values are listed, e.g. ‘red’, ‘green’ and ‘yellow’. If the user utters the above sentence, the analysis shows that it matches the ‘shirt purchase’ sentence structure with the value ‘red’ for the slot ‘colour’. Passed on in a correspondingly structured form, this information is already something a back-end system can work with.
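A minimal sketch of such a dialogue model in Python might look as follows; the ID and slot values come from the example above, and real platforms such as Alexa define this in their own configuration formats and match far more flexibly than the exact patterns used here:

import re

DIALOGUE_MODEL = {
    "shirt purchase": {
        "templates": [
            "i want to buy a <colour> shirt",
            "i want to buy a shirt in the colour <colour>",
            "i want to buy a shirt in <colour>",
        ],
        "slots": {"colour": ["red", "green", "yellow"]},
    }
}

def match(utterance: str):
    # Compare the utterance with every template and return the recognised
    # sentence-structure ID together with the values of the variable parts.
    text = utterance.lower().strip(" .!?")
    for structure_id, model in DIALOGUE_MODEL.items():
        for template in model["templates"]:
            pattern = template
            for slot, values in model["slots"].items():
                # Turn "<colour>" into a named group restricted to the allowed values.
                pattern = pattern.replace(f"<{slot}>", f"(?P<{slot}>{'|'.join(values)})")
            hit = re.fullmatch(pattern, text)
            if hit:
                return structure_id, hit.groupdict()
    return None, {}

print(match("I want to buy a red shirt"))
# ('shirt purchase', {'colour': 'red'})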

The current key players

Until the release of Amazon’s Echo, aka Alexa, most IT companies had paid little attention to voice technologies. Although Siri was launched with a bang, it was perceived more as a helpful tool than as a whole new class of interface. However, the advantages of hands-free operation for mobile devices were not to be dismissed, and today every big player is developing its own voice solution. Here is a brief introduction to the current key players:

Amazon’s Alexa

If you look at the Amazon product range, it becomes clear that Alexa is a logical evolution of existing technologies. The Fire Tablets (launched 2013), the Fire Phone (2014) and the first Fire TVs (2014) were already equipped with voice control. However, Alexa’s ‘Voice Interface as a Service’ or ‘Alexa Voice Service’ technology is still not considered a Smart Assistant. Instead of analysing the meaning of sentences, it simply compares them against predefined patterns in the background. When asked more complex questions, Alexa quickly reaches its limits. The reason for this is that it only uses superficial knowledge domains that are not open to developers. In addition, requests to an Echo must be concise and not overly complex in their formulation. For example, films can be searched for using the name of an actor, or restaurants by indicating the area, but it does not get much more complex than this.

Google Assistant

Google Now was originally part of Google Search and could only be used within the search function. Later it was spun off and its domain knowledge expanded to make it more competitive with assistants like Apple’s Siri or Samsung’s S Voice. Last year, Google Now was replaced by Google Assistant. The extent to which the various knowledge domains in the Google Assistant are interlinked was impressively demonstrated at Google’s developer conference with ‘Google Duplex’. As a component of the assistant, Google Duplex can make phone calls to real people and, for example, book a hairdresser’s appointment or reserve a table. In doing so, the assistant not only accesses the appointment calendar, but must also have appropriate domain knowledge.

Apple’s Siri

The story of Siri is somewhat different. The Smart Assistant was developed by the company Siri Inc., which from the outset took the approach of analysing language by means of domain knowledge. Siri Inc. is a spin-off of the Stanford Research Institute (SRI). Fifteen years ago, SRI collaborated with other research institutions on the CALO (Cognitive Assistant that Learns and Organizes) project, and the experience gained there influenced the development of Siri. Siri was released in the App Store in 2010, and Siri Inc. was promptly bought by Apple. A year later, Apple officially announced that Siri would become an integral part of iOS. It has since been rolled out across all of Apple’s platforms. Most recently, the HomePod was released as a smart speaker that reflects the current trend in voice interfaces and is comparable to Amazon’s competing product, the Echo.

Microsoft’s Cortana

Microsoft’s Cortana was first presented to the public at Microsoft’s Build developer conference in 2014. Also designed as a Smart Assistant, Cortana features some interesting adaptations of real-world behaviour. A real assistant, for example, usually takes notes about their boss or client in order to get to know the person better and remember their habits. Cortana does this with a virtual notebook: when used for the first time, it asks for a few preferences so that it can provide personalised answers early on, and this functionality can also be called up again later as needed. The key element behind Cortana is Bing; Bing-based services allow informal queries to be put to the search engine.

Samsung’s Viv

Samsung has also been trying for quite some time to establish intelligent software on its devices, which naturally must also include a voice interface. In 2016, Samsung bought Viv Labs, the company founded by Siri’s original developers. Viv Labs’ system relies fully on domain knowledge. Unlike its competitors, however, Viv allows external developers to extend its knowledge base with new domains. As a result, the system should become more intelligent and able to understand more and more. Imagine, for example, a whiskey distillery. With the help of experts, Viv is provided with knowledge about the domain of whiskey and its products. In addition, the distillery shares all of its knowledge concerning wooden barrels and their production. Viv’s domain knowledge now contains valuable expertise on how wooden barrels influence the taste of certain types of alcohol; oak barrels, for example, give whiskey a vanilla flavour. If I now ask Viv what causes the vanilla note of a particular whiskey from this distillery, Viv can answer that this taste is most likely due to ageing in oak barrels. Viv has thus merged the two domains.
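The principle can be illustrated with a toy example in Python; the data is invented and has nothing to do with Viv’s actual, non-public knowledge representation:

WHISKEY_DOMAIN = {
    # Knowledge contributed by the distillery: product -> barrel used for ageing
    "Example Single Malt": {"aged_in": "oak barrel"},
}

BARREL_DOMAIN = {
    # Knowledge about barrel making: barrel type -> flavour notes it imparts
    "oak barrel": {"imparts": ["vanilla"]},
}

def explain_flavour(product: str, note: str) -> str:
    # Chain facts across both domains to answer a cross-domain question.
    barrel = WHISKEY_DOMAIN[product]["aged_in"]
    if note in BARREL_DOMAIN[barrel]["imparts"]:
        return f"The {note} note of {product} most likely comes from ageing in an {barrel}."
    return f"No explanation found for the {note} note of {product}."

print(explain_flavour("Example Single Malt", "vanilla"))
# The vanilla note of Example Single Malt most likely comes from ageing in an oak barrel.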

IBM’s Watson

To clear up a common misunderstanding, IBM Watson should also be mentioned here. There is no single ‘artificial intelligence Watson’ that understands everything and continuously accumulates knowledge. Instead, Watson is a collection of various artificial intelligence tools brought together under a common concept, which can be used to realise a wide variety of projects. There are also projects that serve to build up a large knowledge base, but one should not labour under the illusion that every Watson project provides access to this knowledge. If you want to implement a project with Watson, you need to provide your own data, just as with any other machine learning toolkit. Among other things, Watson provides tools for transcription (the IBM Speech to Text service) and text analysis (the Natural Language Understanding service). When implementing voice interfaces with Watson, you build on these two services.
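As a rough illustration, the two services could be chained with IBM’s Python SDK (the ibm_watson package) along these lines; the keys, URLs and version string are placeholders, and parameter details may differ between SDK versions and service plans:

from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import SpeechToTextV1, NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, EntitiesOptions, KeywordsOptions

stt = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_STT_API_KEY"))
stt.set_service_url("YOUR_STT_SERVICE_URL")

nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",  # placeholder version date
    authenticator=IAMAuthenticator("YOUR_NLU_API_KEY"),
)
nlu.set_service_url("YOUR_NLU_SERVICE_URL")

# 1) Transcription: spoken audio -> text
with open("utterance.wav", "rb") as audio:
    stt_result = stt.recognize(audio=audio, content_type="audio/wav").get_result()
transcript = stt_result["results"][0]["alternatives"][0]["transcript"]

# 2) Text analysis: extract entities and keywords from the transcript
nlu_result = nlu.analyze(
    text=transcript,
    features=Features(entities=EntitiesOptions(), keywords=KeywordsOptions()),
).get_result()

print(transcript)
print(nlu_result["keywords"])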

From analysing the problem to finding the right voice interface

Of course, there are many additional solutions, some of them highly specialised, which also aim to break through the restrictions imposed by the big players and offer more development opportunities. The question naturally arises: why all these different voice interfaces? As with many complex problems, there is no single universal solution. There is no ‘good’ or ‘bad’ interface, only ‘right’ or ‘wrong’ applications for the different technologies. Alexa cannot handle complex sentence structures, but it is great for quick implementations and is already widely used. Viv, on the other hand, has not been able to establish itself yet, but it has the potential to understand arbitrary and complex sentences.

Selecting the right voice interface therefore means weighing criteria such as the application, the focus, the problem definition, the needs of the target group and how open an interface is to integration into your own projects.

This is the second part of a four-part series on voice interfaces.
