New technologies, devices, and content formats are challenging search engine experts around the world at an ever-increasing rate – but now help is coming from an unexpected source. What is this little helper, and what role will it play in strategic development and day-to-day business? Find out more in our March edition of SEO News.

Fraggles are back

The fraggles are loose, and they’re slowly taking over the Google world of tomorrow. Just to clear up any confusion right from the start: we’re not talking about the radish-loving, 65-cm tall (source: Wikipedia), cave-dwelling humanoids from the 80s TV show of the same name.

Like their namesakes, the fraggles our search engine optimisers have been working with for some time now are small and dynamic. Here, though, the word refers to content fragments identified by Google that can be scattered, either isolated or in innumerable combinations, throughout the ever-growing landscape of platforms, devices, and technologies.

Cindy Krum, a mobile marketing expert based in Denver, Colorado, was the first to use the term "fraggles" in this context. She says the word is intended as a portmanteau of "fragment" and "handle", and describes fraggles as Google's response to dramatic changes in user behaviour and to the technological framework of websites, Progressive Web Apps (PWAs), personalised web services, and data feeds.

What these digital assets have in common is that much of their content lives under a single URL, rather than being spread across many individually addressable pages. One driving force behind this trend is the process of adapting to the needs of the mobile age; another is the development of browser-based technologies like JavaScript and AJAX, which are capable of generating individualised content dynamically.

Google, Bing, etc. adjusting their indexing

As a result, Krum says, the fixed allocation of content to URLs is being supplanted; search engines are increasingly indexing mere fragments of content from individual websites, feeds, or apps. Rather than indexing entire websites page by page, she explains, Google, Bing and friends now face the challenge of fishing the most relevant content fragments from a massive ocean of conventional HTML, dynamic JavaScript, and endless streams of XML. Krum believes that Google's Mobile First Index, which has been live for over a year, is simply a huge dragnet for fraggles of all types.

Indeed, looking at how the major providers' search results have developed over the past two years, the fragment theory makes sense. Both Google and Microsoft are continuously experimenting with new presentation modes and formats for content, especially on mobile devices: everything from integrated map and rating displays in local search results, to comprehensive reports on people, places, and brands in Knowledge Graph panels, to concrete answers to frequently asked questions in Featured Snippets.

Search engines: the universal assistants of the future

Moreover, search engines are adapting their results more and more precisely to users' search intentions and usage contexts. This development is sure to continue through the dawning age of voice assistants. A computer-generated Google Assistant phoning the hairdresser's on your behalf is just the first of many coming high points in terms of search engines establishing themselves as ever-present, universal information and assistance systems.

Relevance and consumability are inextricably linked in systems like these. Whether on the phone, watching television, or driving, modern users have neither the desire nor the ability to look through a page of search hits for the answer they need—much less scroll through a website. The real advantage of the fraggle concept lies in the immediacy and flexibility of small fragments of information, delivered to countless combinations of usage situations and device preferences.

Fraggles highlight Google’s growing emphasis on the customer journey

Fraggles also fit seamlessly into Google's new strategic alignment of search results with user journeys. To mark the 20th anniversary of its search engine, Google announced its intention to stop viewing search activity as a series of individual queries and instead use context and history information to pinpoint the user's exact intentions and position within the customer journey. Combined with artificial intelligence, this means that search is now meant to be seen not as a results-based service, but as a conversation. Fragments can be incorporated into this scenario as well, whether as product information, concrete purchase offers, or specific queries during the post-purchase phase.

What does this all mean for SEOs? First and foremost, it means they will need to continue developing their own approaches to voice and visual search queries. Markups for structured data and voice responses (Google Speakable) need to be part of their standard repertoire, as do keyword analyses organised by intentions along the customer journey.

At last, summer is here. But artificial intelligence doesn’t take summer off, so it can be the ideal babysitter in the car, especially when stuck in a traffic jam. That is, as long as the language assistant actually has something to say. That’s what our SEO News for the month of August is all about. And of course, we can’t avoid the notorious silly-season monster.

1) Speaking notes for Google Home

Dialogue with machines is still a hot topic. Last month, we reported on the Google Assistant's automated voice calls. Now, Mountain View is substantially simplifying the world of voice assistants, which is ideal for content publishers trying to get started in this area. "Speakable" is Google's first semantic markup that identifies text passages for voice output. The company states that the markup was developed through the industry initiative "Schema.org" and is still in the beta phase.

With "Speakable", news publishers and other content providers can mark up short, speech-optimised sections within an article or webpage so they can be read out directly by the Google Assistant. Google advises that the text should be a maximum of two to three sentences long, similar to a teaser, giving the assistant a talk time of 20 to 30 seconds. For optimal use, the content should present the topic informatively and in short sentences, and Google also suggests using headlines. Content selection must ensure that technical information, such as captions, dates or source references, does not interfere with the user experience.

In the age of artificial intelligence, the optimised use of markups is becoming increasingly important for search engine optimisers, especially as the number of delivery platforms keeps growing. Standardising supplemental information in the source text enables all systems involved in selecting and displaying search results to reliably collect and optimally process the data. The "Speakable" feature will initially only be available for the English language in the US market. However, Google has stated that it plans to launch in other markets, under the condition that "a sufficient number of publishers implement Speakable". So the SEO industry will certainly have its work cut out.
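To make this concrete, here is a minimal sketch of what such an annotation could look like, generated with a short Python script. The schema.org vocabulary ("speakable", "SpeakableSpecification", "cssSelector") is the real naming; the URL, headline, and CSS selectors are hypothetical placeholders for a publisher's own page.

```python
import json

# A minimal sketch of a schema.org "speakable" annotation, built as a
# Python dict and printed as JSON-LD. The URL and CSS selectors below
# are hypothetical placeholders, not a real publisher's markup.
speakable_markup = {
    "@context": "https://schema.org/",
    "@type": "WebPage",
    "name": "Example article headline",
    "speakable": {
        "@type": "SpeakableSpecification",
        # Point the assistant at the headline and a short teaser section,
        # keeping the spoken text to roughly two or three sentences.
        "cssSelector": ["#headline", "#voice-teaser"],
    },
    "url": "https://www.example.com/article",
}

print(json.dumps(speakable_markup, indent=2))
```

The resulting JSON-LD block would then be embedded in the page's head, with the selectors pointing at the teaser-length passages described above.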

2) More opportunities for Amazon Alexa

When it comes to the future of digital search, the focus is slowly shifting from analysing requests and intentions to reflecting on answers and output systems. The key challenge for successful human-machine communication, alternating between interactive displays, augmented reality and voice assistants, will be to provide the best possible result for each channel. Is there one answer, or several, or does the initial question lead into a conversation between the search system and the searcher? In principle, the process is the same with a virtual assistant as it would be with a human advisor: Do you want a quick result or a full sales pitch? Do you want to be left alone to browse quietly, or do you need the help of a sales assistant? Is a brief answer enough, or do you want to break down your query into more specific stages until you get the right result?

The American company "Yext" has now introduced a collaboration with Amazon that enables local companies to feed NAP data (name, address and telephone number) and opening hours directly to the voice assistant Alexa. The New York-based company told journalists that it plans to integrate its interface with Amazon Alexa further in the future. Product data and catalogues may be included at a later stage, but this has yet to be decided.

The automation of data exchange between the owners of digital offers and search systems is already a key component of success in modern digital retail. The goal is to create an optimal user experience at the point of output, as well as valid measures of success. Providing and optimising data feeds is key to the functioning of Google's PLAs (Product Listing Ads) and the use of price search engines and affiliate networks. In the world of Amazon, the necessary interfaces and tools are only gradually being created. And when it comes to profiting from the growth of digital voice assistants, that is exactly where the greatest potential currently lies.
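As a rough illustration of the kind of record such a feed exchanges, here is a small Python sketch. The field names are invented for readability and are not Yext's or Amazon's actual schema.

```python
# An illustrative local-business record of the kind such a feed might carry.
# Field names are hypothetical, not Yext's or Amazon's actual data model.
nap_record = {
    "name": "Example Bakery",
    "address": {
        "street": "1 Sample Street",
        "city": "Springfield",
        "postal_code": "12345",
        "country": "US",
    },
    "phone": "+1-555-0100",
    # Opening hours: the data point Alexa can now answer directly.
    "opening_hours": {
        "mon-fri": "07:00-18:00",
        "sat": "08:00-14:00",
        "sun": "closed",
    },
}

def is_open(record: dict, day_key: str) -> bool:
    """Toy check an assistant might run before answering 'Is it open today?'."""
    return record["opening_hours"].get(day_key, "closed") != "closed"

print(is_open(nap_record, "sat"))  # True
```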

3) An SEO Rabbit goes on a SERP Rampage

Do you still remember Lotti the snapping turtle, Yvonne the elusive cow, or Sammy the caiman? Fortunately, there are search engines that give us the opportunity to relive the stories of these adventurous silly-season animals, and even years later, we are still captivated by them during the summer-holiday slump.

In the latest "animal" news sensation, it was a virtual rabbit that brought the world's largest search engine to its knees. The story was published under the headline "Rabbit Bug" by the Spanish programming collective "La SEOMafia". According to their information, a table was inserted into the source code of a website in order to deliberately manipulate Google's SERPs. The bug exploited the fact that Google cannot interpret this formatting when rendering the results page, so the display of search results broke off abruptly after the manipulated entry. The trick was implemented on a top-ranking site for the keyword "conejos" (Spanish for rabbits), with the result that only a single search hit, the manipulated one, was displayed. It is easy to imagine the click rates that could be achieved this way.

It's always a pleasure to see some creative spirits shake things up in the now mature and grown-up world of the SEO industry. Eventually, even Google's SEO liaison officer John Mueller became aware of the Rabbit Bug and reported on Twitter, with a wink, that he had circulated the sighting of the rabbit in-house. The rabbit now faces the fate of all silly-season animals: in the end, most were captured or killed.

Summer is finally here and the days are long, which gives us plenty of time to think about the fundamental questions of life. That's why the July issue of SEO News examines not just the forthcoming Google updates, but also a game show for machine cognition and the future pecking order on our planet.

1) Achieve good rankings quickly and securely

Once again, Google is focusing on the convenience and security of Internet users. The company (which in its own words aims to do no evil) is launching not one but two updates in July, whose effects will benefit Internet users and website operators alike. Both changes were announced long ago and have already been partially implemented.

The first change makes the loading speed of mobile websites an official ranking factor. Loading speed has long been listed as a quality criterion in Google's top 10 basic rules for website quality; however, it has taken a very long time to become a genuine ranking factor. The change was originally motivated by studies showing that slow-loading websites suffered direct impacts on their clickthrough and conversion rates, and the speed argument was repeated like a mantra by Google representatives at various search conferences during the 2018 season. With the subsequent introduction of the Mobile First Index (see our report here), the rule has now been made official for mobile sites too. Google recommends that website operators analyse their domains using Google's own "PageSpeed Insights" and "Lighthouse" tools and make the necessary changes for mobile websites.

Alongside its speed update, Google is also getting serious in July about its announcement that websites not converted to the encrypted HTTPS protocol by the deadline will be marked as "not secure" in Chrome. This change marks the end point of a campaign launched back in 2016, when Google began its awareness-raising work with a small ranking boost for secure websites. Google has described that measure as a success, stating that around 68 per cent of all Chrome traffic on Android and Windows now occurs over HTTPS – and there is plenty of scope for that percentage to grow.

The fact that Google is leveraging its market power to implement technical standards that improve the user experience is a step in the right direction. Many companies were only prepared to invest in faster technology or security certificates when threatened with reductions in traffic or sales. To prepare for future developments, it is advisable to keep an eye on new technologies such as AMP (Accelerated Mobile Pages), mobile checkout processes, and pre-rendering frameworks that allow content to be pre-loaded. These innovations can help you keep pace, especially when it comes to improving users' perception of loading times on all platforms.
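Ahead of the HTTPS deadline, one quick sanity check is to confirm that the plain-HTTP version of a domain redirects to HTTPS; sites that fail this check will carry Chrome's "not secure" label. A minimal sketch using the Python requests library, with a placeholder domain:

```python
import requests

def check_https_redirect(domain: str) -> bool:
    """Follow redirects from the plain-HTTP URL and report whether the
    final destination is served over HTTPS."""
    response = requests.get(f"http://{domain}", timeout=10, allow_redirects=True)
    return response.url.startswith("https://")

# Placeholder domain; swap in your own.
print(check_https_redirect("example.com"))
```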

2) Life is one big game show

This bit will be tricky for those of you who didn't pay attention in maths. Remember that moment back at school, somewhere between integral calculus and stochastic processes, when you belatedly realised that you'd completely lost the plot? Well, in the age of algorithms that will come back to haunt you – especially if you work in online marketing. In everyday terms, an algorithm is nothing more than a carefully ordered chain of decisions designed to solve a problem in a structured way. The crucial innovation of recent years is the advent of artificial intelligence and machine learning: nowadays, the individual links in the algorithmic chain are no longer assembled by people, but by programs. When you ask a search engine a question, the query is taken in, its core information object (the entity) and intention are identified by means of semantic analysis, and the most empirically appropriate result (the ranking) is returned in the correct context (e.g. local and mobile).

However, a group of seven Google engineers presented a research project at the ICLR conference in Vancouver that turns this question/answer principle on its head. For their project, the researchers used tasks taken from the popular US game show "Jeopardy". On this show (first aired in 1964), contestants are required to provide the right questions in response to complex answers. The Google engineers exploited the fact that Jeopardy tasks involve information deficits and uncertainties that can only be resolved by formulating the right question. In other words, the question needs to be adapted until the information provided in the answer makes sense in its specific combination and context. The human brain performs this task in a matter of seconds, drawing on a comprehensive range of intellectual and social resources as it does so. But put a Jeopardy clue (such as "Like the Bible, this Islamic scripture was banned in the Soviet Union between 1926 and 1956") to a search engine and you will not receive an appropriate response. Google returns a Wikipedia article about the Soviet Union, meaning that it interprets the search term as an entity or core information object, and thus falls short. Microsoft's search engine Bing comes a little closer to the answer that is obvious from a human perspective ("What is the Koran?"), but is likewise unable to deliver a satisfactory result.

This little trick with Jeopardy clues exposes the biggest problem facing search engines, even though its solution is marketed as one of the main markers of quality for modern search systems: accurately recognising the intention behind each search query. What SEO professionals in companies and agencies currently work hard to puzzle out is precisely what the search engines themselves want to automate reliably. To achieve this, the Google researchers developed a machine-learning system that reformulates the original query into many different versions before passing them on to the core algorithm. In a second step, the answers obtained are aggregated and reconciled with the initial question; only once these two intermediate steps are complete is a result presented to the user. The self-learning algorithm then receives feedback on whether its answer was right or wrong, and the system was trained using this method with the help of a large data set.

As a result of this training, the system learned how to independently generate complex questions in response to familiar answers – a milestone that goes far beyond merely understanding search queries, which are themselves growing increasingly complex under the influence of voice and visual search. Although this study was carried out by Google, we can assume that Microsoft, Yandex and Baidu are also working on equivalent technologies designed to further automate the interpretation of search queries and, in the not-too-distant future, to automatically generate complex, personalised content. At present, however, it is impossible to gauge what effects this might have on the diversity and transparency of the Internet.
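To illustrate the two intermediate steps described above, here is a deliberately simplified Python sketch: a stand-in reformulator produces several variants of the query, a stub answer function plays the role of the core algorithm, and the candidate answers are aggregated by majority vote. In the actual research system both components are learned models trained with feedback; everything below is toy logic with hypothetical functions, for illustration only.

```python
from collections import Counter

def reformulate(query: str) -> list[str]:
    """Toy stand-in for the learned reformulation model. In the real
    system, a trained sequence model generates many paraphrases."""
    return [
        query,
        query.replace("Islamic scripture", "holy book of Islam"),
        "book banned in the Soviet Union 1926 1956 Islam",
    ]

def answer(query: str) -> str:
    """Toy stand-in for the core question-answering backend."""
    knowledge = {"islam": "the Koran", "bible": "the Bible"}
    for key, value in knowledge.items():
        if key in query.lower():
            return value
    return "the Soviet Union"  # the unhelpful entity-match fallback

def aggregated_answer(question: str) -> str:
    # Step 1: rewrite the question into several candidate formulations.
    variants = reformulate(question)
    # Step 2: collect the backend's answers and keep the most frequent one.
    votes = Counter(answer(v) for v in variants)
    return votes.most_common(1)[0][0]

print(aggregated_answer(
    "Like the Bible, this Islamic scripture was banned "
    "in the Soviet Union between 1926 and 1956"
))  # -> "the Koran"
```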

3) Google Assistant sets the tone

While we're on the subject of automatic content generation, we also have an update on Google's uncanny demonstration of two phone calls placed by the Google Assistant to human workers. Back in May, the search engine giant from Mountain View presented a video at its "I/O" developer conference in which an AI extension to the Google Assistant named "Duplex" booked an appointment at a hairdresser's and a table at a restaurant entirely on its own, all while perfectly imitating human speech. The human participants in those conversations were apparently unable to recognise that they were interacting with a machine as they went about their work. Close collaboration with robots and AI systems has long been familiar to industrial workers in the Western world, but now this development is also moving into the service economy, and therefore into our day-to-day lives.

At first glance, the Google scenario was astonishing and convincing; however, the unnerving initial impression was swiftly followed by a number of pressing questions. In particular, the fact that Duplex failed to identify itself as a machine to its human conversation partners was the subject of considerable debate. Google has since responded and published a new video in which the Google Assistant identifies itself at the start of the conversation and states that the call will be recorded for quality-control purposes – similar to a recorded message in a call centre.

Taking a more detached view, however, one wonders whether this candour on the part of the artificial intelligence is not in fact superfluous. The restaurant employee in the video follows the Google Assistant's instructions obediently, as if he were talking to a human being – there is no difference whatsoever. In search marketing, we attempt to further our own interests by reflecting the intentions of target groups and consumers in the content produced by search engines (the results pages). In voice search, we issue commands to a machine – and a number of years will pass before we learn how that will change us. In Google's future scenario of an invisible, omnipresent and convenient system that lets users organise themselves and solve problems, the human becomes both the subject and the object of the technology. Our data was used to create, feed and train the system, so we may briefly feel ourselves to be its masters; given the current state of affairs, however, we can and should seriously question whether we will recognise the point of no return once the balance finally tips.

If you think this June issue of SEO News will only be about the impact of Google's Mobile First Index, think again – we prefer to wait a bit on that one. As summer begins, we are focusing instead on the return of a powerful tool, the prerequisites for good SEO work, and an industry in the throes of fake news.

1) The return of Google image search

The image bubble has burst. After a long legal dispute with the image agency Getty Images, Google decided to make some changes to its popular image search. How positively these changes have affected website operators can be seen in a survey by the US search expert Anthony Mueller. But let's start from the beginning.

In January 2013, Google changed the way its image search worked so that every user could directly view and download the images it found. A key aspect of this was that the files were cached on the search engine's servers, where users could access them via the "View Image" button. As a consequence, clicks through to the sites of content providers and rights holders nearly vanished, and organic traffic from image searches plummeted by more than 70 per cent in some cases. This development was especially perilous for websites that rely on visual impact and inspiration, such as fashion or furniture retailers, and had put a lot of effort into optimising their image content. For e-commerce operators in particular, this collapse in traffic also meant a collapse in turnover.

Three years later, the renowned Getty Images agency submitted a competition complaint to the European Commission, apparently hoping that "Old Europe" would once again set things right. Getty's efforts were rewarded: the "View Image" button disappeared from Google image search in early 2018, and interested users now have to visit the original sites to access the original files. That prompted Mueller, the well-connected search expert, to ask some 60 large enterprises worldwide whether, nearly six months after the change, they had seen any impact on their website traffic. The result: on average, visits from Google image search have risen by 37 per cent. Although the figures for impressions and ranking positions in image search have remained relatively stable, click-throughs have risen dramatically across all of the surveyed enterprises. The survey also indicates that conversions from image searches have grown by about 10 per cent.

Of course, savvy users can still switch to other search engines, such as Microsoft's Bing or DuckDuckGo, neither of which ever removed direct access to image files. However, given Google's market power, now is exactly the right time to give new priority to the optimisation of image content and exploit the new growth potential, according to the author. At present, text search is still the dominant method of acquiring information, but there are signs of a paradigm shift towards visual search, particularly in retail.

2) Getting better results with smart SEO goals

Thanks to the Internet, contacts and advertising impact are now more measurable than ever before. Although the digital revolution in advertising is no longer in its infancy, it has by no means reached the end of its evolution. With digital campaigns, it is easy to define suitable key figures for measuring impact and effectiveness, and it is not technically difficult to obtain the corresponding campaign data. Defining goals for search engine optimisation, however, is not so easy: Google, for example, stopped providing keyword-level performance data for organic searches many years ago. Marketing managers and SEO experts are therefore repeatedly confronted with the challenge of developing an SEO KPI concept that visualises optimisation results and, above all, gets the company's budget controller onside for professional SEO work.

For this reason, search guru Rand Fishkin has put together some rules for formulating the goals of SEO activities, which are interesting to advertisers and enterprises alike. According to Fishkin, the main rule is that business goals must form the basis of the SEO concept. The next step is to break down these higher-level, usually financial, expectations into marketing goals – for example, by defining requirements for the various communication channels along the customer journey. Only then do the actual SEO goals come into view, and they can be mapped in the final step using just six metrics. These KPIs are: ranking positions; visitors from organic searches (divided into brand and generic searches); enterprise representation with multiple hits on the results page for a search term; search volume; link quality and quantity; and direct traffic from link referrals.

Fishkin tests his concept against two example customers. An online-only shoe retailer has a fairly simple business goal: boosting turnover by 30 per cent in the core target group. In Fishkin's view, the next step is to specify in the marketing plan that this growth will be generated by a high probability of conversions at the end of the customer journey. From that, you can derive an SEO goal of 70 per cent growth in organic traffic, and then adopt and carry out implementable SEO measures to achieve it. For the contrasting scenario of local SEO without an e-commerce component, Fishkin's example is a theatre that wants to draw more visitors from the surrounding area. In this case, the marketing plan defines the regions in which the target audience should be addressed, and the SEO plan then consists of setting up local landing pages, making use of theatre reviews and blogs, and other content-related, locally driven measures.

The advantage of this top-down approach is that it aligns individual SEO measures, which are often difficult to grasp, with the overall aims of the organisation. According to Fishkin, the rewards are higher esteem for, and faster implementation of, the laborious SEO work.
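The shoe-retailer example can be reduced to back-of-envelope arithmetic. The following Python sketch shows how a 30 per cent turnover goal can translate into a disproportionately larger organic-traffic goal; all input figures are hypothetical and are not taken from Fishkin's article.

```python
# Back-of-envelope translation of a business goal into an SEO traffic goal.
# All numbers are hypothetical and illustrate the top-down logic only.

current_revenue = 1_000_000           # annual revenue, EUR
revenue_goal = current_revenue * 1.30  # business goal: +30% turnover

organic_share = 0.40    # share of revenue attributed to organic search today
conversion_rate = 0.02  # share of visits that end in a purchase
avg_order_value = 80.0  # EUR per order

# Revenue organic search contributes now, and the target if the entire
# increment is to come from organic (the assumption that makes the
# traffic goal disproportionately large).
organic_revenue_now = current_revenue * organic_share
organic_revenue_goal = organic_revenue_now + (revenue_goal - current_revenue)

visits_now = organic_revenue_now / (conversion_rate * avg_order_value)
visits_goal = organic_revenue_goal / (conversion_rate * avg_order_value)

growth = visits_goal / visits_now - 1
print(f"Required organic traffic growth: {growth:.0%}")  # 75% with these inputs
```

With these illustrative inputs, a 30 per cent revenue goal demands roughly 75 per cent more organic traffic, which shows how a figure in the region of Fishkin's 70 per cent can fall out of the arithmetic.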

3) Fake news threatens the existence of the SEO industry

Did you get a shock when you read this heading? That's exactly what we wanted: to get your attention. Of course, you rarely see such highly charged headlines on SEO blogs, but competition in the IT sector does not spare the search industry. Every year we hear that SEO is dead, yet supply of and demand for optimisation services have been growing steadily for more than 15 years. A large part of that is doubtless due to the intensive PR activities of the parties concerned. Having started as the hobby of a few individuals, search engine optimisation has over time developed into specialised agencies and migrated into the in-house teams of enterprises. Along the way there has been continual testing, experimentation and comparison; SEO expertise has been constantly expanded; and above all, a lot has been written about it. SEO blogs therefore serve on the one hand as an inexhaustible source of information – a sort of global treasury of SEO experience that forms the basis for success. On the other hand, posts on search topics are also a form of self-advertising and customer acquisition for service providers and agencies.

John Mueller, the well-known Senior Webmaster Trends Analyst at Google, has now criticised some SEO blogs, claiming that some of them use posts as clickbait. It all started with a report on an alleged bug in an SEO plugin for WordPress. In the course of the discussion about the tool, some SEO sites presented the information in abridged form, and important statements made by John Mueller on behalf of Google were not passed on. Posts, he now says, should pay attention to all aspects of complex search topics. What matters is creating long-term value with balanced reporting; people should resist the temptation to chase quick clicks. According to Mueller, the goal should be to convey knowledge.

It is clear that even the search scene cannot escape the digital attention economy. Speed seems to have become a goal in itself, on the assumption that online readers no longer have time for the details. In this way, our own methods endanger the industry's collective wealth of experience. In an increasingly complex search world, it is particularly important not to lose sight of the details, and we have to take the time for a thorough treatment of each topic. The threat to our democracy from the SEO activities of Russian troll farms, for example, is a topic that still awaits exactly that kind of treatment.

Spring has finally sprung, driving even the most hard-nosed online marketeers outdoors to enjoy the sunshine. It's a time when important trends and developments can easily be missed – which is why we've summarised the most important SEO news for May here. This time we look at the development of the search market, Google's assault on e-commerce, and the possible negative effects of voice assistants on our behaviour.

1) The market for search engines is maturing

It's once again back in fashion to question Google's dominance of the search market. In the wake of the Facebook data protection scandal, many critics of the Google system hope that a slightly larger portion of the online community is beginning to recognise that "free of charge" online doesn't mean "without cost", and that user numbers for the Mountain View search engine will therefore stop growing. Some support for this assumption can be seen in the trend of many users preferring to start their shopping searches directly on Amazon – a competing company. Good reason, then, to ask: is Google losing market share? Where are users actually doing their online searching?

A study by the American analytics firm Jumpshot sheds some light on the matter. SEO veteran Rand Fishkin interpreted its analysis of US clickstream data – i.e. referrer data at server level and anonymised click logs from web applications – from 2015 to 2018, with surprising results. Contrary to the presumed trend, the number of searches on Amazon is in fact growing; however, because the total figure for all searches increased at the same time, Amazon's market share remained consistently around 2.3% over the entire period analysed. A detailed look at the various Google services, such as image search or Google Maps, reveals declining search figures within these special services, due to technological and design changes. However, these searches are simply shifting to the universal Google web search, which means the company from Mountain View has succeeded in integrating a range of services into its central search results page on mobile devices and desktops. Google's market share accordingly increased by 1.5 percentage points between 2015 and 2018 to around 90%, leaving the competition looking miles behind. As with Amazon, the search shares of YouTube, Pinterest, Facebook and Twitter are almost unchanged, and Microsoft's Bing and Yahoo have not increased their market share despite a rise in searches.

Fishkin's conclusion is appropriately pragmatic: by 2018 the search engine industry had matured to the point where a handful of strong players were able to establish themselves on the market, but Google's dominance will not be at risk for some years, as all of its pursuers are benefiting equally from the continued dynamic growth in search volumes. Fishkin adds that even if the giant from Mountain View emerges apparently unscathed from any data scandals, the fact that Amazon, Bing and the rest are successfully keeping pace with the market leader is the real key finding of the Jumpshot figures. This assessment also fits the observation that growth in mobile searches is not coming at the expense of traditional desktop searches: mobile expansion is additive, and desktop searches, still at a high level, have not lost their relevance.

2) Google wants to know what you bought last summer

In the growing segment of transactional shopping searches, Google's market power is built on sand. Although the Mountain View company has successfully established Google Shopping as a brokering platform, its vision of controlling the entire value chain, including the payment platform, has remained a pipe dream. Or to put it more precisely: Google knows what people search for, but only Amazon knows what millions of people actually buy.

This is about to change. With a feature launched in the USA called "Google Shopping Actions", a buy option can be displayed directly in Google search results for products from participating retailers. The feature is intended for retailers that want to sell their products via Google search, the Google Express local delivery service, and the Google Assistant on smartphones and voice-assistant devices. Instead of having to detour to selling platforms such as Amazon, users will in future be able to buy products directly through Google. Google says that Shopping Actions will make buying simpler and more centralised: a central shopping basket and a payment process tied to the user's Google account are intended to make the shopping experience easy and secure for users of the search engine. In addition to traditional search via the Google search field, it will also be possible to make purchases by voice input, enabling the company to remain competitive in the age of voice assistants.

The other side of the coin, of course, is that a direct shopping function also allows data of an entirely new quality to be collected in Mountain View and attributed to individual users.

3) Alexa and the age of unrefinement

"Mummy! Turn the living room light on now!" Any child that tries to get what it wants with these words will probably fail miserably. It is an unchanging feature of childhood that you learn to word a request to another person politely, as a question, and that the little word "please" is always – by some distance – the most important part of any expression of a wish.

But this iron certainty is at risk. And not because of a vague suspicion that children these days are no longer taught manners by their parents: what might prove a much stronger factor is that the highly digitised younger generation has at its command, even from a very early age, a whole arsenal of compliant, uncomplaining helpers and assistants that do not respond with hurt feelings or refusal when abruptly ordered to complete a task immediately.

In the American magazine "The Atlantic", author Ken Gordon examines the effects of this development on future generations. Although precise commands are a central component of controlling software, he states, it makes a huge difference whether they are silently conveyed to a system using a keyboard or delivered to a humanised machine assistant by voice. The fact that Alexa, Cortana, Siri and the rest accept the lack of a "please" or "thank you" without complaint, Gordon goes on, could leave an emotional blind spot in young people. A voice command may be just a different type of programming, he concludes, but: "Vocalizing one's authority can be problematic, if done repeatedly and unreflectively." Still, it is too early to predict how our interactions with each other will change once artificial intelligence and robots become fixed parts of our families, our work teams, and ultimately our society.