Posts

These days, it’s hard to shake the feeling that everything is changing. Unfortunately, we cannot provide much more stability – because things are about to change. This edition of SEO News for the month of October asks whether the Internet as we know it will still exist in ten years, and explores what Google has planned for the next 20 years.

1) The Brave New World of Google

Major birthdays are a welcome occasion to take stock and look ahead. It’s no different for companies and institutions. The search engine Google is currently celebrating its 20th anniversary. Consequently, the Head of Search, Ben Gomes, who was promoted just a few months ago, has attempted to construct a grand narrative in the form of a blog post. Gomes’ story begins with his childhood in India, when his only access to information was a public library, a remnant of Britain’s long-vanished colonial power, and finishes with the modern search engine. Gomes suggests that personalisation, automation and relevance are the cornerstones of a quality product that, according to him, still follows the original vision: “To organize the world’s information and make it universally accessible and useful”. But is this goal being sacrificed globally on the altar of proportionality? SEO news will take up this question again below, with regard to the double standards in dealing with China.

An interesting issue for everyday SEO work, however, is a paradigm shift which Gomes believes will be groundbreaking for Google over the next 20 years. The Head of Search confirms the vision of an invisible and omnipresent information, solutions and convenience machine. According to Google, the transformation into this ubiquitous service is to be driven by three fundamental processes of change. First, it’s about even stronger personalisation. At this level, Google wants to evolve from a situation-dependent provider of answers into a constant companion. According to Gomes, users’ recurring information deficits and ongoing research projects will be recognised, taken up and handled. This is to be achieved, above all, by a restructuring of the user experience on the Google results page. All sorts of personalised elements will be found here in the near future to help users make their journey through the infinite information universe more efficient. In this process, the user not only gets to know themselves; more importantly – it goes without saying – the search engine gets to know the user.

But before any criticisms can arise, we move swiftly on to the second paradigm shift: The answer before the question.

Google has set out to identify and prepare information relevant to the individual user, even before they have formulated a search query at all. The key element here is technological. Following “Artificial Intelligence” and “Deep Learning”, a technique called “Neural Matching” should be especially helpful: It links the representation expressed by text, language or image with the higher-level object or concept. This continues the established concepts of semantic search and entities with new technological means, and is exceptionally consistent from a business perspective.

The third pillar of the change should be a greater openness to visual information in the search systems. The visual search has great potential for users and advertisers, as we have already discussed several times before. Google is taking immediate action, introducing a complete overhaul of its image search, as well as the integration of its AI-driven image recognition technology “Lens” into the new generation of in-house “Pixel” smartphones. The interesting thing about Google’s anniversary publication is what it doesn’t mention: The voice assistant Google Home. This is a good sign that, despite all market pressures, Google is neither distancing itself from its technological DNA nor allowing itself to be pushed into a competition with the voice market leader Amazon. Contrary to the publicised hype, voice software is yet to create a huge stir in the search world.

2) The end of the networked world

Oh, how everything is connected: The individual, the world, technology and democracy. More and more aspects of our existence are digitised or transmitted via digital channels. In this process, everything interacts with everything else. The well-known tech companies are acting as the pacesetters of this upheaval with their platforms. It may not be too long before Facebook, Amazon or Google establish themselves as the quasi-institutionalised cornerstones of our social and economic systems. Even today, the real creative power of these companies often exceeds the capabilities of existing state regulations. And search engines, as human-machine interfaces and mediation platforms, are at the centre of this development. Amazon, the most relevant shopping search engine, for example, is changing not only our personal consumption habits but also, through its radical transformation of the retail sector, the appearance of our cities and landscapes. The convenience for the consumer has resulted in empty shops in the inner cities and miles of faceless logistics loading bays in the provinces. Meanwhile, global populism has cleverly used social and informational search systems to accurately position and reinforce its messages. Facebook and Google have contributed at least partially to the sudden and massive political upheaval in one of the largest democracies in the world. Maintaining their self-image as pure technology companies, however, Google, Facebook and the like have so far persistently refused to accept responsibility for the consequences of their actions. Apart from public repentance and the vague announcement that they are looking for “technical solutions”, they have shown little openness to adapting their strategies to the intrinsic systemic dangers. So the interesting question is: do global technology companies have to represent those values of freedom and democracy that laid the foundation for their own rise and success in the US and Western Europe?
Or can companies such as Google or Facebook be flexible depending on the market situation, and utilise their technological advantage in dubious cases in the context of censorship and repression? Currently, the state of this debate can be seen in Google’s project “Dragonfly”. Because Mountain View refused to censor its products’ content, the global market leader has been locked out of the world’s largest and fastest-growing market. Since Google ceased all activities in China in 2010, the People’s Republic has managed just fine without it – and without competition for its own flagships Baidu, Tencent and Alibaba. According to consistent media reports, Google has been working for several months to restart its involvement in the Middle Kingdom, with the blessing of the government in Beijing. Under the working title “Dragonfly”, Google is reportedly planning to launch a Search app and a Maps app. Developed in close cooperation with the Chinese authorities, and subject to state control and censorship, these apps are expected to pave the way for future, widespread activities for Mountain View in the People’s Republic. It just goes to show that Google is prepared to play the game, if the price is right. This approach can be seen as pragmatically and economically motivated – particularly in light of the fact that the Chinese authorities recently granted Google’s competitor Facebook approval for a local subsidiary, only to withdraw it after a single day. Rampant discord in the West and cooperative subordination in Asia: former Google CEO Eric Schmidt outlined the consequences of this double standard a few days ago in San Francisco. Schmidt told US news channel CNBC that he expects the Internet to divide over the next decade. He predicts a split into a Chinese-dominated and a US-dominated Internet by 2028 at the latest. Apparently, Silicon Valley has already given up on the vision of a global and open network for the world.
However, the consequences of this development will be felt by every individual.

At last, summer is here. But artificial intelligence doesn’t take summer off, so it can be the ideal babysitter in the car, especially when stuck in a traffic jam. That is, as long as the voice assistant actually has something to say. That’s what our SEO News for the month of August is all about. And of course, we can’t avoid the notorious silly-season monster.

1) Speaking notes for Google Home

Dialogue with machines is still a hot topic. Last month, we reported on Google Assistant’s automated voice commands. Now, Mountain View is substantially simplifying the world of voice assistants, which is ideal for those content publishers who are trying to get started in this area. “Speakable” is Google’s first semantic markup that identifies text passages for voice output. The company states that the markup is provided through the industry initiative “Schema.org” and is still in the beta phase. With “Speakable”, news publishers and other content providers can mark up short, speech-optimised sections within an article or webpage, so they can be used directly by the Google Assistant. Google advises that the text should be a maximum of two to three sentences long, similar to a teaser. This way, the assistant’s speech output has a talk time of 20 to 30 seconds. For optimal use, the content of the text should present the topic informatively and in short sentences. Google also suggests that headlines can be used. Content selection must ensure that technical information, such as captions, dates or source references, does not interfere with the user experience. In the age of artificial intelligence, the optimised use of markups is becoming increasingly important for search engine optimisers, especially as the number of delivery platforms is also increasing. The standardisation of supplemental information in the source text enables all systems involved in selecting and displaying the search results to reliably collect and optimally process the data. The “Speakable” feature will initially only be available for the English language in the US market. However, Google has stated that it plans to launch in other markets, under the condition that “a sufficient number of publishers implement Speakable”. So the SEO industry will certainly have its work cut out.
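Based on the published Schema.org vocabulary, such an annotation can be embedded as JSON-LD; here is a minimal sketch (the URL, headline and CSS selectors are placeholders, not taken from Google’s documentation):

```json
{
  "@context": "https://schema.org/",
  "@type": "Article",
  "headline": "Example article headline",
  "url": "https://www.example.com/article",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": ["#article-teaser", ".voice-summary"]
  }
}
```

The `cssSelector` property (or, alternatively, `xpath`) points the assistant at the two-to-three-sentence passages described above.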

2) More opportunities for Amazon Alexa

When it comes to the future of digital searches, the focus is slowly shifting from analysing requests and intentions to reflecting on answers and output systems. The key challenge for successful human-machine communication, alternating between interactive displays, augmented reality and voice assistants, will be to provide the best possible result for each channel. Is there one answer, multiple answers or does the initial question then lead to a conversation between the search system and the searcher? In principle, the process is the same with a virtual assistant as it would be with a physical advisor: Do you want a quick result or a full sales pitch? Do you want to be left alone to browse quietly or do you need the help of a sales assistant? Is a brief answer enough or do you want to break down your query into more specific stages until you get the right result? The American company “Yext” has now announced a collaboration with Amazon that enables local businesses to feed NAP data (name, address and phone number) as well as opening hours directly to the voice assistant Alexa. The New York-based company told journalists that it plans to further integrate its interface with Amazon Alexa in the future. Product data and catalogues may be included at a later stage, but this has yet to be decided. The automation of the data exchange between the owners of digital offers and search systems is already a key component of success in modern digital retail. The goal of this is to create an optimal user experience at the point of issue, as well as valid measures of success. Providing and optimising data feeds is key for optimal functionality of Google’s PLA (Product Listing Ads) or the use of price search engines and affiliate networks. In the world of Amazon, the necessary interfaces and tools are only gradually being created.
And when it comes to profiting from the growth of digital voice assistants, that’s exactly where the greatest potential currently lies.

3) An SEO Rabbit goes on a SERP Rampage

Do you still remember Lotti the snapping turtle, Yvonne the elusive cow, or Sammy the caiman? Fortunately, there are search engines that give us the opportunity to relive the stories of these adventurous, silly-season animals. And even years later, we are still captivated by them during the summer-holiday slump. In this latest ‘animal’ news sensation, it was a virtual rabbit that brought the world’s largest search engine Google to its knees. The story was published under the headline “Rabbit Bug” by the Spanish programming collective “La SEOMafia”. According to their information, a table was inserted into the source code of a website in order to deliberately manipulate Google’s SERPs. The bug was based on the fact that Google cannot interpret this formatting when displaying the search result, leading to the sudden termination of the search result display after the manipulated entry. This bug was implemented on a top-ranking site for the keyword “conejos” (Spanish for rabbits), with the result that only one, manipulated, search hit was displayed. It is easy to imagine the incredible click rates that could be achieved by using this strategy. It’s always a pleasure to see some creative spirits shake things up in the now mature and grown-up world of the SEO industry. Eventually, even Google’s SEO liaison officer John Mueller became aware of the Rabbit Bug and reported on Twitter with a wink that he had circulated the sighting of the rabbit in-house. The rabbit is now threatened with the fate of all silly-season animals. In the end, most were captured or killed.

Summer is finally here and the nights are long, which gives us plenty of time to think about the fundamental questions of life. That’s why the July issue of SEO News examines not just the forthcoming Google updates, but also the cognition game show and the future pecking order on our planet.

1) Achieve good rankings quickly and securely

Once again, Google is focusing on the convenience and security of Internet users. The company (which in its own words aims to do no evil) is launching not one, but two updates in July – whose effects will benefit Internet users and website operators alike. Both of these changes were announced a long time ago and have already been partially implemented. The first change will see the loading speed of mobile websites become an official ranking factor. Loading speed is already listed as a quality criterion in Google’s top 10 basic rules for website quality; however, it has taken a very long time for it to become a genuine ranking factor. The change was originally motivated by studies showing that slow-loading websites suffered direct impacts on their clickthrough and conversion rates, and the speed argument was also repeated like a mantra by Google representatives at various search conferences during the 2018 season. The subsequent introduction of the Mobile First Index (see our report here) means that the rule has now been made official for mobile sites too. Google recommends that website operators analyse their domains using Google’s own “PageSpeed Insights” and “Lighthouse” tools and make the necessary changes for mobile websites. Alongside its speed update, Google is also getting serious in July with its announcement that websites which are not converted to the encrypted HTTPS protocol before the deadline will be marked as “not secure” in Chrome. This change also marks the end point of a campaign that was launched over two years ago in 2016, when Google began its awareness-raising work with a small ranking boost for secure websites. Google has described that measure as a success, with the company stating that around 68 per cent of all Chrome traffic on Android and Windows now occurs over HTTPS – and there is plenty of scope for that percentage to grow.
The fact that Google is leveraging its market power to implement technical standards with the aim of improving the user experience is a step in the right direction. Many companies were only prepared to invest in faster technology or security certificates when threatened with reductions in traffic or sales. In order to prepare for future developments, it is advisable to keep an eye on new technologies such as AMP (Accelerated Mobile Pages), mobile checkout processes, and pre-rendering frameworks that allow content to be pre-loaded. These innovations can help you keep pace, especially when it comes to improving users’ perception of loading times on all platforms.
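For the HTTPS deadline, the usual first step is a site-wide 301 redirect from HTTP to HTTPS. As a minimal sketch, an nginx configuration could look like this (domain and certificate paths are placeholders):

```nginx
# Redirect all plain-HTTP requests to their HTTPS equivalent
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

# Serve the site itself over TLS only
server {
    listen 443 ssl;
    server_name example.com www.example.com;
    ssl_certificate     /etc/ssl/certs/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/private/example.com.key; # placeholder path
    # ... remaining site configuration ...
}
```

With a redirect like this in place, Chrome users never see the “not secure” label for plain-HTTP entry points, and the ranking signal for secure websites applies to the whole domain.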

2) Life is one big game show

This bit will be tricky for those of you who didn’t pay attention in maths. Remember that moment back at school, somewhere between integral calculus and stochastic processes, when you belatedly realised that you’d completely lost the plot? Well, in the age of algorithms that will come back to haunt you – especially if you work in online marketing. In everyday terms, an algorithm is nothing more than a carefully ordered chain of decisions designed to solve a problem in a structured way. The crucial innovation in recent years is the advent of artificial intelligence and machine learning. Nowadays, the individual links in the algorithmic chain are no longer assembled by people, but by programs. When you ask a search engine a question, the information is taken in, its core information object (the entity) and intention are identified by means of semantic analysis, and the most empirically appropriate result (the ranking) is returned in the correct context (e.g. local and mobile). However, a group of seven Google engineers presented a research project at the ICLR Conference in Vancouver that turns the question/answer principle on its head. For their project, the researchers used tasks taken from the popular US game show “Jeopardy”. On this show (first aired in 1964), contestants are required to provide the right questions in response to complex answers. In their study, the Google engineers exploited the fact that Jeopardy tasks involve information deficits and uncertainties that can only be resolved by formulating the right question. In other words, the question needs to be adapted until the information provided in the answer makes sense in its specific combination and context. The human brain performs this task in a matter of seconds, and is able to draw upon a comprehensive range of intellectual and social resources as it does so. 
However, if you ask a Jeopardy question (such as “Like the Bible, this Islamic scripture was banned in the Soviet Union between 1926 and 1956”) to a search engine, you will not receive an appropriate answer. Google returns a Wikipedia article about the Soviet Union, meaning that it interprets this search term as an entity or a core information object, and thus falls short. Microsoft’s search engine Bing comes a little closer to the obvious answer from a human perspective (“What is the Koran?”), but is likewise unable to deliver a satisfactory result. This little trick involving Jeopardy questions makes clear what the biggest problem is for search engines (even though it is marketed as one of the main markers of quality for modern search systems): how to accurately recognise the intention behind each search query. The idea is that what SEO professionals in companies and agencies currently work hard to develop should be reliably automated by the search engines themselves. In order to achieve this, the Google researchers developed a machine-learning system that reformulates possible answers to the Jeopardy question into many different versions before passing these on to the core algorithm itself. In step two, the answers obtained are then aggregated and reconciled with the initial questions. The results are only presented to the user once these two intermediate steps are complete. The self-learning algorithm then receives feedback on whether its answer was right or wrong. The AI system was subsequently trained using this method and with the help of a large data set. As a result of this training, the system learned how to independently GENERATE complex questions in response to familiar answers. This milestone goes far beyond simply UNDERSTANDING search queries, which are growing increasingly complex under the influence of voice and visual search. 
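The two intermediate steps – reformulating the question many times, then aggregating the candidate answers – can be illustrated with a deliberately simplified sketch. All function names, the paraphrase rules and the miniature “knowledge base” below are invented for illustration; the real system is a trained neural model, not a lookup table:

```python
from collections import Counter

def reformulate(question):
    """Toy stand-in for the learned reformulation model: emit several
    paraphrased variants of the input question. (The real system
    learns these rewrites with reinforcement learning.)"""
    return [
        question,
        question.lower(),
        question.replace("scripture", "holy book"),
    ]

def core_answerer(question):
    """Toy stand-in for the core search/QA system: a lookup that only
    fires on one phrase, and otherwise falls back to a superficially
    matching entity."""
    if "islamic scripture" in question.lower():
        return "the Koran"
    return "the Soviet Union"  # plausible but wrong entity match

def answer(question):
    """Ask every reformulation, then aggregate the candidate answers
    by majority vote, mirroring the two intermediate steps above."""
    candidates = [core_answerer(q) for q in reformulate(question)]
    best, _ = Counter(candidates).most_common(1)[0]
    return best

print(answer("Like the Bible, this Islamic scripture was banned "
             "in the Soviet Union between 1926 and 1956"))  # prints: the Koran
```

Even in this toy version, the majority vote recovers the right answer although one reformulation still triggers the wrong entity match – which is exactly the robustness the aggregation step is meant to buy.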
Although this study was carried out by Google, we can assume that Microsoft, Yandex and Baidu are also working on equivalent technologies designed to further automate the recognition of search terms and to automatically generate complex, personalised content in the not-too-distant future. At present, however, it is impossible to gauge what effects this might have on the diversity and transparency of the Internet.

3) Google Assistant sets the tone

While we’re on the subject of automatic content generation, we also have an update on Google’s uncanny presentation of two phone calls between the Google Assistant and unsuspecting service workers. Back in May, the search engine giant from Mountain View presented a video at its “IO” developer conference in which an AI extension to the Google Assistant named “Duplex” booked an appointment at a hairdresser’s and a table in a restaurant entirely on its own, all while perfectly imitating human speech. The human participants in those conversations were apparently unable to recognise that they were interacting with a machine while they went about their work. Close collaboration with robots and AI systems has long been familiar to industrial workers in the Western world, but now this development is also moving into the service economy, and therefore into our day-to-day lives. At first glance, the Google scenario was astonishing and convincing; however, the unnerving initial impression was swiftly followed by a number of pressing questions. In particular, the fact that Duplex failed to identify itself as a machine to its human conversation partners was the subject of considerable debate. Google has since responded and published a new video in which the Google Assistant identifies itself at the start of the conversation and states that the call will be recorded for quality control purposes – similar to a recorded message in a call centre. Taking a more detached view, however, one wonders whether this responsiveness on the part of the artificial intelligence is actually completely superfluous. The restaurant employee in the video follows the Google Assistant’s instructions obediently, as if he were talking to a human being – there is no difference whatsoever. In search marketing, we attempt to further our own interests by reflecting the intentions of target groups and consumers in the content produced by search engines (the results pages).
In voice search, we issue commands to a machine – and a number of years will pass before we learn how that will change us. And in Google’s future scenario of an invisible, omnipresent and convenient system that allows users to organise themselves and solve problems, the human simultaneously becomes both the subject and the object of the technology. Our data was used to create, feed and train the system, and so we may briefly feel ourselves to be its masters; however, given the current state of affairs, we can and should seriously question whether we will recognise the point of no return once the balance finally tips.

If you think this June issue of SEO News will only be about the impact of Google’s Mobile Index, think again. We prefer to wait a bit on that. As the summer begins, we are therefore focusing on the return of a powerful tool, the prerequisites for good SEO work, and an industry in the throes of fake news.

1) The return of Google image search

The image bubble has burst. After a long legal dispute with the image agency Getty Images, Google decided to make some changes to its popular image search function. How positively these changes have affected website operators can be seen from a survey by the US search expert Anthony Mueller. But let’s start from the beginning. In January 2013, Google changed the way its image search function worked so that every user could directly view and download found images. A key aspect of this was that the files were buffered on the servers of the search engine, where users could access them with the ‘View Image’ button. As a consequence, clicks on the sites of content providers and rights holders nearly vanished, and organic traffic from image searches plummeted by more than 70 per cent in some cases. This development was especially perilous for websites that rely on visual impact for inspiration, such as fashion or furniture merchants, and had put a lot of effort into optimising their image content. Particularly for e-commerce operators, this collapse in traffic also meant a collapse in turnover. Three years later, the renowned Getty Images agency submitted a competition complaint to the European Commission, apparently hoping that ‘Old Europe’ would again set things right. Getty’s efforts were rewarded, with the result that the ‘View Image’ button disappeared from Google image search in early 2018. Interested users now have to visit the original sites to access the original files. That prompted Mueller, the well-connected search expert, to ask some 60 large enterprises worldwide whether, nearly six months after the change, they had seen any impact on their website traffic. The result: on average, visits from Google image search have risen by 37 per cent. Although the figures for impressions and ranking positions in image search have remained relatively stable, click-throughs have risen dramatically for all of the surveyed enterprises.
The survey also indicates that conversions from image searches have grown by about 10 per cent. Of course, savvy users can still switch to other search engines, such as Microsoft’s Bing or DuckDuckGo. Those two search engines never removed direct access to image files. However, due to Google’s market power, this is exactly the right time to give new priority to the optimisation of image content and exploit the new growth potential, according to the author. At present, text search is still the dominant method for acquiring information. However, there are signs of a paradigm shift towards visual search, particularly in retail.

2) Getting better results with smart SEO goals

Thanks to the Internet, contacts and advertising impact are now more measurable than ever before. Although the digital revolution in advertising is no longer in its infancy, it has by no means reached the end of its evolution. With digital campaigns, it is easy to define suitable key figures to measure impact and effectiveness, and it is not technically difficult to obtain the corresponding campaign data. However, defining goals for search engine optimisation is not so easy. For example, Google stopped offering keyword-level performance data for organic searches many years ago. Marketing managers and SEO experts are therefore repeatedly confronted with the challenge of developing an SEO KPI concept that visualises optimisation results and, above all, gets the company’s budget controller onside for professional SEO work. For this reason, search guru Rand Fishkin has put together some rules for formulating the goals of SEO activities, which are interesting to advertisers and enterprises alike. According to Fishkin, the main rule is that the business goals must form the basis for the SEO concept. The next step is to break down these higher-level expectations, which are usually financial, into marketing goals – for example, by defining requirements for various communication channels along the customer journey. The actual SEO goals come into view only after this point, and they can be mapped out in the last step using just six metrics. These KPIs are: ranking positions; visitors from organic searches (divided into brand searches and generic search objectives); enterprise representation with multiple hits on the results page for a search term; search volume; link quality and quantity; and direct traffic from link referrals. Fishkin checks his concept against two different example customers. For example, a pure online mail-order shoe seller has a fairly simple business goal: boosting turnover by 30 per cent in the core target group.
In Fishkin’s view, the next step is to specify in the marketing plan that this growth will be generated by a high probability of conversions at the end of the customer journey. From that you can derive an SEO goal of 70 per cent growth in organic traffic. In order to achieve this goal, you then adopt and carry out implementable SEO measures. For the contrasting scenario of local SEO without reference to e-commerce, Fishkin’s example is a theatre that wants to draw more visitors from the surrounding area. In this case, the regions where the target audience should be addressed are defined in the marketing plan. The SEO plan then consists of setting up local landing pages, utilising theatre reviews and blogs, and other content-related and locally driven measures. The advantage of this sort of top-down approach is the alignment of individual SEO measures, which are often difficult to grasp, with the overall aims of the organisation. According to Fishkin, the rewards are higher esteem and faster implementation of the laborious SEO work.
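The top-down logic of that breakdown can be made concrete as a tiny data structure. The field names, targets and “monitor” placeholders below are our own illustration based on the shoe-shop example, not Fishkin’s notation:

```python
from dataclasses import dataclass, field

@dataclass
class SeoGoalPlan:
    """Top-down plan: business goal -> marketing goal -> SEO goal -> KPIs."""
    business_goal: str
    marketing_goal: str
    seo_goal: str
    kpis: dict = field(default_factory=dict)  # Fishkin's six SEO metrics

shoe_shop = SeoGoalPlan(
    business_goal="Boost turnover by 30% in the core target group",
    marketing_goal="Generate growth through high-probability conversions "
                   "late in the customer journey",
    seo_goal="Grow organic traffic by 70%",
    kpis={
        "ranking_positions": "top 3 for core commercial terms",
        "organic_visits": "+70% (brand and generic split out)",
        "serp_representation": "several hits per core search term",
        "search_volume": "monitor demand for brand terms",
        "link_quality_and_quantity": "monitor referring domains",
        "direct_link_referral_traffic": "monitor",
    },
)

print(len(shoe_shop.kpis))  # prints: 6, one entry per metric
```

Writing the plan down in this form makes the dependency explicit: every one of the six KPIs must trace back up through the SEO goal to a marketing goal and ultimately to the business goal.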

3) Fake news threatens the existence of the SEO industry

Did you get a shock when you read this heading? That’s exactly what we wanted, in order to get your attention. Of course, you rarely see such highly charged headings on SEO blogs, but competition in the IT sector does not spare the search industry. Every year we hear that SEO is dead, yet supply and demand for optimisation services have been growing steadily for more than 15 years. A large part of that is doubtless due to the intensive PR activities of the parties concerned. Starting as the hobby of a few individuals, over the course of time search engine optimisation has developed into specialised agencies and migrated to the in-house teams of enterprises. Along the way there has been continual testing, experimentation and comparison, SEO expertise has been constantly expanded, and above all a lot has been written about it. SEO blogs therefore serve on the one hand as an inexhaustible source of information – a sort of global treasury of SEO experience, forming the basis for success. On the other hand, posts on search topics are also a form of self-advertising and customer acquisition for service providers and agencies. John Mueller, the well-known Senior Webmaster Trends Analyst at Google, has now criticised some SEO blogs. He claims that some of them use posts as clickbait. It all started with a report on an alleged bug in an SEO plugin for WordPress. In the course of the discussion about the tool, information was presented in abridged form on some SEO sites, and important statements made by John Mueller on behalf of Google were not passed on. He is now saying that posts should pay attention to all aspects of complex search topics. What matters is to create long-term value with balanced reporting. People should resist the temptation to go for quick clicks. According to Mueller, the goal should be to convey knowledge. It is clear that even the search scene cannot evade the grasp of the digital attention economy.
It looks like speed has become a goal in itself, and it is assumed that online readers no longer have time to pay attention to the details. In this way our own methods endanger the industry’s collective wealth of experience. In an increasingly complex search world, it is particularly important to not lose sight of the details, and we have to take the time for a thorough treatment of each topic. For example, the threat to the existence of our democracy from the SEO activities of Russian troll farms is a topic that still needs a thorough treatment.

Spring has finally sprung, driving even the most hard-nosed online marketeers outdoors to enjoy the sunshine. It’s a time when important trends and developments can easily be missed – and that’s why we’ve summarised the most important SEO news for May here. This time we will be looking at the development of the search market, Google’s assault on e-commerce, and possible negative impacts of language assistants on our behaviour.

1) The market for search engines is maturing

It’s once again back in fashion to question Google’s dominance in the search market. In the wake of the Facebook data protection scandal, many critics of the Google system are hoping that a somewhat larger portion of the online community is beginning to recognise that “free of charge” online doesn’t mean “without cost”, and that as a result, user numbers for the Mountain View search engine will no longer continue to grow. Some support for this assumption can be seen in the trend of many users preferring to start their shopping searches directly on Amazon – a competing company. And this presents a good reason to ask: is Google losing market share? Where are users actually doing their online searching?

A study by the American data specialists Jumpshot sheds some light on the matter. SEO veteran Rand Fishkin interpreted their analysis of US clickstream data – i.e. referrer data at server level and anonymised click logs from web applications – from 2015 to 2018, with surprising results. Contrary to the presumed trend, the number of searches on Amazon is in fact growing; however, because the total figure for all searches increased at the same time, Amazon’s market share consistently remained around 2.3% over the entire period analysed. A detailed look at the various Google services, such as the image search or Google Maps, reveals declining numbers of searches within these special services, due to technological and design changes. However, these searches are simply shifting to the universal Google web search. This means that the company from Mountain View has been successful in integrating a range of services for users on mobile devices and desktops into its central search results page. Google’s market share therefore also increased by 1.5 percentage points between 2015 and 2018 to around 90%, leaving the competition miles behind. As with Amazon, the search shares for YouTube, Pinterest, Facebook and Twitter are almost unchanged.
Microsoft’s search engine Bing and Yahoo have not increased their market share despite a rise in searches. Fishkin’s conclusion is appropriately pragmatic: by 2018 the search industry had matured to the point where a handful of strong players were able to establish themselves successfully on the market. However, Google’s dominance will not be at risk for some years, as all of its pursuers are benefiting equally from the continued dynamic growth in search volumes, the SEO expert summarises. Fishkin adds that while the giant from Mountain View seems to emerge unscathed from any data scandal, the fact that Amazon, Bing and the rest manage to keep pace with the market leader is the real key finding behind the Jumpshot figures. This assessment is also in line with the observation that growth in mobile searches is not coming at the expense of traditional desktop searches: mobile usage is expanding as genuine additional growth, while desktop searches continue at a high level and have not lost relevance.

2) Google wants to know what you bought last summer

In the growing segment of transactional shopping searches, Google’s market power is built on sand. Although the Mountain View company has successfully established Google Shopping as a brokering platform, its vision of controlling the entire value chain, including the payment platform, has remained a pipe dream. Or to put the issue more precisely: Google knows what people are searching for, but only Amazon knows what millions of people actually buy. This is about to change. With a feature launched in the USA called ‘Google Shopping Actions’, a buy option can be displayed directly in the Google search results for products from participating retailers. The feature is intended for retailers that want to sell their products via Google search, the Google Express local delivery service, and the Google Assistant on smartphones as well as on voice-controlled speakers. Instead of having to detour via selling platforms such as Amazon, users will in future be able to buy products directly through Google. Google says that Google Shopping Actions will make buying simpler and more centralised: a central shopping basket and a payment process tied to the Google account mean that purchases will be processed easily and securely for users of the search engine. In addition to traditional search using the Google search field, it will also be possible to make purchases by speech input, enabling the company to remain competitive in the age of language assistants. The other side of the coin, of course, is that a direct shopping function also allows a new quality of data to be collected in Mountain View and attributed to individual users.

3) Alexa and the age of unrefinement

“Mummy! Turn the living room light on now!” Any child that tries to get what it wants using these words will probably fail miserably. It’s an unchanging component of childhood that you learn to politely word a request to another person as a question, and that that little word “please” is always – by a distance – the most important part of a statement of wish. But this iron certainty is at risk. And that’s not because of a vague suspicion that children these days are no longer taught manners by their parents: what might prove to be a much stronger factor is that the highly digitised younger generation have at their command – even from a very early age – a whole arsenal of compliant, uncomplaining helpers and assistants who do not respond with hurt feelings or refusal if given an abrupt command to complete a task immediately. In the American magazine ‘The Atlantic’, author Ken Gordon engages with the effects of this development on future generations. He states that although precise commands are a central component in controlling software, it makes a huge difference whether these are silently conveyed to a system using a keyboard, or delivered to a humanised machine assistant via speech commands. Gordon goes on to say that the fact that Alexa, Cortana, Siri, and so on accept the lack of a “Please” or “Thank you” without complaint could leave an emotional blind spot in young people. Finally, he concludes that although a speech command is just a different type of programming: “Vocalizing one’s authority can be problematic, if done repeatedly and unreflectively.” But it’s still too early to start predicting how our interaction with each other will change when artificial intelligence and robots become fixed parts of our family, work teams, and ultimately society.

It’s all about speed when it comes to online marketing. Therefore, in November, we are already looking at the new year and thinking about everything that will change in 2018. Will SEO be dead and gone and robots take over the world? It won’t be that bad, but there is a hint of truth behind this. You can find out more in the current SEO news.

1) Google launches its Mobile-First-Index (a little)

The launch of the Mobile-First-Index will be the dominating topic for SEOs in 2018. A year ago, the search engine based in Mountain View had already announced that in future it will use the mobile versions of websites, instead of the desktop versions, as the reference for content and rankings. However, it is not all going to happen on one specific day; the change will be quite gradual and accompanied by extensive tests, according to Google. Google spokesperson John Mueller has now announced that work has begun on converting the first websites to the mobile index in trial operation. It is still too early to talk about the official launch of regular operation; this is more of an initial testing phase. However, the changes in rankings that were observed by webmasters in the middle of October are not related to these tests, according to Mueller.

2) 2018 SEO expert oracle

A glimpse into the SEO crystal ball fascinates the search industry every year. Renowned experts have made their predictions for 2018 about the trends that will dominate the coming 12 months. They all agree that Google’s transition to the Mobile-First-Index, the rapidly increasing use of language assistants, and the triumph of artificial intelligence will bring about serious changes to the technological side of search marketing. Companies and webmasters should watch these changes closely. The fight for organic traffic will intensify quickly. Since Google increasingly acts as a publisher and already answers many queries itself on its search results pages using so-called Featured Snippets, the use of structured data, in-depth analysis of content and user behaviour, and a focus on a good user experience remain the most important areas of activity. Aaron Wall from SEO Book even speculated that Google’s dominance in the search sector will decline and that users will increasingly turn to specialised search systems. In summary, SEO expert John Lincoln neatly adapts an old classic: “The old SEO is dead and gone – welcome to a new era. It’s 100 times better and much more exciting.”

3) Microsoft and Google rely on human support

Barely a day goes by without something being written about the unstoppable spread of artificial intelligence and its effects on online marketing. Search giants Google and Microsoft rely heavily on learning machines. If you look closely, however, there is also an opposite trend: Microsoft’s search engine Bing announced back in August that it wants to rely more on collaboration with users in the “Bing Distill” community in order to improve the quality of its direct answers (we reported). At the start of October, Google invited its “Local Guides” community to its second conference in San Francisco. According to the company, this organised user community already has around fifty million participants worldwide, who primarily check and correct entries in Google Maps. In addition, almost 700,000 new entries are composed by local guides every day. Google says this is a great help, especially in developing countries, where information on local businesses and services is difficult to record and check automatically. It remains to be seen whether this trend will take hold, or whether humans are just a bridge technology until artificial intelligence has acquired the same skill set.

4) How artificial intelligence will change search engine optimisation

Search marketing faces great changes, and at the core it’s all about the effects of integrating artificial intelligence and machine learning into the technology of the major platforms. In terms of organic search, according to SEO veteran and expert Kristopher Jones, this means that keyword rankings will no longer be subject to dramatic changes and that there will be no single, universal algorithm. Instead, specialised and dynamic algorithms in a variety of versions will be used for different search requests. Ultimately, the search providers’ aim is to grasp the exact intention of the user with technological aids and so deliver better results, according to Jones. The search expert believes that classic keyword analysis and technical SEO will therefore become obsolete. In response to the challenges of artificial intelligence, Jones suggests a combination of user experience optimisation, strictly tailoring content to user intentions, and using more natural speech patterns for voice search. He went on to say that search engine optimisers will not be able to develop their own analysis tools based on artificial intelligence, and that agencies and advertisers will have to develop strong responses to the technological challenges in order not to be overwhelmed by this progress.

When Google announced the dawn of the Age of Assistance [1] last spring, it certainly didn’t underestimate all the changes it would trigger.

Interaction with various technologies and interfaces means that the way we search for information on the Internet is being turned completely on its head.
And along with it, the way we need to think about how we use SEO.

First, you no longer have (full) ownership of your content…

It’s been rumoured for a few years, and we’re going to have to come to terms with the fact: the web is now a series of platforms. This global phenomenon means that access to the digital audience is centralized, managed by a handful of players in Silicon Valley.


Google AMP, with examples of how the Washington Post and the New York Times are displayed

If we only consider access to information, it is primarily Google and Facebook that lay down the law. For commercial reasons – mainly audience retention and control of advertising space – each of them has deployed its own platform for hosting content: Instant Articles by Facebook [2], and AMP (Accelerated Mobile Pages) by Google [3]. If you want to reach a large audience, especially in the media sector, it has quickly become essential to consider distributing information on these platforms.

The benefit for the reader is obvious. With these technologies, users can access and consume information more quickly. With Facebook for example, you don’t need to wait for a web browser to open up, the articles are readily available in the Facebook app on your smartphone. It’s the same with Google, AMP is an “accelerated” platform, a no-frills setup that delivers an optimal mobile experience.

However, when you post an article on AMP or Instant Articles, you abandon your website and depend solely on the environment that the GAFAs agree to provide for you. While it is still possible for your brand to emerge, a great many web habits have begun to disappear: secondary navigation, pagination, and of course advertising design. Content is stripped right down to its bare bones.

This can be a good thing, in that the news, content or function displayed is exactly what the user was looking for. But it has a huge impact on how the information is presented, and especially how the internet has learned to capitalize on its readership over the last 20 years.

With AMP, it’s no longer the page, but its content… that generates satisfaction, viewing habits, even repeat behavior.

And if you’re not running a media site?

Rest assured, Google hasn’t forgotten you either. The search engine is developing and distributing a Progressive Web App format [4] that will eventually become the AMP for eCommerce and transactional platforms. In the same way that press articles are fast-tracked by smartphones, transaction forms – flight check-in, quote requests, etc. – will be fast-tracked for an improved mobile experience. The outlook looks promising in terms of the user experience [5], but the impacts for the industries that will use this format remain to be seen.

…by the way, you no longer (really) need to try to rank web pages…

AMP and Instant Articles have already changed how content is perceived on the Internet, and have begun to separate it from its traditional base: the web page. Featured snippets (SEO experts refer to these results as being in position zero) in Google results [6] are also having a massive impact on the way SEO is approached.

Basically, a featured snippet is a ready-made answer, generated by Google, to a question from the user.

It takes the form of a paragraph of boxed text, sometimes even with illustrations, that is presented above the usual organic search results. And that’s why it’s called position zero.


A Google featured snippet, generated for a search into… featured snippets…

The format appears mainly when the user asks questions – full sentences in interrogative form – but also when requests are understood as searches for information about processes or concepts: anything that requires more explanation than transactional search results.

How does Google identify the information that will appear in a featured snippet? It first evaluates the relevance of a page on the subject requested and then extracts the paragraph or paragraphs it considers to be the most explicit.
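The selection step can be illustrated with a deliberately naive sketch: score each paragraph of a page by how many of the question’s terms it contains, and return the highest-scoring one. The function and scoring below are illustrative assumptions only; Google’s actual extraction logic is not public and is far more sophisticated.

```python
# Toy sketch of featured-snippet extraction: score each paragraph of a
# page by term overlap with the question and return the best match.
# Illustration only; Google's real selection logic is not public.

def best_paragraph(question: str, paragraphs: list[str]) -> str:
    q_terms = set(question.lower().split())

    def score(paragraph: str) -> int:
        # Count how many question terms appear in the paragraph.
        return len(q_terms & set(paragraph.lower().split()))

    return max(paragraphs, key=score)

page = [
    "Featured snippets are selected search results that appear in a box.",
    "Our agency was founded in 2001 and has offices in three cities.",
]
print(best_paragraph("what is a featured snippet", page))
```

Even this crude overlap measure picks the definitional paragraph over the self-promotional one, which is the intuition behind writing self-contained, question-answering paragraphs.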

What matters then to the engines is no longer just the relevance of a page on a specific keyword, but how part of this page can answer a concrete question.

The evolution of searches towards featured snippets means that content creators have to stop thinking only in terms of web pages, and start thinking in text units – paragraphs, lists, processes – and how they can work on their content so that it is presented as a response rather than raw information. This is probably going to change a lot of editorial style guides.

The impact of featured snippets goes hand-in-hand with the deployment of AMP technologies. If Google is able to find, and therefore provide the user with, a suitable response in one paragraph, there is no longer any benefit in it driving traffic to a website. The featured snippet can potentially be enough for the user. Finding a web page isn’t the user’s priority any more!

…so you no longer (really) need to target keywords…

Where is all this going? The two revolutions we’ve covered so far only involve the display and processing of web data. They do not really affect how the user interacts with search engines. And yet, the biggest revolution in progress is coming straight from the users themselves.

By relying more and more on their smartphones (people now pull out their devices more than 150 times a day [7]), users are favouring the microphone and moving away from the keyboard. The emergence of featured snippets in Google is a direct consequence of this change in behavior [8].

Last spring, nearly 20% of queries made using the Google app in the US were voice queries [9].

And these figures are set to rise. Many queries are now only related to smartphones: to look for directions, call a contact or play a track.

Infographic: how is voice search used by teenagers and adults in the USA?

But other uses are emerging, such as requests for homework assistance from teenagers (31%) or queries about movie showtimes from adults (9%) [10]. These are searches for basic information.

How are Internet users’ voice queries formulated? It’s quite simple: they are spoken. When we make a voice query, we no longer rely on a keyword; we ask a real question. Advertisements for voice assistants – like Apple’s Siri – have encouraged this behavior.

When questions are fully formulated, they have the advantage of being able to concentrate on a point of detail about a person (age, place of birth, role for an actor, etc.) or a retail outlet (location, opening hours, etc.). And that’s exactly what featured snippets are designed to do: they provide a precise answer to a specific question [11].

So what does that change in how websites are to be designed?
This changes what kind of information is displayed: we’re no longer trying to position results based on a request, but to answer a question.

It’s no longer about trying to display as much information about Angelina Jolie as possible, but about convincing Google that you are the best source of information to give her age. This involves a little technical skill – microformatting, information management – but above all it means thinking about content in a different way: separating it into basic information blocks based on the user’s questions, rather than long encyclopaedic articles.
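One concrete way of offering such a basic information block to the engines is schema.org structured data. The sketch below builds a JSON-LD `Person` snippet around the single fact in question; the property names follow the public schema.org vocabulary, but whether any given markup is actually picked up by a search engine is entirely up to Google.

```python
import json

# Sketch: schema.org "Person" markup expressed as JSON-LD, embedding the
# single fact (a birth date) that a question-style query is after.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Angelina Jolie",
    "birthDate": "1975-06-04",
}

# The markup is typically embedded in the page head as a script tag.
script_tag = '<script type="application/ld+json">%s</script>' % json.dumps(person)
print(script_tag)
```

The point of the exercise is the granularity: one entity, one machine-readable fact, rather than a long article the engine has to mine.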

Questions, unlike queries, require quick and simple answers. And so content needs to be quick and simple too.

…anyway, soon you’ll no longer be displaying any text at all…

The next revolution will be simplicity. Can you see where we’re going with this?
The natural partner for voice queries is of course voice responses. Voice assistants – Amazon Echo, Google Home and Apple HomePod – mark the latest development in how we search for information on the web.


Google Home, Google’s voice assistant is of course the future of Search

Amazon Echo, the most popular voice assistant, was installed in nearly 9 million American homes last spring [12].

These terminals have no screens – even though Amazon has been testing new versions of its device [13] – so answers to users’ questions are purely vocal and should leave no doubt or room for interpretation.

Because the main problem with searches, and especially answers, lies in how they are interpreted, and in the doubt that can be generated by the absence of visual support or fallback options. On a computer screen, if the first result of a query isn’t what you want, you can always click on the next one. And if you do not fully understand a featured snippet, it often comes with a visual to illustrate the subject, or a link to go into more detail.

This reassurance, or redirection, is crucial in the user experience; it encourages them to go deeper into a search, to check back, to explore further.

In voice queries, answers don’t offer any further options. You can’t ask for another result or confirm the first outcome with an illustration.

In voice queries, “I did not understand” doesn’t exist.
In fact, anything that could be confusing about the answer is eliminated.

First, the source of the generated information[14] can have a tremendous impact on its meaning. Users must make do with what they have, and assume that if they have chosen an assistant produced by the Amazon brand, they agree to see the world, or at least part of it, through the eyes and sources that are available from Amazon.

But there is also the way content providers formulate their answers. The spoken media is not the written media – print and radio reporters are well aware of that – and being convincing on a Google Home device is not the same as being reassuring from the search engine’s homepage.

The answers provided by brands will have to evolve towards facts rather than elements of communication or “projection”…

Are you still doing SEO?
Yes, but not really in the same way as before…

SEO has been written off so many times that it will probably survive many a revolution yet. Appearing on a search results page or being quoted by a voice assistant will always require a minimum of information structuring and technical expertise, a minimum of thought, and the ability to work with algorithms.

Having said that, voice searches, and especially the emergence of new ways of consulting the web, are sure to drastically change the way we think about optimizing access to information. First, because the page as the unit of measurement of the web will soon disappear. Social networks have already eliminated the need for a website [15].

The web page is still the basic unit of SEO optimization. We use web pages to think about how we segment content, tree structures, semantic silos, and so on. Decoupling content from web pages will require most information specialists to take a fresh look at their raw material – texts and semantic notions – and to work with this fluid material without necessarily tying it to permanent, structured containers.

The death of the web page will force site managers to think in terms of information flows, and no longer only in terms of containers.

A great many content managers and community managers have been getting into that habit over the last few years. They move between broadcast channels – Facebook, Twitter, YouTube, etc. – depending on the targets and objectives of their content, and have learned to handle information flows and repurpose formats. They have also learned how to adjust content and change its structure depending on its objectives and where it is published.

In some ways, community managers have also learned to speak to algorithms before addressing human beings.

And that goes to show that businesses and expertise are getting closer, coming together, and merging.

After all, if conversational interfaces are the search engines of tomorrow, might community managers become the SEOs of tomorrow?

Inspirations

– One album: Us (Peter Gabriel – 1992), for the opening track Come Talk to Me 🙂
– One book: The Library of Babel (Jorge Luis Borges – 1941)

Sources

English translation by Ruth Simpson

Why the approach for search-engine advertising in future will focus more on context

The days when search-engine advertising (SEA) relied chiefly on category accounts and generic campaigns with thousands of ad groups and keywords are coming to an end. Nowadays, excessive keywording is actually proving to be somewhat counter-productive: you could be extolling the virtues of a small Kölsch beer to a fan of Munich’s famous stein – a complete waste of time. The main reason for turning our backs on absolute keyword dominance is that Google has continually expanded its keyword options over the years. As a result, the context in which the keyword appears is becoming ever more relevant.

Originally, when planning SEA campaigns, care was taken to choose keywords that were as precise as possible, not least because of the strict policy implemented by Google. Over time, however, Google has become increasingly flexible with the scope of its “exact match” and “phrase match” keyword options. For some time now, to ensure that potential customers do not fall through the cracks, misspelt keywords, regional language variations (e.g. tram instead of trolley) or abbreviations have also returned a hit list. The aim of this measure was not only to generate more clicks and thus more money for Google; it was also intended to make it simpler for advertisers to focus on the intentions of the people conducting the searches and thus ensure more relevant results.
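To make the distinction concrete, the classic match types can be sketched roughly like this. This is a simplified illustration: Google’s actual close-variant matching, which also absorbs misspellings, plurals and regional variants as described above, is far more elaborate.

```python
# Sketch of the classic keyword match types (simplified).

def exact_match(keyword: str, query: str) -> bool:
    # The query must be the keyword, nothing more (case-insensitive).
    return query.lower() == keyword.lower()

def phrase_match(keyword: str, query: str) -> bool:
    # The keyword must appear as a contiguous phrase inside the query.
    # (A substring check is a simplification; word-boundary handling omitted.)
    return keyword.lower() in query.lower()

print(exact_match("hiking shoes", "Hiking Shoes"))         # exact match
print(phrase_match("hiking shoes", "cheap hiking shoes"))  # phrase match
```

The loosening described above means both functions would, in practice, also accept variants like “hikng shoes” – which is precisely what makes rigid keyword lists less decisive.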

Ultimately, all users are different and therefore have a different way of expressing themselves and a different idea of which adverts are personally relevant to them. There has also been a change in user behaviour with the increasing use of voice technology. According to Google, more than a fifth of enquiries are already made on Android smartphones using voice input. Consequently, for the SEA business, dynamic search ads (DSAs) are becoming more and more appealing.

The key benefit of dynamic search ads: greater reach, less manual effort

With dynamic search ads, keywords are no longer entered – apart from those which should not trigger ads at all (the so-called negatives). Instead, the displayed text is created (semi-)automatically. In connection with dynamic search ads, keywords are only used in the context of negative exclusions, i.e. to ensure campaign granularity. With DSAs, rather than using keywords, Google compares search queries with the contents of a website or data feed. If there is information relevant to the search query, Google returns an advertisement after approval of the campaign – automatically and with no keywords entered. In the process, both the wording of the ad title and the URL of the target page are generated individually based on the way the search query is worded.

An example: a user is about to travel to Switzerland and wants to buy a new pair of hiking shoes. In the Google search window, he enters the words “men’s hiking shoes”, whereupon, based on the contents of the web shop, an ad is generated with the title “men’s hiking shoes”. Google takes the ad text from the product data feed or from the text on the website. Because data feeds are often based on identical manufacturer information across all web shops, no distinction can be made in the headline of the text ad without content optimisation. Companies therefore need to specify which type of criterion (price, selection, delivery, promotion etc.) should be highlighted. User- and query-related optimisation of content in the web shop and data feed is the key to higher conversion rates among search-ad competitors.
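The underlying mechanics can be sketched roughly as follows. This is a heavily simplified illustration: the feed fields `title` and `landing_page` are hypothetical, and Google’s real matching and headline generation are far more elaborate.

```python
# Simplified sketch of dynamic search ad generation: match the query
# against a product feed and build a headline from the best entry.
# The feed fields ("title", "landing_page") are hypothetical examples.

feed = [
    {"title": "Men's hiking shoes", "landing_page": "/shoes/hiking-men"},
    {"title": "Women's running shoes", "landing_page": "/shoes/running-women"},
]

def generate_ad(query: str):
    q_terms = set(query.lower().split())

    def overlap(item: dict) -> int:
        return len(q_terms & set(item["title"].lower().split()))

    best = max(feed, key=overlap)
    if overlap(best) == 0:
        return None  # no relevant feed entry: no ad is served
    return {"headline": best["title"], "url": best["landing_page"]}

print(generate_ad("men's hiking shoes"))
```

This also shows why feed and on-site content optimisation matter so much with DSAs: the headline is only ever as distinctive as the feed entry it is generated from.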

Dynamic search ads are already obtaining very good results. The reason for this is that the increased targeting options on data feeds and new opportunities in text design mean that ads can be placed with great precision, attracting highly relevant traffic without paying more for it. The more precise and relevant the ad, the more likely it is that the potential purchaser will click on it. It also ensures that considerably more users are reached than with campaigns based purely on keywords.
However, this partial automation in Google’s system certainly does not grant Google a free licence, or allow the search-engine giant alone to decide what hits should be delivered. In fact, the opposite is true: agencies need to master the complex craft of dynamic search ads. This means that they need to specify negative exclusions which ensure that no advertisement is returned for unsuitable queries. In the earlier hiking-shoes example, this could be a combination of the terms “fall”, “mountaineering accident”, or something similar. They also need to put together campaigns with clearly structured themes and optimise data feeds or the website structure and URLs.

It is also necessary to continuously monitor the quality of campaigns, constantly adjust what is returned and continuously review the rules and regulations that govern this. The tasks for agencies are therefore changing.

The e-commerce sector in particular stands to benefit from dynamic search ads. Online shops often have a large and ever-changing product range and extensive content, such as product data feeds, which can be accessed by generating dynamic ads. Despite the freedom offered by DSAs, campaigns can be closely controlled, as there is an option to advertise on the basis of the website as a whole, as well as specific categories. There is also a logistical advantage with dynamic search ads: because an online shop’s range often changes, this used to require a great deal of work with classic search-engine advertising. This work has been reduced to a minimum by the semi-automatic creation of DSAs.

Therefore, the work of search-engine advertisers will also change in future: we shall no longer be putting most of our effort into keyword sets and variant texts. Instead, for our SEA campaigns, we shall use dynamic search ads that extract their information automatically from data feeds and websites. In future, agencies will have to deal much more with pure campaign optimisation to ensure quality and thus long-term success. Furthermore, website optimisation and data-feed optimisation will increasingly become a central focus.

Search engines do not take a vacation. So, just in time for the summer holidays, we present the most important SEO news for July – with new competition for Amazon Alexa, positive news for Bing and, of course, exciting Google updates.

1. Google Mobile Search enables direct contact with potential customers

After initial tests in November of last year, Google has now officially launched in the USA the function enabling users to contact companies directly from mobile search results. After a local search (e.g. for a restaurant, hairdresser, etc.), you can message the store of your choice directly via a messaging app. For providers, the new function is quickly activated via the Google My Business account. The communication is handled either through the Google messaging app “Allo” on Android devices or directly in the native messaging app on iOS.

2. Videos on Google and YouTube: New study explains ranking differences

Should I focus on Google or YouTube when optimising my video content? A new study from the USA helps with this decision. Using a comprehensive ranking analysis, it shows that the two search engines’ algorithms differ significantly due to different user intentions and monetisation models. As a result, the content of the video is decisive: informative content such as tutorials, seminars or reviews gains high visibility in traditional Google search, while entertainment content and serial formats achieve high rankings on YouTube. Interesting reading for any SEO.

3. Bing expands its market share in desktop search

For successful search engine optimisation, it is important not to depend solely on the market leader Google. To reach your target group, you need to keep a close eye on the broad spectrum of general and specialised search systems. This, of course, includes Microsoft’s search engine Bing, which, by its own account, serves older and more affluent target groups than its competitor Google. According to the latest figures from the market researchers at Comscore, Bing expanded its market share in desktop searches in the first two quarters of 2017 to nine per cent in Europe, twelve per cent in Germany and even 33 per cent in the United States. Growth was driven by the tighter integration of the search engine into the current Windows 10 operating system and its voice assistant “Cortana”. Depending on your audience and target market, it is therefore worth keeping an eye on the company from Redmond.

4. Bing expands results display for brand searches

And once again Bing: in the past, even Google has not been afraid to copy new features from Microsoft’s search engine. In image search, for example, Bing distinguished itself with new display formats on the search results pages. For brand-name searches in the United States, Bing now offers, in addition to the familiar sitelinks, direct entry points to “popular content” in the form of screenshots and images. Whether this feature provides added value for the user is questionable; it certainly attracts attention and thus potentially a higher click rate.

5. Competition for Google and Amazon: Samsung and Facebook are planning smart speakers

Until now, the smart speaker market has been dominated by Amazon and Google, with the retail giant currently leading the way with Echo and Alexa. Now Samsung and Facebook are also preparing to enter this market. Samsung is currently focusing on developing its voice assistant Bixby, positioning itself once more as a competitor to Google, while Facebook will apparently bring a corresponding product to market in the first quarter of 2018. These developments underline the trend that SEO will become significantly more complex given the rapid development of voice search and the increasingly diverse range of devices.

Some exciting changes were introduced by the Californian search engine giant at the Google Performance Summit in San Francisco last Tuesday. In addition to new features in local search ads and important extensions to the Google Display Network (GDN), Google now also provides expanded advertising and display options in classic search ads, called Expanded Text Ads (ETA).

Plan.Net Performance is one of the first agencies in Germany to test the new Google formats for a client, gaining some enlightening insights.

Finally, there is more space with Google Expanded Text Ads

25/35/35: until now, the number of characters for the title, text and URL was limited when creating text ads on Google Search. This limitation could cause advertisers real difficulties – for example, if you wanted to promote a “pet owner liability insurance”.

Since last week, Google has offered selected advertisers more freedom: two headlines of 30 characters each and an 80-character description line provide enough space for USPs and calls to action. The domain of the display URL is generated automatically from the stored destination URL, and there are two additional fields for customising the URL path.
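The new limits are easy to check against before uploading ad copy. The following minimal sketch validates the three fields mentioned above (two 30-character headlines, one 80-character description); the field names and the sample ad text are illustrative, not part of any Google API.

```python
# Character limits for Expanded Text Ads as described above:
# two 30-character headlines and one 80-character description line.
# Field names are our own labels, not official API identifiers.
ETA_LIMITS = {"headline_1": 30, "headline_2": 30, "description": 80}

def validate_eta(ad: dict) -> list:
    """Return the names of all fields that exceed their character limit."""
    return [field for field, limit in ETA_LIMITS.items()
            if len(ad.get(field, "")) > limit]

# Hypothetical ad for the "pet owner liability insurance" example above.
ad = {
    "headline_1": "Pet Owner Liability Insurance",   # 29 characters - fits
    "headline_2": "Compare Quotes in Minutes",
    "description": "Protect yourself against claims. Get a free, no-obligation quote today.",
}
print(validate_eta(ad))  # → [] (every field is within its limit)
```

A check like this is most useful when generating ad variants in bulk, where a single over-long headline would otherwise cause the whole upload to be rejected.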

The claim that the expanded character limits make ad creation easier is only partly true. In the old format, advertisers were forced to restrict their copy to the most important information; now there is a risk of padding the text with filler that distracts from the actual core message.

Google AdWords: Expanded Text Ads

Is this a logical compensation for the removal, a few weeks ago, of all ads in the right-hand column of the search results? Admittedly, for those who had been used to right-hand ads and the left-aligned layout for years, Google’s search results page looked almost a little empty in February.

The expanded text ads have been available since Monday, 23 May 2016. The first results are promising and confirm the expected uplift in the core metrics: higher click-through rates (CTR) at slightly lower CPCs. Google itself predicts a CTR uplift of up to 20 per cent. Since the new format is only available to a limited extent during the beta phase, with only a few advertisers unlocked, the actual effect will probably only become clear in a few months.

Google’s strategy of further strengthening the premium positions has not changed in the meantime. Like other enhancements, the expanded text ads also reinforce premium positions 1 to 3. Competition for these positions will not decrease.

GDN: Cross-exchange for Display Remarketing Campaigns and Responsive Ads

Through the Google Display Network (GDN), advertisers can publish classic display ads on a variety of participating websites and blogs. Under the heading “cross-exchange display remarketing campaigns”, Google now allows its customers to extend their remarketing campaigns to additional inventory sources. Until now, Google relied solely on the DoubleClick Ad Exchange, which is also part of the Google group.

A major difference between the GDN and the major ad exchanges is the pricing model. While in the GDN costs are usually only incurred when an advertisement is actually clicked (CPC – cost per click), the ad exchanges are generally paid for each ad impression (CPM – cost per mille). You might think that by expanding the GDN to additional ad exchanges, Google is taking on a certain risk. In theory this is true, especially since Google most probably buys the inventory on a CPM basis and sells it to its customers on a CPC basis. But it would not be Google if they did not know exactly what they were doing.

The newly acquired reach is limited exclusively to remarketing campaigns, whose CTRs are known to be many times higher than those of campaigns with other targeting options – CTRs of 0.20 per cent and above for standard formats are not uncommon. With this higher expected CTR, Google is in a position to pay correspondingly higher CPMs while still securing its own margin. That this purchase model can be very successful has long been proven by other vendors such as Criteo.
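The arithmetic behind this buy-on-CPM, sell-on-CPC model can be sketched in a few lines. The 0.20 per cent remarketing CTR comes from the text above; the CPC and the buy-side CPM are hypothetical figures chosen purely for illustration.

```python
# Illustrative arithmetic for the CPM-buy / CPC-sell model described above.
# Only the 0.20% CTR is taken from the text; the CPC and buy-side CPM
# are assumed example values.

def effective_cpm(ctr: float, cpc: float) -> float:
    """Revenue per 1,000 impressions when clicks are sold at `cpc`."""
    return ctr * cpc * 1000

ctr = 0.002      # 0.20% click-through rate on remarketing ads (from the text)
cpc = 0.50       # assumed price charged to the advertiser per click (EUR)
buy_cpm = 0.60   # assumed price paid to the exchange per 1,000 impressions (EUR)

revenue_cpm = effective_cpm(ctr, cpc)  # ~1.00 EUR earned per 1,000 impressions
margin = revenue_cpm - buy_cpm         # ~0.40 EUR margin per 1,000 impressions
print(revenue_cpm, margin)
```

Under these assumptions, every 1,000 impressions bought for 0.60 EUR yields roughly 1.00 EUR in click revenue, which is why the high remarketing CTRs make the model viable: the higher the CTR, the higher the CPM the intermediary can afford to pay while keeping a margin.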

The extension of remarketing campaigns in the GDN to additional ad exchanges is therefore not necessarily cannibalisation, but rather a useful supplement for Google.

Another announcement concerns “responsive ads for display”: advertisements that adapt individually to the content in which they are placed. This makes it possible to fill advertising spaces in the GDN that do not follow the usual format standards – precisely the special formats for which DoubleClick was not a very flexible partner. Responsive display ads should have a positive impact, especially on mobile devices, and facilitate native advertising integrations. Google is positioning itself step by step in a “mobile first” world and will significantly expand its reach through these adjustments.

What the new feature actually delivers will only become clear in detail after testing. As Google’s “playground” grows, so does the overlap with other areas of marketing. It is therefore all the more important to evaluate all these activities under an overarching strategy and to coordinate the most important ones.

Other advertising opportunities in local search

Finally, new features for Local Search Ads (LSA) were announced in San Francisco. In future, advertisers will be able to highlight their ads on mobile devices and in the Google Maps service. “Promoted pins” put a company’s logo prominently on display during navigation via Google Maps. If potential customers search for services or products on the go and click on such a pin, they will in future see current offers and promotions in addition to the usual ad copy. With this innovation, Google is responding to the unbroken trend towards mobile use of its services: by its own account, one third of all mobile searches relate directly to local services such as cafés, restaurants or shops, and mobile queries with a local connection are growing around 50 per cent faster than mobile searches worldwide as a whole.

Google is changing its appearance as an advertising platform in the context of growing competition and rapidly changing user behaviour. Facebook, in particular, has been able to benefit from the increasing mobilisation of internet usage. For advertisers and agencies, this means observing developments and innovations closely – and having the courage to experiment and to question traditional approaches.