Out with the cookie-cutter approach and into the customer’s head

You have created award-worthy advertising and invested heavily in media – and then your customer gets stuck on an incompetent hotline for over half an hour. You have sent out a perfectly personalised e-mail newsletter – but unfortunately your customer is redirected to a general category page of your online shop when they click on it. You start a limited sales promotion, but even weeks after it’s ended the retargeting banners follow your users everywhere. Marketing is a bit like dating: sometimes it’s the small things that can ruin a good first impression.

When customers gain experience with brands or products today, they do so in many places: in store, online, via social media, on the phone and on the street. In the best case, this customer experience adds up to a coherent overall picture. But in reality this is often not the case. Why? Because company structures often make it difficult to place the customer experience at the centre of everything the company does. And this despite the fact that more and more companies are aware of just how important this aspect is.

A consistent customer experience needs new structures

In times of increasing price transparency and decreasing brand loyalty, a coherent customer experience is an important differentiating feature. If you can’t find AND retain your customers, you’re in trouble. For brands, this means concentrating on giving the customer reasons to become and remain a customer. And because the platform economy of the digital world is making (price) comparisons easier and lowering the barriers to switching, it often no longer comes down to ONE reason – a great product, an unbeatable price, a good brand image people like to show off and so on. The key to success and sustainability in the digital age is a coherent and, above all, relevant customer experience.

A further challenge is that digitalisation affects many, if not almost all, areas of a company, from product development to management, marketing and services. If companies really want to put the customer at the centre of their activities, they have to tackle this task across departments. This means breaking down the barriers between areas and/or promoting a different form of cooperation within the company. Admittedly, this is a complex and far from easy task. Let’s take marketing and communication as an example: traditional advertising, digital marketing, CRM or dialogue marketing, PR/corporate communication and social media often exist side by side in historically separate silos.

Utility and usability are what make the difference

Relevance is a decisive factor in determining customer experience, and relevance is determined by the customer’s subjective experience. Does the customer like the advertising? Was the person on the hotline friendly? Did the customer find what they were looking for on the website quickly? Every customer makes their own judgement. Summarised, the evaluation criteria fall into two general categories: utility and usability.

Utility describes how valuable the experience and the received content were for the user. How well does my experience correspond to my particular requirements? Does the content answer my questions? Does it solve my problems? Does it meet my expectations or even surpass them?

Usability is an overarching term for the user-friendliness of customer experiences. It’s not about the content, but about how easy products or services are to use, control and operate. And of course the experiences at the different touchpoints must also add up to a coherent overall picture and be well connected with one another.

Experience from many projects shows that utility and usability only form a coherent picture if companies enable their different experts inside and outside the company to work together on a relevant customer experience.

Lufthansa personalisation example: 500 million newsletters

In a digitalised world of brands, people expect meaningful, personalised content. Every year a company like Lufthansa, for example, sends out 500 million newsletters to different target groups, in different locations, featuring a wide variety of services. The keyword “personalisation” encompasses an extremely complex and elaborate communication architecture designed to ensure a coherent digital user journey, starting from the user’s inspiration to fly long before take-off, all the way to when they land back at home. Plan.Net built its own newsletter cockpit for the airline for the sole purpose of personalising its newsletters: a shared platform for Lufthansa Marketing and its service providers, with an intuitive interface, a modular system for content and a real-time preview. Dialogue communication via e-mail is also synchronised across the board with banners, apps and social media platforms. This is just one example of a project that could be realised with cross-departmental work coordinated between the brand and the service provider.

Audible, the subsidiary wholly-owned by Amazon, takes a different approach. The market leader in the digital distribution of audiobooks follows a 360-degree approach that combines communication, media, research and tracking. A wide variety of content is prepared and controlled via media placements in order to address the users in the right way – depending on their interests and needs, as well as what stage of the user journey they are at. The cost-per-lead can be significantly reduced by using content marketing tailored to the user experience like this.

There are many ways to ensure a coherent user experience, and each one is often unique to the company and products. I would therefore advise focusing first and foremost on relevant customer experience, and therefore on your existing customers themselves, when redesigning your marketing strategy. To do this you should ask yourself five questions:

  1. Who are my customers?
    This may sound banal, but in many companies the available data and information are neither evaluated as comprehensively as they could be, nor used across departments to the extent that would be possible.
  2. What moves my customers?
    It is not only social media that gives you the opportunity to learn what people think about you and what their needs are. Take advantage of these opportunities and always think from the user’s perspective when creating your products and services.
  3. Where do I reach my customers?
    Which media and non-media points of contact do my customers use in which phase of their relationship and what are their intentions?
  4. What added value can help me to be more customer-centric?
    Product enhancements, services – there are many ways to expand a service in a customer-centric way. Use solutions from partners as needed – you don’t have to reinvent the wheel every time.
  5. How can I personalise my offers?
    Communication, websites, services and products – almost everything can be personalised nowadays. Use this opportunity to create the highest possible relevance.

If you have answered these questions honestly and comprehensively, you will have created a very good basis for the best possible success today, and for the sustainability of your marketing tomorrow.

Summer is finally here and the nights are long, which gives us plenty of time to think about the fundamental questions of life. That’s why the July issue of SEO News examines not just the forthcoming Google updates, but also the cognition game show and the future pecking order on our planet.

1) Achieve good rankings quickly and securely

Once again, Google is focusing on the convenience and security of Internet users. The company (which in its own words aims to do no evil) is launching not one, but two updates in July – whose effects will benefit Internet users and website operators alike. Both of these changes were announced a long time ago and have already been partially implemented.

The first change will see the loading speed of mobile websites become an official ranking factor. Loading speed is already listed as a quality criterion in Google’s top 10 basic rules for website quality; however, it has taken a very long time for it to become a genuine ranking factor. The change was originally motivated by studies showing that slow-loading websites suffered direct impacts on their click-through and conversion rates, and the speed argument was also repeated like a mantra by Google representatives at various search conferences during the 2018 season. The subsequent introduction of the Mobile First Index (see our report here) means that the rule has now been made official for mobile sites too. Google recommends that website operators analyse their domains using Google’s own “PageSpeed Insights” and “Lighthouse” tools and make the necessary changes for mobile websites.

Alongside its speed update, Google is also getting serious in July with its announcement that websites which are not converted to the encrypted HTTPS protocol before the deadline will be marked as “not secure” in Chrome. This change also marks the end point of a campaign launched back in 2016, when Google began its awareness-raising work with a small ranking boost for secure websites. Google has described that measure as a success, stating that around 68 per cent of all Chrome traffic on Android and Windows now occurs over HTTPS – and there is plenty of scope for that percentage to grow.
The fact that Google is leveraging its market power to implement technical standards with the aim of improving the user experience is a step in the right direction. Many companies were only prepared to invest in faster technology or security certificates when threatened with reductions in traffic or sales. In order to prepare for future developments, it is advisable to keep an eye on new technologies such as AMP (Accelerated Mobile Pages), mobile checkout processes, and pre-rendering frameworks that allow content to be pre-loaded. These innovations can help you keep pace, especially when it comes to improving how users perceive loading speeds on all platforms.

2) Life is one big game show

This bit will be tricky for those of you who didn’t pay attention in maths. Remember that moment back at school, somewhere between integral calculus and stochastic processes, when you belatedly realised that you’d completely lost the plot? Well, in the age of algorithms that will come back to haunt you – especially if you work in online marketing. In everyday terms, an algorithm is nothing more than a carefully ordered chain of decisions designed to solve a problem in a structured way. The crucial innovation in recent years is the advent of artificial intelligence and machine learning. Nowadays, the individual links in the algorithmic chain are no longer assembled by people, but by programs. When you ask a search engine a question, the information is taken in, its core information object (the entity) and intention are identified by means of semantic analysis, and the most empirically appropriate result (the ranking) is returned in the correct context (e.g. local and mobile). However, a group of seven Google engineers presented a research project at the ICLR Conference in Vancouver that turns the question/answer principle on its head. For their project, the researchers used tasks taken from the popular US game show “Jeopardy”. On this show (first aired in 1964), contestants are required to provide the right questions in response to complex answers. In their study, the Google engineers exploited the fact that Jeopardy tasks involve information deficits and uncertainties that can only be resolved by formulating the right question. In other words, the question needs to be adapted until the information provided in the answer makes sense in its specific combination and context. The human brain performs this task in a matter of seconds, and is able to draw upon a comprehensive range of intellectual and social resources as it does so. 
However, if you put a Jeopardy question (such as “Like the Bible, this Islamic scripture was banned in the Soviet Union between 1926 and 1956”) to a search engine, you will not receive an appropriate answer. Google returns a Wikipedia article about the Soviet Union – it interprets this search term as an entity, or core information object, and thus falls short. Microsoft’s search engine Bing comes a little closer to the answer that is obvious from a human perspective (“What is the Koran?”), but is likewise unable to deliver a satisfactory result. This little trick involving Jeopardy questions exposes the biggest problem facing search engines, even though its solution is marketed as one of the main markers of quality for modern search systems: accurately recognising the intention behind each search query. The idea is that what SEO professionals in companies and agencies currently work hard to develop should be reliably automated by the search engines themselves. To achieve this, the Google researchers developed a machine-learning system that reformulates the Jeopardy question into many different versions before passing these on to the core algorithm itself. In a second step, the answers obtained are aggregated and reconciled with the initial question. The results are only presented to the user once these two intermediate steps are complete. The self-learning algorithm then receives feedback on whether its answer was right or wrong, and the AI system was trained using this method with the help of a large data set. As a result of this training, the system learned how to independently GENERATE complex questions in response to familiar answers. This milestone goes far beyond simply UNDERSTANDING search queries, which are growing increasingly complex under the influence of voice and visual search.
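The two intermediate steps can be sketched in miniature. The following toy Python sketch is purely illustrative: the paraphrase rules and the stub “search engine” are invented stand-ins for the learned reformulation model and the real search backend described above, not Google’s actual system.

```python
# Toy illustration of the "reformulate, then aggregate" idea.
# The paraphrase rules and the tiny index are invented stand-ins.

def reformulate(question):
    """Step 1: produce several naive variants of the input question."""
    q = question.rstrip("?")
    return [q + "?", "what is " + q.lower() + "?", q.upper() + "?"]

def toy_search(query):
    """Stub search engine: crude keyword matching against a tiny index."""
    index = {("islamic", "scripture", "banned", "soviet"): "the Koran"}
    for keywords, answer in index.items():
        if all(word in query.lower() for word in keywords):
            return answer
    return None  # no confident result

def answer_by_aggregation(question):
    """Step 2: send every variant to the engine, then majority-vote
    over the candidate answers that come back."""
    candidates = [toy_search(v) for v in reformulate(question)]
    candidates = [c for c in candidates if c is not None]
    if not candidates:
        return None
    return max(set(candidates), key=candidates.count)
```

The real system replaces both stubs with trained models and uses the right/wrong feedback mentioned above to improve its reformulations over time.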
Although this study was carried out by Google, we can assume that Microsoft, Yandex and Baidu are also working on equivalent technologies designed to further automate the recognition of search terms and to automatically generate complex, personalised content in the not-too-distant future. At present, however, it is impossible to gauge what effects this might have on the diversity and transparency of the Internet.

3) Google Assistant sets the tone

While we’re on the subject of automatic content generation, we also have an update on Google’s uncanny presentation of two phone calls between the Google Assistant and the working population. Back in May, the search engine giant from Mountain View presented a video at its “IO” developer conference in which an AI extension to the Google Assistant named “Duplex” booked an appointment at a hairdresser’s and a table in a restaurant entirely on its own, all while perfectly imitating human speech. The human participants in those conversations were apparently unable to recognise that they were interacting with a machine while they went about their work. Close collaboration with robots and AI systems has long been familiar to industrial workers in the Western world, but now this development is also moving into the service economy, and therefore into our day-to-day lives. At first glance, the Google scenario was astonishing and convincing; however, the unnerving initial impression was swiftly followed by a number of pressing questions. In particular, the fact that Duplex failed to identify itself as a machine to its human conversation partners was the subject of considerable debate. Google has since responded and published a new video in which the Google Assistant identifies itself at the start of the conversation and states that the call will be recorded for quality control purposes – similar to a recorded message in a call centre. Taking a more detached view, however, one wonders whether this disclosure on the part of the artificial intelligence is actually completely superfluous. The restaurant employee in the video follows the Google Assistant’s instructions obediently, as if he were talking to a human being – there is no difference whatsoever. In search marketing, we attempt to further our own interests by reflecting the intentions of target groups and consumers in the content produced by search engines (the results pages).
In voice search, we issue commands to a machine – and a number of years will pass before we learn how that will change us. And in Google’s future scenario of an invisible, omnipresent and convenient system that allows users to organise themselves and solve problems, the human simultaneously becomes both the subject and the object of the technology. Our data was used to create, feed and train the system, and so we may briefly feel ourselves to be its masters; however, given the current state of affairs, we can and should seriously question whether we will recognise the point of no return once the balance finally tips.

Times are changing, and so are technologies, markets and customers. By now, everybody has realised that in the big lottery called “digitalisation”, some have a lot to gain, some have a lot to lose and we all have a lot to learn. In this context, a swathe of new management tools and techniques touted as “agile” and “lean” have taken middle management by storm. One of these shiny new Silicon Valley tools is the MVP, or Minimum Viable Product.

You can barely get into a discussion about developing a product, working on a project or solving another problem of substantial complexity and vagueness these days without somebody suggesting you “just do an MVP”. There seems to be no problem an MVP approach cannot solve. Don’t know what the scope of our project is? Let’s do an MVP. Don’t know what our customer wants? Let’s do an MVP. Don’t really have any time or budget? Let’s do an MVP.

Maximum Vexing Problem

And this could be fine. While an MVP approach might not solve all the problems just mentioned on its own, in almost all cases some valuable insights can be derived from applying it. Who would disagree that, in a volatile environment with vaguely defined problems and even vaguer solutions, an approach based on many small steps might be better suited than one big (mis)step? In this context, the concept of developing a product (or solving any other problem) by quickly implementing possible ideas, testing hypotheses, gathering feedback and gradually improving on the solution found is not that much of a stretch. There is only one caveat: for the magic to work, all sides involved must have a common understanding of what the concept actually entails. Which, unfortunately, is often not the case.

Minimum and Product

As with many simple and memorable concepts, the devil is in the detail, so let’s dig in a bit: MVP is defined by three distinct terms: Minimum, Viable and Product. The first and the third we (and our stakeholders) usually grasp quite quickly and easily:

Minimum states that we are aiming for something small, efficient and effective. We want to reach a maximum of X with a minimal input of Y (often time, money, people). That’s what gets most people (and upper management) hooked after all: the promise of getting big X while spending little Y.

Product is also rather easy to grasp; we are not creating something for our personal enjoyment, we aim to create something a customer might want to use or buy. Therefore, this person, group of stakeholders or demographic and their feedback naturally sets the bar for our efforts.

“V” as in “Viable”

That leaves us with the term “viable” – the one teams struggle with the most, and the one that can make or break this approach. Let’s rephrase “viable” so the problem becomes clearer: a solution is “viable” if it is capable of delivering on our expectation of success. As these “expectations of success” might vary widely from stakeholder to stakeholder, a shared understanding of what “viable” means – or the lack thereof – is what makes or breaks a team taking the MVP route.

Depending on their professional training, current role, subjective perception and many other variables, each stakeholder might have a drastically different understanding of what “viable” means. So it should be clear that, in order to succeed with an MVP approach, it is crucial to make each stakeholder’s expectations as explicit as possible and to continuously work at aligning them.

Making “Viable” more “Viable”

One way to do away with the gravest misunderstandings of what should be achieved by using an MVP approach has been suggested by Lean/MVP coach Henrik Kniberg. He proposes three substitutions for the term “viable” depending on the main objective a team tries to achieve through the use of an MVP:

Minimum Testable Product – Anything that you can learn something from, starting with the things that provide the most value with regard to functionality, general viability, assessment of value propositions, risks or opportunities.

Minimum Useable Product – Something a user can get (limited) use from and give feedback on in terms of how to improve the product. Again, the focus is on the things that provide the most value with regard to “use” and/or “feedback”.

Minimum Loveable Product – Something a user might actually like enough to buy, use continuously or recommend to others. Ideally this builds on insights gained from previously established Minimum Testable and Minimum Useable Products.

These substitutions make the many facets of “viability” explicit and thus sharpen the concept of the MVP based on context. With them, you can provide the right terminology to onboard teams and stakeholders enthusiastic about the concept. They can also come in handy with teams in distress, where a clarification of “viable” provides some basic structure and orientation, allowing them to successfully complete a product or project they have already started. Last but not least, they can serve as a means for a more informed discussion with stakeholders who have already had negative experiences with the careless use of an MVP approach.

MVPractice

At Plan.Net we have been successfully making use of MVP-based approaches in a variety of situations: for example, in the development of ambitious digital products, such as a lean solution for partner integration that we created for a global logistics company, or location-based offers and services for the apps of one of Germany’s biggest loyalty programmes. In the latter case, a focused and iterative approach helped us implement a challenging technology stack of GPS, Bluetooth and NFC in a stable way, while remaining flexible enough to react to new inputs regarding the product specification. In general, MVPs are especially viable (you see what we did there) in cases where technological and creative demands have to be reconciled – situations that are often marked by conflicting priorities, high pressure and uncertainty.

Another case where we made good use of an MVP was the relaunch of the platform portfolio of a big German energy supplier, where it helped us manage an ambitious roadmap and a large scope. And when trialling new technology, such as the development of an AR prototype for a big railway company, an MVP approach helps you keep a cool head in the heat of the digital jungle.

Beyond the buzzword

So, new challenges need new ideas. We need new approaches for working together, we need new tools and we need new ways of collaborating. We need to be willing to try out things together, to achieve greatness or sometimes even fail together. What we do not need are buzzwords being detonated in meetings like stun grenades, leaving behind nothing but a headache and a ringing sound in everybody’s ears. We need to look “beyond the buzzword” and pragmatically assess how “agile” and “lean” approaches can help us solve problems instead of obscuring them.

In The inside story x 3 series, experts from the Plan.Net group regularly explain a current topic from the digital world from different perspectives. What does it mean for granny, and for an agency colleague? And what does the customer – in other words, a company – get out of it?

In recent years, hardly any other technology has painted so many scenarios of a golden future as blockchain, THE solution for decentralised, tamper-proof storage of transaction data. But what’s next?

My granny and blockchain: What is it and is it the same thing as those bitcoins?

Not really, dear granny; blockchain is actually the technology that provides the basis for non-physical currencies like bitcoin, among other things. Simply put, blockchain is an extendible list of data records that are arranged in blocks – hence the name. Imagine a train that gets more and more carriages attached to it over time. In these data carriages, every process – a payment transaction in the case of bitcoin, for example – is recorded chronologically. Since this ‘data train’ is not just stored and updated on a single computer, but on many different ones, the data is much more secure. If somebody wanted to retrospectively modify a data record – say they wanted to cover something up – it wouldn’t work, because their single falsified copy would be outvoted by the many correct copies held elsewhere.
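For the technically curious, the ‘data train’ principle can be sketched in a few lines of Python. This is a deliberately minimal illustration of hash-linking and tamper detection only – a real blockchain adds a peer-to-peer network, consensus rules and (for bitcoin) mining on top.

```python
import hashlib
import json

# Each block (carriage) stores its records plus the hash of the previous
# block, so changing any old record breaks every later link in the chain.

def block_hash(block):
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, records):
    """Append a new carriage that points at the previous one by hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"records": records, "prev_hash": prev})
    return chain

def is_valid(chain):
    """Re-check every link: the stored prev_hash must match the
    recomputed hash of the preceding block."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True
```

Build a chain of a few blocks, then retrospectively edit an early record: `is_valid` immediately reports the break, because the recomputed hash of the tampered block no longer matches the pointer stored in its successor.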

Bitcoin, a digital payment system that can be used worldwide, is based on the blockchain principle. Extremely secure and simple to use, it manages without a bank because the payment transactions take place in a network in which everyone can participate. But although it sounds good in theory, in practice there are a few disadvantages. In order to access these digital coins, you need an online bureau de change to hand your money over to. Unfortunately, it is very difficult to know whether such an exchange is sufficiently protected against attacks from hackers. In addition, there are very few things you can buy using bitcoins, as bitcoin is actually extremely unsuited to being used as a means of payment: processing payments takes a long time and involves high fees, and the value of the currency is too volatile. And that’s why it’s quite a bad idea to invest your savings in bitcoin – unless gambling is your passion.

However, blockchain also has potential in areas other than digital currencies: for example, end-to-end, transparent supply chain documentation in the food industry. In several countries, blockchain technology is also frequently used in public administration, for example, for notarial services or the administration of medical data. But it’s still got a long way to go.

My colleague and blockchain: Will it help us to create transparency in online advertising?

In the digital media business, there are several potential areas of application for blockchain. One example is the transparent and secure handling of campaigns that use programmatic advertising. In this context, blockchain would be able to solve problems around ad fraud, brand safety and billing by providing end-to-end, tamper-free documentation in the chain that shows which impression was delivered to which bid as well as where and when. In terms of reporting performance value, too, blockchain would be able to help to ensure consistent data records between publishers, agencies and customers.

To this end, a number of start-ups have already designed solutions, but large-scale use is still blocked by huge development and coordination costs among all parties involved. However, it does not always have to be blockchain – the Ads.txt initiative from IAB Tech Lab is a simple tool that has already made great strides in terms of helping to avoid inventory fraud.

Blockchain for the customer: So which of the many providers are trustworthy?

Programmatic media is interesting for advertising companies too, of course. Moreover, blockchain can be deployed as a commercial platform between producers and customers in the area of content delivery, for example, in the field of image rights.

In recent months, numerous start-ups and established technology companies have launched products and apps based on blockchain. The biggest disadvantage for many providers: as a rule, each platform uses its own currency as a means of payment in the form of coins or tokens, which are used to pay for transaction processing as well as buying and selling media stock or digital licences. If several services are used, companies basically have to have a wallet full of different currencies on hand – and it’s not just holidays that this is impractical for.

The first sale of these currencies (known as an Initial Coin Offering, or ICO for short) has given blockchain start-ups a fresh opportunity to raise capital. However, belief in the platform’s success is still the underlying factor: if the business model works, the value of the currency increases with it. This is useful for investors and platform operators – but not for anyone who would like to use the currency to buy media or (digital) goods on the platform. And by no means are all providers successful. Research carried out by the cryptocurrency news site bitcoin.com shows that of the more than 900 ICOs in 2017, almost 60 per cent of the companies failed or as good as failed. For now, the best course of action is: wait, drink a cup of tea and see how things develop.

SEO News

If you think this June issue of SEO News will only be about the impact of Google’s Mobile Index, think again. We prefer to wait a bit on that. As the summer begins, we are therefore focusing on the return of a powerful tool, the prerequisites for good SEO work, and an industry in the throes of fake news.

1) The return of Google image search

The image bubble has burst. After a long legal dispute with the image agency Getty Images, Google decided to make some changes to its popular image search function. How positively these changes have affected website operators can be seen from a survey by the US search expert Anthony Mueller. But let’s start from the beginning. In January 2013, Google changed the way its image search function worked so that every user could directly view and download the images found. A key aspect of this was that the files were buffered on the search engine’s servers, where users could access them with the ‘View Image’ button. As a consequence, clicks on the sites of content providers and rights holders nearly vanished, and organic traffic from image searches plummeted by more than 70 per cent in some cases. This development was especially perilous for websites that rely on visual impact for inspiration, such as fashion or furniture merchants, and had put a lot of effort into optimising their image content. Particularly for e-commerce operators, this collapse in traffic also meant a collapse in turnover.

Three years later, the renowned Getty Images agency submitted a competition complaint to the European Commission, apparently hoping that ‘Old Europe’ would again set things right. Getty’s efforts were rewarded: the ‘View Image’ button disappeared from Google image search in early 2018, and interested users now have to visit the original sites to access the original files. That prompted Mueller, the well-connected search expert, to ask some 60 large enterprises worldwide whether, nearly six months after the change, they had seen any impact on their website traffic. The result: on average, visits from Google image search have risen by 37 per cent. Although the figures for impressions and ranking positions in image search have remained relatively stable, click-throughs have risen dramatically for all of the surveyed enterprises.
The survey also indicates that conversions from image searches have grown by about 10 per cent. Of course, savvy users can still switch to other search engines, such as Microsoft’s Bing or DuckDuckGo, neither of which ever got rid of direct access to image files. However, given Google’s market power, this is exactly the right time to give new priority to the optimisation of image content and exploit the new growth potential, according to the author. At present, text search is still the dominant method for acquiring information, but there are signs of a paradigm shift towards visual search, particularly in retail.

2) Getting better results with smart SEO goals

Thanks to the Internet, contacts and advertising impact are now more measurable than ever before. Although the digital revolution in advertising is no longer in its infancy, it has by no means reached the end of its evolution. With digital campaigns, it is easy to define suitable key figures to measure impact and effectiveness, and it is not technically difficult to obtain the corresponding campaign data. Defining goals for search engine optimisation, however, is not so easy. For example, Google stopped offering keyword-level performance data for organic searches many years ago. Marketing managers and SEO experts are therefore repeatedly confronted with the challenge of developing an SEO KPI concept that visualises optimisation results and, above all, gets the company’s budget controller onside for professional SEO work. For this reason, search guru Rand Fishkin has put together some rules for formulating the goals of SEO activities, which are interesting to advertisers and enterprises alike. According to Fishkin, the main rule is that business goals must form the basis of the SEO concept. The next step is to break down these higher-level expectations, which are usually financial, into marketing goals – for example, by defining requirements for various communication channels along the customer journey. The actual SEO goals come into view only after this point, and they can be mapped out in the final step using just six metrics. These KPIs are: ranking positions; visitors from organic searches (divided into brand and generic search queries); the company’s presence with multiple hits on the results page for a search term; search volume; link quality and quantity; and direct traffic from link referrals. Fishkin tests his concept against two different example customers. A pure online mail-order shoe seller, for instance, has a fairly simple business goal: boosting turnover by 30 per cent in the core target group.
In Fishkin’s view, the next step is to specify in the marketing plan that this growth will be generated by a high probability of conversions at the end of the customer journey. From that you can derive an SEO goal of 70 per cent growth in organic traffic. To achieve this goal, you then define and carry out concrete SEO measures. For the contrasting scenario of local SEO without any e-commerce component, Fishkin’s example is a theatre that wants to draw more visitors from the surrounding area. In this case, the marketing plan defines the regions in which the target audience should be addressed. The SEO plan then consists of setting up local landing pages, making use of theatre reviews and blogs, and other content-related and locally driven measures. The advantage of this sort of top-down approach is that it aligns individual SEO measures, which are often difficult to grasp, with the overall aims of the organisation. According to Fishkin, the rewards are higher esteem and faster implementation of the laborious SEO work.
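Fishkin’s top-down derivation can be sketched as a back-of-envelope calculation. Note that everything here except the 30 per cent turnover target and the roughly 70 per cent organic traffic goal from the shoe-seller example is an invented assumption, in particular the share of the growth assigned to organic search and the flat conversion rate:

```python
def required_traffic_growth(revenue_growth_target: float,
                            organic_share_of_revenue: float,
                            expected_conversion_uplift: float = 0.0) -> float:
    """Estimate how much organic traffic must grow if organic search is to
    deliver a given share of the overall revenue target.

    Revenue from organic search = visits * conversion rate * average order
    value. With conversion rate and order value roughly constant, revenue
    growth from the channel maps almost directly to visit growth, scaled by
    the share of the target the channel has to carry.
    """
    channel_revenue_growth = revenue_growth_target / organic_share_of_revenue
    return channel_revenue_growth / (1 + expected_conversion_uplift)

# The shoe retailer: +30% turnover overall; if organic search is assumed to
# carry about 43% of that growth, it needs roughly 70% more organic visits.
goal = required_traffic_growth(0.30, 0.43)
print(f"Required organic traffic growth: {goal:.0%}")
```

The value of the exercise is less the exact number than the audit trail: each SEO target can be traced back to a line in the business plan, which is precisely what gets a budget controller onside.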

3) Fake news threatens the existence of the SEO industry

Did you get a shock when you read this heading? That’s exactly what we wanted, in order to get your attention. Of course, you rarely see such highly charged headlines on SEO blogs, but competition in the IT sector does not spare the search industry. Every year we hear that SEO is dead, yet supply and demand for optimisation services have been growing steadily for more than 15 years. A large part of that is doubtless due to the intensive PR activities of the parties concerned. What started as the hobby of a few individuals has over time developed into specialised agencies and migrated to in-house teams within enterprises. Along the way there has been continual testing, experimentation and comparison, SEO expertise has been constantly expanded, and above all a lot has been written about it. SEO blogs therefore serve on the one hand as an inexhaustible source of information – a sort of global treasury of SEO experience that forms the basis for success. On the other hand, posts on search topics are also a form of self-advertising and customer acquisition for service providers and agencies. John Mueller, the well-known Senior Webmaster Trends Analyst at Google, has now criticised some SEO blogs, claiming that some of them use posts as clickbait. It all started with a report on an alleged bug in an SEO plugin for WordPress. In the course of the discussion about the tool, information was presented in abridged form on some SEO sites, and important statements made by John Mueller on behalf of Google were not passed on. He is now saying that posts should address all aspects of complex search topics. What matters is creating long-term value through balanced reporting; people should resist the temptation to chase quick clicks. According to Mueller, the goal should be to convey knowledge. It is clear that even the search scene cannot escape the digital attention economy.
It looks like speed has become a goal in itself, and it is assumed that online readers no longer have time to pay attention to the details. In this way our own methods endanger the industry’s collective wealth of experience. In an increasingly complex search world, it is particularly important to not lose sight of the details, and we have to take the time for a thorough treatment of each topic. For example, the threat to the existence of our democracy from the SEO activities of Russian troll farms is a topic that still needs a thorough treatment.

Programmatic Advertising

Programmatic advertising (PA) is a multi-faceted term. Many market players use it as a buzzword, a label for hype that has at times raised very high expectations, especially among advertising customers. Others frequently use PA as a synonym for automation projects that are several years overdue, especially in so-called classic media, but in which nothing is actually “programmatic”. At mediascale, we generally define programmatic advertising as data-driven media buying – and as a process that is only just beginning.

Any disappointment that some advertisers may have felt is therefore not a fundamental problem with programmatic advertising. Rather, it should be an incentive to take programmatic to the next level: on the one hand by rethinking the set-up of service providers and technology, and on the other by calibrating individual expectations realistically.

In recent years it has mainly been venture capital-financed players, keen to claim a share of the media value chain, who have fuelled the programmatic hype out of their own interests – and thereby raised expectations. And of course they have claimed “their” part of the supply chain. But those who worked more closely with the market knew that the quantity and quality of the profile data available for programmatic campaigns is limited. Only good data can increase the efficiency of campaigns significantly enough that the additional costs of the extra members in the value chain are recouped. Any disappointment was therefore foreseeable, rooted as it was in unrealistic expectations.

In many conversations with our customers, we have realistically presented both the possibilities and the limits of data-driven advertising in order to rule out exaggerated expectations of PA from the outset. In doing so, we worked from the following assumptions, which our customers – sometimes despite initial reservations – have so far been well served by:

  • Programmatic advertising is not a new channel with completely different rules from the traditional display business. Even when auctioned and backed by data, a content ad remains a content ad and will not develop the advertising impact of a large-format instream pre-roll.
  • Meaningful, validated data is the indispensable basis for programmatic advertising. Here it is important to look carefully and carry out comprehensive auditing of the available data offers. At first glance, the data market in DMPs seems expansive. But data segments that deliver what they promise (delivering a corresponding uplift to campaigns) are by no means abundant. And they have their price.
  • An impression that cannot be assigned to valuable data should not be bought programmatically. As meaningful as it is to uniformly track all advertising contacts and accumulate all campaigns and pseudonymous profile data in one system, it makes little sense to put untargeted campaign volume into systems just to have bought it “programmatically”. This results in costs and technical performance losses that are not offset by financial added value.
  • The open market, open to all, originally proclaimed by many to be programmatic’s central promise of salvation, has created more problems than it solves, as it has also opened up the market to a plethora of dubious players. The efforts of the large, open sell-side platforms to push the black sheep out are commendable, but unfortunately not always successful. That is why we only buy inventories that we can thoroughly test, both technically and commercially. Furthermore, whenever possible, we buy from partners (often in private marketplaces) that we know and have established business relationships with – including any sanction options which may be necessary in the interest of the customer in an emergency.
  • And we’re not forgetting the creative: what use is the most sophisticated profile-based planning if only a single piece of ad creative is available? That’s why data-driven creativity is, in our view, the indispensable fourth pillar of programmatic advertising – alongside technology, media space and data.

Today, programmatic has already caused major changes in the digital media business. But we are sure that this transformation process is far from finished yet. And it will encompass more and more media types in the future: TV, out-of-home, audio, cinema and eventually also print. In five years at the latest, we will be able to plan, book and control more and more channels via programmatic. In addition, people’s media use is evolving, new, relevant platforms are being created at ever-increasing speed, and data protection requirements also require fundamental and sometimes new solutions. All these challenges keep us busy and agile. Staying at the current level of development is not a solution. Particularly as we are just scratching the surface with programmatic.

This article was first published in adzine.

Amazon Echo Voice Search

What voice internet means for the future of digital marketing

The screenless internet: A bold prediction for the future

At the end of 2016, Gartner published a bold prediction: by 2020, 30% of web browsing sessions would be done without a screen. The main driver behind this push into a screenless future would be young and tech-savvy target groups fully embracing digital assistants like Siri and Google Assistant on mobile, Microsoft’s Cortana and Amazon’s Echo.

While 30% still feels slightly optimistic in mid-2018, the vision of an increasingly screenless internet is becoming more realistic every day. In the United States, the adoption rate of smart speakers three years after launch is outpacing that of smartphones. And perhaps most surprisingly, it isn’t only the young early-adopter crowd behind this success story, but parents and families. Interacting with technology seamlessly and naturally through conversation is making digital services attractive to a wider range of consumers.

The new ubiquity of voice assistants

And it isn’t only stationary smart speakers that are growing in usage and capability: every major smartphone features its own digital assistant, and consumers can interact with their TVs and cars through voice as well. The major tech players are investing massively in the field, and within the next few years every electronic device we put in our homes, carry with us or wear will be voice-capable.

So, have we finally reached peak mobile and can finally walk the earth with our chins held high again, freed from the chains of our smartphone screens? Well, not so fast.
There’s one issue many digital assistants still face, and let’s be perfectly honest here: despite being labeled “smart” they are still pretty dumb.

Computer speech recognition has reached human-level accuracy through advancements in artificial intelligence and machine learning. But just because the machine now understands us perfectly doesn’t mean it can answer in an equally meaningful way, and a lot of voice apps and services are still severely lacking. Designing better voice services and communicating with consumers through them is a big challenge, especially in marketing.

Peak mobile and “voice first” as the new mantra for marketing

Ever since the launch of the original iPhone in 2007 and the smartphone boom that followed, “mobile first” has been marketing’s mantra. Transforming every service and touchpoint from a desktop computer to a smaller screen and adapting to an entirely new usage situation on the go was a challenge. And even 10 years later, a lot of companies still struggle with certain aspects of the mobile revolution.

The rising popularity of video advertising on the web certainly helped iron out many issues in terms of classic advertising. After all, a pre-roll ad on a smartphone screen catches at least as much attention as it does in a desktop browser. We figured out how to design apps, websites and shops for mobile, reduced complexity and shifted user experiences towards a new ecosystem. But this mostly worked by taking the visual assets representing our brands and services and making them smaller and touch-capable.

Brand building in a post-screen digital world

With voice, this becomes a whole new struggle. We have to reinvent how brands speak to their consumers. Literally. And this time without the training wheels of established visual assets. At this year’s SXSW, Chris Ferrel of the Richards Group gave a great talk on this topic and one of his slides has been on my mind ever since: The visual web was about how your brand looks. The voice web is about how your brand looks at the world.

In recent decades, radio advertising has mostly been reduced to a push-to-store vehicle: loud, obnoxious, and annoying consumers just long enough that visiting a store on the way home from work became a more attractive prospect than listening to any more radio ads.

On the screenless internet, we could see a renaissance of the long-lost art of audio branding. A lot of podcast advertising is already moving in this direction, although there it is mostly carried by the personalities of the hosts. Turning brands into these kinds of personalities should have priority.

The challenges of voice search and voice commerce

We will also have to look at changing search patterns in voice. Text search tends to be short and precise, mostly one to three words. With voice, search queries become longer and follow a more natural speech pattern, so keyword advertising and SEO will have to adapt.
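The shift described above can be made concrete with a toy comparison. The sample queries below are invented for illustration only, not drawn from real search logs:

```python
# Typed search: short, keyword-style queries.
text_queries = [
    "summer jackets",
    "h&m jackets",
    "jacket sale",
]

# Voice search: longer queries following natural speech patterns.
voice_queries = [
    "where can I buy a cool summer jacket near me",
    "what time does the H&M on main street close today",
    "how much does a lightweight rain jacket cost",
]

def avg_words(queries):
    """Average query length in words."""
    return sum(len(q.split()) for q in queries) / len(queries)

print(f"typed:  {avg_words(text_queries):.1f} words per query")
print(f"spoken: {avg_words(voice_queries):.1f} words per query")
```

For keyword advertising and SEO, this means shifting budget and content towards long-tail, question-style phrases rather than two- or three-word head terms.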

Voice-enabled commerce poses a few interesting challenges as well. How do you sell a product when your customer can’t see it? This might be less of an issue than initially imagined, though. “Alexa, order me kitchen towels” is pretty straightforward, and Amazon already knows the brand I buy regularly. Utilizing existing customer data and working with the big marketplaces will be key, at least for FMCG brands.

But how do you get into the consumer’s relevant set? And what about sectors like fashion that rely heavily on visual impressions? This is where tightly integrating all marketing touchpoints comes into play: voice as a channel can’t be isolated from the rest of a brand’s communication. Obviously, voice will not replace all other marketing channels, but it might become consumers’ first point of reference thanks to its ubiquity and seamless integration into their daily lives. Finding its role in the overall brand strategy will be crucial.

Navigating the twilight zone of technological evolution

What may be the biggest challenge of this brave new world of voice marketing is the fact that our connected world isn’t as connected as we would like it to be. The landscape of voice assistants is heavily fragmented and, more importantly, the devices act in very isolated environments. While I can tell my digital assistant to turn on my kitchen lights or fire up my PlayStation using compatible smart home hubs and devices, a supposedly simple task like “Siri, show me cool summer jackets from H&M on the bedroom TV” isn’t as easily accomplished.

Right now, it is often still up to users to act as the interface between voice assistants and the other gadgets in their living spaces. The screenless internet isn’t the natural endpoint in the evolution of technology; it’s more of an unavoidable consequence of iterative steps in development. For now, we have to navigate this weird, not fully realized vision of a connected world and hope for technology to catch up and become truly interconnected. So let’s find the voices of our brands until technology also lets them show us their connected personalities.

Corinna Gleich, Junior Digital Media Planner at Plan.Net Media, has travelled to China to work for three months as part of an internal company exchange programme. She’s been at the House of Communication in Beijing for four weeks now and is starting to feel at home in China’s capital city. We asked her to write about the surprises that living there has brought so far. This report is based on her experiences during the first four weeks.

When I arrived in China, the first thing I had to come to terms with was that my phone was as good as useless – Google, Facebook and Instagram were all blocked and WhatsApp didn’t work. I could get around this with a VPN, though. Speaking English wasn’t an option either; hardly anyone here speaks it, which meant I had to work hard to learn Chinese. At first, I could only pay for things with cash (German bank cards aren’t usually accepted and only a few ATMs work with Visa, for example), so I had to open a Chinese bank account as soon as possible to be able to pay using WeChat Pay. I needed a local mobile number before I could get a bank card. Luckily, this was quick and cheap to set up, and I could then register WeChat with my new number and get a bank card (I was lucky in this respect, too, as the rules for bank cards were recently changed and foreign nationals now have to have lived in the country for at least a year before they can request one). Getting money into the account from back home was the next challenge, but WeChat had that covered: it makes it easy for another user to transfer money, which doesn’t go into your bank account but into your WeChat Wallet. Everything’s done on your mobile here – which is why there are a few more handy apps to help you go about your day, such as Alipay (WeChat’s biggest competitor, with more users in some cities), Didi (Uber), Ofo (for cycle hire), Air Matters (an air pollution analyser), Dianping (Yelp), E (for ordering food) and translation apps.

The office in Beijing is located right inside a shopping mall. The work day in China is almost exactly the same as in Germany. The only thing is you have a much later start. Turning up between 10 and 12 is normal; you just work longer in the evening to make up for it. It’s also not unheard of to just take a power nap while at work. There are lots of cushions and cuddly toys dotted about to make the place comfortable. They drink coffee here too. You can order food and drinks round the clock. Generally speaking, the food is much cheaper than in Germany – three euros gets you a decent meal. You can also have bubble tea and other drinks delivered. Delivery people race on their scooters at breakneck speed, up and down streets and even steps!

Shopping mall right next to the office with big screens on the ceilings

The way people interact with and consume media here is completely different. Everyone wants to stand out from the crowd without really worrying about data protection. Live streaming is the big thing over here; you can watch a person eat their dinner, for example, and send them virtual gifts that you have to buy – this is how live streamers make their money. There’s a parallel for everything: WeChat is like Facebook, Sina Weibo like Twitter, Youku like YouTube and Nice like Instagram. There are shops on every corner (I’ve never seen so many shopping centres in such close proximity), and great importance is attached to brands; Western brands are particularly fashionable. German brands (some that I didn’t even know existed) are seen as must-haves in electronics. Owning an iPhone is the norm here.

Work and everyday life aside, sightseeing in Beijing is amazing for a tourist! There’s so much to discover and ticket prices are only around two to three euros. Public transport is cheap, too (the subway and bus are around 50 cents a journey). You can also travel to nearby big cities (e.g. Shanghai, Hangzhou) in no time with the high-speed train. A highlight for me so far was the Summer Palace, which is just outside of Beijing on a small hill surrounded by a lake. I was actually quite disappointed by the Forbidden City; the architecture was very impressive, but there wasn’t much to see in any of the buildings and some were closed altogether. Hangzhou is definitely the place to visit for nature lovers (around five hours from Beijing by high-speed train); it’s rare to see so much green in a city, even in Germany.

Corinna at the Summer Palace

 

Another tourist attraction: The (crowded) Great Wall

My main takeaway from this experience so far is that Beijing is so much more than just a big city; you have to get used to the crowds and fast pace of life here. To me, China and Beijing are like a completely different world. If you want to discover something totally new like I did, you’d really love it over here.

SEO News

Spring has finally sprung, driving even the most hard-nosed online marketeers outdoors to enjoy the sunshine. It’s a time when important trends and developments can easily be missed – and that’s why we’ve summarised the most important SEO news for May here. This time we look at the development of the search market, Google’s assault on e-commerce, and the possible negative effects of voice assistants on our behaviour.

1) The market for search engines is maturing

It’s once again back in fashion to question Google’s dominance in the search market. In the wake of the Facebook data protection scandal, many critics of the Google system are hoping that a slightly larger portion of the online community is beginning to recognise that “free of charge” online doesn’t mean “without cost”, and that as a result, user numbers for the Mountain View search engine will stop growing. Some support for this assumption can be seen in the trend of many users preferring to start their shopping searches directly on Amazon. And this presents a good reason to ask: is Google losing market share? Where are users actually doing their online searching? A study by the American data analytics company Jumpshot sheds some light on the matter. SEO veteran Rand Fishkin interpreted its analysis of US clickstream data – i.e. referrer data at server level and anonymised click logs from web applications – from 2015 to 2018, with surprising results. Contrary to the presumed trend, the number of searches on Amazon is in fact growing; however, because the total figure for all searches increased at the same time, Amazon’s market share remained consistently around 2.3% over the entire period analysed. A detailed look at the various Google services, such as image search or Google Maps, reveals declining figures for searches within these special services, due to technological and design changes. However, these searches are simply shifting to the universal Google web search. This means that the company from Mountain View has been successful in integrating a range of services for users on mobile devices and desktops into its central search results page. Google’s market share accordingly increased by 1.5 percentage points between 2015 and 2018 to around 90%, leaving the competition seemingly miles behind. As with Amazon, the search share for YouTube, Pinterest, Facebook and Twitter is almost unchanged.
Microsoft’s Bing and Yahoo have not increased their market share despite a rise in searches. Fishkin’s conclusion is appropriately pragmatic: by 2018 the search engine industry had reached a sufficient level of maturity for a handful of strong players to establish themselves successfully on the market. However, Google’s dominance will not be at risk for some years, as all of its pursuers are benefiting equally from continued dynamic growth in search volumes, the SEO expert summarises. Fishkin adds that even if the giant from Mountain View manages to emerge apparently unscathed from any data scandals, the fact that Amazon, Bing and the others are able to keep pace with the market leader is the real key finding behind the Jumpshot figures. This assessment also fits the observation that growth in mobile searches is not coming at the expense of traditional desktop searches: mobile is expanding as additional growth, while desktop searches continue at a high level and have not lost relevance.
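The arithmetic behind the “stable share despite rising volume” finding is easy to illustrate. The figures below are invented, chosen only so that the resulting shares roughly match those reported from the Jumpshot data (Amazon near 2.3%, Google around 90% by 2018):

```python
# Hypothetical search counts (in billions) for two years. These are NOT the
# Jumpshot numbers, just illustrative values producing similar shares.
searches_2015 = {"google": 885, "amazon": 23, "bing_yahoo": 60, "other": 32}
searches_2018 = {"google": 1260, "amazon": 32, "bing_yahoo": 80, "other": 28}

def shares(counts):
    """Convert absolute search counts into market shares."""
    total = sum(counts.values())
    return {engine: n / total for engine, n in counts.items()}

for year, counts in (("2015", searches_2015), ("2018", searches_2018)):
    s = shares(counts)
    print(year, {engine: f"{v:.1%}" for engine, v in s.items()})

# Amazon's absolute searches rise (23 -> 32) yet its share stays near 2.3%,
# because total search volume grows at roughly the same pace.
```

This is the pattern Fishkin describes: every player can grow in absolute terms while relative positions barely move.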

2) Google wants to know what you bought last summer

In the growing segment of transactional shopping searches, Google’s market power is built on sand. Although the Mountain View company has successfully established Google Shopping as a brokering platform, its vision of controlling the entire value chain, including the payment platform, has remained a pipe dream. Or to put it more precisely: Google knows what people are searching for, but only Amazon knows what millions of people actually buy. This is about to change. With a feature launched in the USA called ‘Google Shopping Actions’, a buy option can be displayed directly in the Google search results for products from participating retailers. The feature is intended for retailers that want to sell their products via Google search, the Google Express local delivery service, and the Google Assistant on smartphones and smart speakers. Instead of having to take a detour via selling platforms such as Amazon, users will in future be able to buy products directly through Google. Google says that Google Shopping Actions will make buying simpler and more centralised: a central shopping basket and a payment process tied to the user’s Google account are intended to make the shopping experience easy and secure for users of the search engine. In addition to traditional search via the Google search field, it will also be possible to make purchases using voice input, enabling the company to remain competitive in the age of voice assistants. The other side of the coin, of course, is that a direct shopping function also allows a new quality of data to be collected in Mountain View and attributed to individual users.

3) Alexa and the age of unrefinement

“Mummy! Turn the living room light on now!” Any child that tries to get what it wants using these words will probably fail miserably. It is an unchanging part of childhood that you learn to word a request to another person politely, as a question, and that the little word “please” is always – by some distance – the most important part of expressing a wish. But this iron certainty is at risk. And that’s not because of a vague suspicion that children these days are no longer taught manners by their parents: what might prove to be a much stronger factor is that the highly digitised younger generation has at its command – even from a very early age – a whole arsenal of compliant, uncomplaining helpers and assistants who do not respond with hurt feelings or refusal when given an abrupt command to complete a task immediately. In the American magazine ‘The Atlantic’, author Ken Gordon engages with the effects of this development on future generations. He states that although precise commands are a central component of controlling software, it makes a huge difference whether these are silently conveyed to a system using a keyboard or delivered to a humanised machine assistant via speech. Gordon goes on to say that the fact that Alexa, Cortana, Siri and the rest accept the lack of a “please” or “thank you” without complaint could leave an emotional blind spot in young people. Finally, he concludes that although a speech command is just a different type of programming: “Vocalizing one’s authority can be problematic, if done repeatedly and unreflectively.” But it is still too early to predict how our interactions with each other will change once artificial intelligence and robots become fixed parts of our families, work teams and, ultimately, society.

SEO News

In 2018 the Easter Bunny brought us more than just chocolate, it also gave us the long-awaited mobile-first index from Google. It will be interesting to see what impact this has when it comes to optimising digital assets. I’ve no doubt that we’ll be hearing a lot more about this in the weeks and months ahead. For the time being, however, the SEO news for April has focussed on short clicks, the need for speed in e-commerce, and a new (search) view of the world.

1) Google’s quest for instant search satisfaction

People often look for shortcuts, especially when searching for things on the Internet. We want to find exactly what we’re looking for straight away, with zero hassle. However, a search query doesn’t always take you straight to the result you’re looking for: on average, users return to the search results five times, either to refine their search term or to search for something different. These clicks back to Google or Bing are known as ‘short clicks’. You could argue that the primary objective of search engine optimisation is to make these short clicks obsolete, because the aim is to tailor the content of websites to exactly match what people are searching for. Google has made clear its desire to improve and speed up the search experience, and the company has now rolled out a feature which was extensively tested on mobile devices and in the USA. If you return to Google with a short click, a box with the heading “People also search for” is shown underneath the first search result. This box contains a list of links to similar search queries, and it differs markedly from the list of alternative search queries already shown at the end of the search results. The fact that Google has rolled out this short-click box globally indicates that the company is taking the issue of laborious, time-consuming searches seriously and is prepared to take concrete steps to address it. It’s also another indication that short clicks are a negative signal from the user’s perspective. As such, it’s clear that search engine optimisation teams need to continue their efforts to eliminate short clicks, with a view to making life that bit easier whenever you head to a search engine.
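To make the short-click idea concrete, here is a minimal sketch of how an SEO team might estimate a short-click rate from a simplified click log. The log format, the ten-second threshold and the example domains are all invented assumptions; search engines do not expose this signal directly:

```python
# Each log entry: (query, landing URL, seconds until the user returned to
# the results page; None means the user never came back).
log = [
    ("summer jackets", "shopA.example/jackets", 4),
    ("summer jackets", "shopB.example/jackets", None),
    ("opening hours h&m", "hm.example/stores", None),
    ("amp pages", "blogC.example/amp", 7),
]

SHORT_CLICK_THRESHOLD = 10  # seconds; an assumed cut-off, not a Google value

def short_click_rate(entries, threshold=SHORT_CLICK_THRESHOLD):
    """Share of clicks where the user bounced back to the results quickly."""
    short = sum(1 for _, _, back in entries
                if back is not None and back < threshold)
    return short / len(entries)

print(f"Short-click rate: {short_click_rate(log):.0%}")
```

Tracking this rate per landing page over time gives a rough proxy for how well the page satisfies the search intent, which is exactly what the optimisation effort described above is trying to improve.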

2) A real need for speed in the battle for the top spot in e-commerce

It’s now spring, which means the industry’s conference season is well under way. However, if you compare the main topics being discussed at ‘Search Marketing Expo’ (SMX), the leading international trade fair for the industry, in North America and Europe, you might notice that Accelerated Mobile Pages (AMP) are a much bigger topic in the USA than on our shores. On the other side of the pond, website operators and increasingly also e-commerce providers are placing a major focus on this stripped-down HTML framework. It’s primarily designed to improve the user experience and increase conversion rates by ensuring that pages load faster. What’s more, at this year’s Online Marketing Rockstars conference in Hamburg, representatives from Google also indicated that fast loading times are crucial if you want to stay ahead of the competition, particularly when it comes to the top dog, Amazon. In keeping with this topic, US search expert Eric Enge has published a study which investigates the advantages of AMP technology and aims to dispel four key myths which are presumably holding many companies back from using it. Firstly, Enge points out that AMP is not just suitable for news publishers, even though the ‘stories’ format has only recently been implemented on the AMP platform for them. Instead, using examples from India, he demonstrates that the higher speeds available for e-commerce result in significantly higher conversion rates, particularly in markets in which mobile devices dominate. Enge also explains that opting for AMP by no means requires compromising on design. He argues that the responsive website design recommended by the major search engine operators has major weaknesses compared with AMP. According to Enge, however, more resources should be planned for designing and implementing an optimal AMP user experience, as there are not yet many use cases in this field.
The study also expressly warns against half-hearted AMP implementations, as these make the technology harder to use on mobile devices and needlessly complicate key functions such as navigation. Loading speed should already be a central pillar of every SEO strategy. But if German companies want to prevail against their competitors on the global stage, they too should take a closer look at AMP technology and the benefits it offers.
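For readers who have not yet worked with AMP, the ‘stripped-down HTML’ idea can be sketched in a few lines. The snippet below is a minimal, illustrative AMP page, not a production-ready template – the mandatory boilerplate CSS is abbreviated and the file paths are placeholders:

```html
<!doctype html>
<!-- The "amp" attribute (or the ⚡ emoji) marks the document as an AMP page -->
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <!-- The AMP runtime replaces arbitrary third-party JavaScript -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <title>Example product page</title>
    <!-- Every AMP page links back to its canonical (non-AMP) version -->
    <link rel="canonical" href="https://example.com/product.html">
    <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
    <style amp-boilerplate>/* mandatory AMP boilerplate CSS, omitted here */</style>
  </head>
  <body>
    <!-- amp-img is lazy-loaded by the AMP runtime instead of the native img tag -->
    <amp-img src="product.jpg" width="600" height="400" layout="responsive"></amp-img>
  </body>
</html>
```

The speed gains come largely from what AMP forbids: no author-written JavaScript, statically sized media and a single size-limited stylesheet, so the browser can lay out the page before most assets have even loaded.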

3) Life through a lens – say hello to Visual Search

“Alexa! Can I show you something quickly?” You might already want to give the latest generation of voice-controlled personal assistants this kind of instruction, just as you would ask a member of your family. Yet one fundamental obstacle stands in the way of such integration: voice assistants don’t have eyes. Thanks to what is known as ‘Visual Search’, however, this will very soon be a thing of the past. Increasingly deep integration of artificial intelligence now makes it possible to interpret visual information and recognise objects.

Take ‘Google Lens’ as the most recent example. This visual search tool was launched a few months ago and is available exclusively on Google’s own ‘Pixel’ smartphones. It enables the user to run a search on a photo at the touch of a button: the search engine automatically recognises what is depicted, e.g. sights such as the Eiffel Tower, and then provides relevant additional information such as directions, opening hours, entry prices and reviews. And Google Lens really excels with text. Although text recognition is by no means a new feature, Google can, for example, recognise a photo of a business card as an address format and convert the information into a corresponding contact file.

Google isn’t the only player investing in this field, however. Microsoft upgraded its Bing search engine with an AI package a few weeks ago, which also contained new features for visual searches. And Pinterest – the popular platform which has always considered itself a tool of visual discovery – has put visual searches at the heart of the user experience with its new ‘Pinterest Lens’ app. Not only does Pinterest Lens break down scanned images into attributes such as colour, quality and function, it can also generate shopping links for selected brands from an image search.
It’s no coincidence that all the major Internet companies are placing visual search centre stage. According to the market research institute Gartner, around 50 percent of all mobile searches will be triggered by voice or by an image as early as 2019. The new context-driven searches thus constitute a growth segment that builds on intuitive human behaviour. Whether they will quickly become a real revenue driver in e-commerce, as is currently promised, remains to be seen. What is clear is that search engine optimisation faces a steep learning curve that goes beyond keywords and content.