Apple fans have been waiting years for these keynotes, always wondering whether there will be a “one more thing”. And no, that was unfortunately (once again) not the case in mid-September, when the Apple Watch 4 and the latest iPhone were presented. What we are seeing at the moment is evolutionary development rather than the one innovation that leaves us breathless. Then again, most device designs and features are leaked before the events anyway.

So, no mega highlights? Everyone certainly has a different view of it. My personal highlight was primarily the following development: with the Apple Watch, Apple is increasingly developing into a health brand. The old saying “An Apple a day keeps the doctor away” gets a whole new meaning. While Apple is still a lifestyle brand, the true essence of the brand is to simplify or facilitate the customer’s daily business and life. And this pledge is now being taken to the next level with an infrastructure for sports and health-conscious people who can count on Apple even in a real emergency.

The following three new features of the Apple Watch 4 are, in my view, pivotal for this development:

  • Apple has installed new electrodes and heart rate sensors, so that an ECG can now be taken with the help of the Apple Watch. This is a true milestone and a tremendous benefit for cardiovascular patients. In the past, unwieldy equipment was essential for ECGs; now the watch can take them, and, according to the company, reliably. To remove any doubt, the US Food and Drug Administration has certified the professional functionality of the device. Not only does it make measurement easier in acute cases, but this “Apple healthcare system” also monitors the body around the clock if required, raising the alarm long before the wearer can detect signs of conditions such as atrial fibrillation. The detection rate is 97 per cent, according to one study. This could save lives. In Europe, however, the function will not be available for the time being, as various health and data-protection committees still have to give it their seal of approval.
  • What if you have no cardiovascular problems, but are into extreme skiing, or have a tendency to fall? Then Apple has a function especially for you. In the event of a fall, an automatic alarm will be triggered unless you disable it within a minute. Then, readings and coordinates can also be transmitted to emergency services.
  • And in the future, diabetic patients should even be able to use a blood glucose meter app with “One Drop” on their wrist devices. At least, that’s the promise.

How is Apple managing the transition from a purely lifestyle brand into a health brand? While others simply talk about the Quantified Self movement, which appeals mostly to nerds, Apple is pursuing a plan to become indispensable to its fans.

Trust is fundamental for this. Trust that your most personal and intimate information is completely safe. And here too, Apple has (so far at least) proved itself beyond any doubt. Or, according to Fast Company’s headline after the last keynote: “Forget the new iPhones: Apple’s best product is now privacy”.

And it’s not just since the last keynote that Apple has diverged from many other digital giants in this regard. In recent years, Apple has excelled in how it deals with its – or rather, our – data. It has strictly partitioned all personal user data and protected it against external access, at times even against requests from the FBI. This consistent concern for privacy has earned the company a great deal of customer confidence, and it is probably the best possible foundation for entering the mega-market of future health products.

Today’s 30- to 40-year-old fans will become the largest user group for these offerings in the future. This is where Apple’s trustworthy data handling becomes significant: those fans will have been conditioned for years and will not be able to imagine living without Apple products. Strategic calculation? No, Apple disciples would never accuse their beloved brand of that kind of scheming. But such far-sightedness would certainly be clever.

Stephan Enders, Head of Plan.Net Innovation Studio and a self-confessed Apple fanboy

In the series The inside story x 3, experts from the Plan.Net group regularly explain a current topic from the digital world from different perspectives. What does it mean for Granny, and for an agency colleague? And what does the customer – in other words, a company – get out of it?

While you often find signs saying “No cash, only cards” in shops and restaurants abroad, cash is still the most popular payment method in Germany. In many places, debit and credit cards are not accepted. Up until now, mobile payment has also been difficult in Germany, but that could change this year. With the launch of Google Pay in June and the planned launch of Apple Pay in September, the German market will finally be tapped. So it’s high time for retailers and marketers to consider the impact of mobile payment on the shopping experience.

Mobile payment is still virtually never used in Germany

Grandma, we have to talk. Why are you so determined to hold on to your beloved cash, and so reluctant to use new payment methods? Are you concerned about security? Fear of Internet fraudsters and concerns about data protection are likely the main reasons why mobile payment has not yet been successful in Germany. So far, mobile payment has mostly been used for small, low-risk transactions, such as grocery shopping or public transport.

Or are you reluctant because you do not yet realise how beneficial mobile payment could be for you? Admittedly, contactless payment using debit or credit cards at the register in bricks-and-mortar retailers has the same advantages as mobile payment: speed and convenience.

Whatever the reasons preventing you from giving mobile payment a chance, you’re not alone. According to a survey by Oliver Wyman (2017), only seven per cent of German respondents have ever paid with their smartphone at the point of sale. The positive news: one third of non-users can imagine using mobile payment in the future.

Mobile payment has been mainstream in Asia for some time

Here in Germany, we are still coming to terms with the “new” option of mobile payment, while in Asia paying by smartphone has long been part of everyday life. Last year, 70 per cent of mobile Internet users in China paid using their mobile phones (CNNIC 2017), which accounted for more than half of all payments in the same period, according to a study by the Deutsche Bundesbank. The leading providers are the instant messenger WeChat (WeChat Pay) and the online retailer Alibaba (Alipay). The use of these two mobile payment apps is simple. To complete a transaction, all you need is a QR code and the app. Simply scan the QR code and pay in the app. Alipay and WeChat Pay not only work in large stores, but can even be used to pay for a melon at a small street vendor.

An even more exciting prospect is that, in the future, you may be able to pay without even needing your smartphone to authenticate the payment process.

In the Chinese metropolis of Hangzhou, customers of the fast-food chain KFC can pay with a smile. The face recognition software “Smile to Pay” takes one to two seconds to create a 3D scan of the face and identify the customer. For security reasons, the order is additionally verified by entering the mobile phone number.

Will developments in the mobile payment sector have an impact on agency work? Probably not any time soon. But in the future it may be possible to book the post-payment page as an advertising space with Google Pay and to play ads that are tailored to the user profile – age, gender, region and consumer behaviour.

How can merchants take advantage of mobile payment for themselves?

Mobile payment has so far been somewhat neglected by bricks-and-mortar retailers. Digital trends such as augmented reality, virtual reality and speech assistants have been seen as much more exciting. But mobile payment is an important part of the digitisation of bricks-and-mortar retail and should not be ignored. Many customers already use their smartphone during offline shopping – as a storefinder, for product research or for navigation within the store with the help of augmented reality apps. Now, the next logical step is the digitisation of the purchase.

When bricks-and-mortar stores support mobile payment as a payment method, they offer their customers two advantages, namely time savings and convenience. With consumers’ increasingly fast-moving and mobile lifestyle in mind, these factors are critical to customer experience and customer satisfaction.

The fashion department store Breuninger is a pioneer of mobile payment in Germany. It is the first German department store to offer Alipay and WeChat Pay as payment methods, aimed at wealthy, free-spending Chinese tourists.

Will German shoppers start using mobile payment methods more in future? Only time will tell. In any case, bricks-and-mortar stores should already be considering how to integrate mobile payment into the customer experience.

Ever heard of Kuro Takhasomi? No? Better known by his nickname KuroKy, the 25-year-old from Berlin is one of the biggest stars in his sport, and has already played his way to over USD 3.7 million in prize money. In mid-August, KuroKy and his co-players from Team Liquid will be competing in “The International”, a major eSports tournament taking place in Vancouver. Their aim? To defend their title in the world’s most lucrative eSports event. The name of the event? Dota 2, a team-based computer game. Long derided as the antisocial hobby of cellar-dwelling teens, eSports are now well on their way to becoming a billion-dollar market – and the sponsorship and advertising opportunities in this rapidly-developing scene are enormous.

eSports: as diverse as the traditional kind, with stars emulated by millions

If my Granny were to ask me what eSports are all about, the answer would be pretty straightforward. Just like in the Olympics, players compete with each other either individually or in teams in disciplines of all kinds. Instead of volleyball, tennis or archery, however, these disciplines are computer games, such as Dota 2, League of Legends or Counterstrike. The equipment? Rackets, trainers, and balls are replaced here by a mouse and a keyboard. And the different games are just as varied as traditional sporting disciplines – there’s no single “eSport”.

The various games are organized into leagues and championships in which the competing teams often hail from all over the world. The final rounds of these fill massive arenas with thousands of spectators, with the events also broadcast on the Internet and, increasingly, on traditional television.

There is one thing that sports and eSports do have in common, though: the leap from hobby to career can only be made with years of hard training and huge amounts of discipline. And it’s here that the answer can be found to that often-asked question, “Why would people want to watch other people playing games?” For the same reason that people sit in front of their TVs watching the likes of Lionel Messi, LeBron James or Serena Williams do their thing: because they’re the very best in their respective disciplines, and can perform feats that the hobby player can only dream of.

eSports: an attractive media environment and a driving force behind streaming platforms

eSports are primarily a digital entertainment medium, with high coverage and long viewing times that make them perfectly suited to digital display and video advertising. The primary target group consists of young, tech-savvy men, who are nowadays often difficult to reach using traditional media. The most important eSports platform is undoubtedly Amazon's subsidiary Twitch, where eSports count among the most watched content. A special feature of eSports is the close link between the pros and the fan community: many eSports enthusiasts not only follow the big tournaments, but are also loyal viewers of daily player training sessions, during which they are able to interact directly with the stars they emulate and learn more about their favourite games.
Recent years have also seen YouTube and Facebook begin investing heavily in eSports. At the beginning of the year, the Electronic Sports League signed an exclusive streaming deal with Facebook for some of its popular tournament series, including Counterstrike.

A place where young target groups still think sponsorship and marketing are cool

For companies, eSports represent an extremely attractive sponsorship environment. This is because it is precisely those target groups who would otherwise be unlikely to be especially open to sponsorship and advertising communication who are really interested in seeing “their” game and “their” heroes flourish.
The multiplier effect of sponsoring eSports teams or players shouldn’t be underestimated either. The continuous presence of gamers on streaming platforms both between and during tournaments serves to make sponsors an integral part of the community; after all, it is the sponsors who make it possible for the athletes to turn their hobby into a career and to compete at a high level without financial concerns. The fans appreciate this, which makes it easy for brands to cultivate a positive perception of themselves within the scene. In combination with a social media team that engages to some extent with gaming culture and interacts with fans on an equal footing, as well as minor campaigns such as give-aways, this can result in a powerful marketing tool.

A recently published study by our colleagues from WaveMaker has shown that, in addition to high awareness, brands with a presence in the eSports environment – primarily those from the drinks and technology sectors – have also achieved very high brand activation among eSports fans.

And another thing: A good opportunity to gather some first impressions of eSports, gaming, and its fans will be provided at the end of August by GamesCom in Cologne.

At last, summer is here. But artificial intelligence doesn’t take the summer off, so it can be the ideal babysitter in the car, especially when stuck in a traffic jam. That is, as long as the voice assistant actually has something to say. That’s what our SEO News for the month of August is all about. And of course, we can’t avoid the notorious silly-season monster.

1) Speaking notes for Google Home

Dialogue with machines is still a hot topic. Last month, we reported here on Google Assistant’s automated voice commands. Now, Mountain View is substantially simplifying the world of voice assistants for content publishers who are trying to get started in this area. “Speakable” is Google’s first semantic markup that identifies text passages for voice output. The company states that the markup was developed through an industry initiative and is still in beta. With “Speakable”, news publishers and other content providers can mark up short, speech-optimised sections within an article or webpage so that the Google Assistant can use them directly. Google advises that the text should be a maximum of two to three sentences long, similar to a teaser, giving the assistant a talk time of around 20 to 30 seconds. For optimal use, the content should present the topic informatively and in short sentences; Google suggests that headlines can also be used. When selecting content, make sure that technical information, such as captions, dates or source references, does not interfere with the listening experience. In the age of artificial intelligence, the optimised use of markups is becoming increasingly important for search engine optimisers, especially as the number of delivery platforms keeps growing. Standardised supplemental information in the source text enables all systems involved in selecting and displaying search results to reliably collect and optimally process the data. The “Speakable” feature will initially only be available in English in the US market. However, Google has stated that it plans to launch in other markets, on the condition that “a sufficient number of publishers implement Speakable”. So the SEO industry will certainly have its work cut out.
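To make this more concrete, here is a minimal sketch of what a Speakable annotation might look like as JSON-LD, generated here with Python. The page name, URL and CSS selectors are invented for illustration; in practice, the selectors must point at the short, teaser-like sections of your own page.

```python
import json

# A minimal sketch of a "speakable" markup block as JSON-LD.
# The name, URL and cssSelector values are hypothetical; the
# selectors must match the teaser-like sections of the real page.
speakable_markup = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "Example article",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".headline", ".summary"],
    },
    "url": "https://www.example.com/article",
}

# This string would be embedded in the page inside a
# <script type="application/ld+json"> element.
json_ld = json.dumps(speakable_markup, indent=2)
print(json_ld)
```

Embedded in the page source, a block like this tells the assistant exactly which two or three sentences are safe to read aloud.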

2) More opportunities for Amazon Alexa

When it comes to the future of digital searches, the focus is slowly shifting from analysing requests and intentions to reflecting on answers and output systems. The key challenge for successful human-machine communication, alternating between interactive displays, augmented reality and voice assistants, will be to provide the best possible result for each channel. Is there one answer, multiple answers, or does the initial question lead to a conversation between the search system and the searcher? In principle, the process is the same with a virtual assistant as it would be with a physical advisor: Do you want a quick result or a full sales pitch? Do you want to be left alone to browse quietly or do you need the help of a sales assistant? Is a brief answer enough or do you want to break down your query into more specific stages until you get the right result? The American company “Yext” has now introduced a collaboration with Amazon, which enables the import of NAP data (name, address and telephone number), as well as allowing the voice assistant Alexa to import opening hours directly from local companies. The New York-based company told journalists that they plan to further integrate their interface with Amazon Alexa in the future. Product data and catalogues may be included at a later stage, but this has yet to be decided. The automation of data exchange between the owners of digital offers and search systems is already a key component of success in modern digital retail. The goal is to create an optimal user experience at the point of output, as well as valid measures of success. Providing and optimising data feeds is key for the optimal functioning of Google’s PLA (Product Listing Ads) or the use of price search engines and affiliate networks. In the world of Amazon, the necessary interfaces and tools are only gradually being created.
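As an illustration, a NAP record of this kind is compact enough to sketch in a few lines. The field names below are purely hypothetical and do not reflect an actual Yext or Amazon API schema; the point is simply how little structured data a voice assistant needs to answer a local query.

```python
# Hypothetical NAP record (name, address, phone) plus opening hours,
# roughly the data a local business would sync to a voice assistant.
# Field names are illustrative only, not a real Yext or Amazon schema.
nap_record = {
    "name": "Cafe Beispiel",
    "address": {
        "street": "Musterstrasse 1",
        "postalCode": "80331",
        "city": "Munich",
        "country": "DE",
    },
    "phone": "+49 89 1234567",
    "openingHours": {"mon-fri": "08:00-18:00", "sat": "09:00-14:00"},
}

def is_complete_nap(record):
    """Check that the minimal fields a voice assistant needs are present."""
    required = ("name", "address", "phone")
    return all(record.get(key) for key in required)

print(is_complete_nap(nap_record))  # prints True
```

A completeness check like this is exactly the kind of validation an automated feed between business owners and search systems has to run before every sync.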
And when it comes to profiting from the growth of digital voice assistants, that’s exactly where the greatest potential currently lies.

3) An SEO Rabbit goes on a SERP Rampage

Do you still remember Lotti the snapping turtle, Yvonne the elusive cow, or Sammy the caiman? Fortunately, there are search engines that give us the opportunity to relive the stories of these adventurous, silly-season animals. And even years later, we are still captivated by them during the summer-holiday slump. In this latest ‘animal’ news sensation, it was a virtual rabbit that brought the world’s largest search engine Google to its knees. The story was published under the headline “Rabbit Bug” by the Spanish programming collective “La SEOMafia”. According to their information, a table was inserted into the source code of a website in order to deliberately manipulate Google’s SERPs. The bug was based on the fact that Google cannot interpret this formatting when displaying the search result, leading to the sudden termination of the search result display after the manipulated entry. This bug was implemented on a top-ranking site for the keyword “conejos” (Spanish for rabbits), with the result that only one, manipulated, search hit was displayed. It is easy to imagine the incredible click rates that could be achieved by using this strategy. It’s always a pleasure to see some creative spirits shake things up in the now mature and grown-up world of the SEO industry. Eventually, even Google’s SEO liaison officer John Müller became aware of the Rabbit Bug and reported on Twitter with a wink that he had circulated the sighting of the rabbit in-house. The rabbit is now threatened with the fate of all silly-season animals. In the end, most were captured or killed.

Out with the cookie cutter approach and into the customer’s head

You have created award-worthy advertising and invested heavily in media – and then your customer gets stuck on an incompetent hotline for over half an hour. You have sent out a perfectly personalised e-mail newsletter – but unfortunately your customer is redirected to a general category page of your online shop when they click on it. You start a limited sales promotion, but even weeks after it’s ended the retargeting banners follow your users everywhere. Marketing is a bit like dating: sometimes it’s the small things that can ruin a good first impression.

When customers gain experience with brands or products today, they do so in many places: in store, online, via social media, on the phone and on the street. In the best case, this customer experience results in a coherent overall picture. But in reality this is often not the case. Why? Because companies often have structures that make it difficult to place the customer experience at the centre of their actions – and this despite more and more companies being aware of how important this aspect is.

A consistent customer experience needs new structures

In times of increasing price transparency and decreasing brand loyalty, a coherent customer experience is an important differentiating feature. If you can’t find AND retain your customers, you’re in trouble. For brands this means concentrating on giving the customer reasons to become and remain a customer. And because the platform economy of the digital world is making (price) comparisons easier and lowering exchange hurdles, it often no longer comes down to ONE reason – a great product, an unbeatable price, a good brand image people like to show off and so on. The key to success and sustainability in the digital age is a coherent and above all relevant customer experience.

A further challenge is that digitalisation affects many, if not almost all, areas of a company, from product development to management, marketing and services. If companies really want to put the customer at the centre of their activities, they have to tackle this task across departments. This means breaking down the barriers between areas and/or promoting a different form of cooperation within the company. Admittedly, this is a complex and far from easy task. Let’s take marketing and communication as an example: traditional advertising, digital marketing, CRM or dialogue marketing, PR/corporate communication and social media often exist side by side in historically separate silos.

Utility and usability are what make the difference

Relevance is a decisive factor in determining customer experience. Relevance is determined by the customer’s subjective experience. Does the customer like the advertising? Was the person on the hotline friendly? Did the customer find what they were looking for on the website quickly? Every customer makes their own judgement. If you summarise the evaluation criteria, they can be divided into two general categories: First is utility and second, usability.

Utility describes how valuable the experience and the received content were for the user. How well does my experience correspond to my particular requirements? Does the content answer my questions? Does it solve my problems? Does it meet my expectations or even surpass them?

Usability is an overarching term for the user-friendliness of customer experiences. It’s not about the content, but about how easy it is to use, control and operate products or services. And of course the experiences at the different touch points must also add up to a coherent overall picture and be closely interconnected.

Experience from many projects shows that utility and usability only form a coherent picture if companies enable their different experts inside and outside the company to work together on a relevant customer experience.

Lufthansa personalisation example: 500 million newsletters

In a digitalised world of brands, people expect meaningful, personalised content. A company like Lufthansa, for example, sends out 500 million newsletters a year to different target groups, in different locations, featuring a wide variety of services. The keyword “personalisation” encompasses an extremely complex and elaborate communication architecture designed to ensure a coherent digital user journey, from the user’s first inspiration to fly, long before take-off, all the way to landing back at home. Plan.Net built a dedicated newsletter cockpit for the airline for the sole purpose of personalising its newsletter: a shared platform for Lufthansa Marketing and its service providers, with an intuitive interface, a modular system for content and a real-time preview. Dialogue communication via e-mail is also synchronised across the board with banners, apps and social media platforms. This is just one example of a project that could only be realised through cross-departmental work, coordinated between the brand and the service provider.
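The modular principle behind such a cockpit can be sketched in a few lines. The module names, journey stages and profile fields below are invented for illustration and have nothing to do with Lufthansa's actual system; the point is merely how content modules get matched to a recipient's journey stage and interests.

```python
# Toy sketch of a modular newsletter system: each content module is
# tagged with a journey stage and a topic, and modules are selected
# per recipient. All names here are hypothetical.
MODULES = {
    "inspiration": {"stage": "dreaming", "topic": "destinations"},
    "fare_deals": {"stage": "planning", "topic": "offers"},
    "seat_upgrade": {"stage": "booked", "topic": "ancillaries"},
    "arrival_guide": {"stage": "travelling", "topic": "destinations"},
}

def select_modules(profile):
    """Return the modules matching the recipient's journey stage or interests."""
    return sorted(
        name
        for name, meta in MODULES.items()
        if meta["stage"] == profile["stage"] or meta["topic"] in profile["interests"]
    )

# A recipient who is planning a trip and interested in destinations:
print(select_modules({"stage": "planning", "interests": ["destinations"]}))
# prints ['arrival_guide', 'fare_deals', 'inspiration']
```

In a real system the matching would be driven by CRM data and behavioural signals rather than two profile fields, but the principle of assembling one newsletter from a pool of tagged modules is the same.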

Audible, the subsidiary wholly-owned by Amazon, takes a different approach. The market leader in the digital distribution of audiobooks follows a 360-degree approach that combines communication, media, research and tracking. A wide variety of content is prepared and controlled via media placements in order to address the users in the right way – depending on their interests and needs, as well as what stage of the user journey they are at. The cost-per-lead can be significantly reduced by using content marketing tailored to the user experience like this.

There are many ways to ensure a coherent user experience, and each one is often unique to the company and products. I would therefore advise focusing first and foremost on relevant customer experience, and therefore on your existing customers themselves, when redesigning your marketing strategy. To do this you should ask yourself five questions:

  1. Who are my customers?
    This may sound banal, but in many companies the available data and information are neither evaluated as comprehensively as they could be, nor used across departments to the extent possible.
  2. What moves my customers?
    It is not only social media that gives you the opportunity to learn what people think about you and what their needs are. Take advantage of these opportunities and always think from the user’s perspective when creating your products and services.
  3. Where do I reach my customers?
    Which media and non-media points of contact do my customers use in which phase of their relationship and what are their intentions?
  4. What added value can help me to be more customer-centric?
    Product enhancements, services – there are many ways to expand a service in a customer-centric way. Use solutions from partners as needed – you don’t have to reinvent the wheel every time.
  5. How can I personalise my offers?
    Communication, websites, services and products – almost everything can be personalised nowadays. Use this opportunity to create the highest possible relevance.

If you have answered these questions honestly and comprehensively, you will have created a very good basis for the best possible success today, and for the sustainability of your marketing tomorrow.

Summer is finally here and the days are long, which gives us plenty of time to think about the fundamental questions of life. That’s why the July issue of SEO News examines not just the forthcoming Google updates, but also a game show for machine cognition and the future pecking order on our planet.

1) Achieve good rankings quickly and securely

Once again, Google is focusing on the convenience and security of Internet users. The company (which in its own words aims to do no evil) is launching not one, but two updates in July, whose effects will benefit Internet users and website operators alike. Both changes were announced a long time ago and have already been partially implemented. The first will see the loading speed of mobile websites become an official ranking factor. Loading speed is already listed as a quality criterion in Google’s top 10 basic rules for website quality; however, it has taken a very long time for it to become a genuine ranking factor. The change was originally motivated by studies showing that slow-loading websites suffer direct impacts on their clickthrough and conversion rates, and the speed argument was also repeated like a mantra by Google representatives at various search conferences during the 2018 season. The subsequent introduction of the Mobile First Index (see our report here) means that the rule has now been made official for mobile sites too. Google recommends that website operators analyse their domains using Google’s own “PageSpeed Insights” and “Lighthouse” tools and make the necessary changes to their mobile websites. Alongside its speed update, Google is also getting serious in July with its announcement that websites which are not converted to the encrypted HTTPS protocol before the deadline will be marked as “not secure” in Chrome. This change also marks the end point of a campaign launched back in 2016, when Google began its awareness-raising work with a small ranking boost for secure websites. Google has described that measure as a success, stating that around 68 per cent of all Chrome traffic on Android and Windows now occurs over HTTPS – and there is plenty of scope for that percentage to grow.
The fact that Google is leveraging its market power to implement technical standards with the aim of improving the user experience is a step in the right direction. Many companies were only prepared to invest in faster technology or security certificates when threatened with reductions in traffic or sales. In order to prepare for future developments, it is advisable to keep an eye on new technologies such as AMP (Accelerated Mobile Pages), mobile checkout processes, and pre-rendering frameworks that allow content to be pre-loaded. These innovations can help you keep pace, especially when it comes to improving users’ perception of loading speed on all platforms.

2) Life is one big game show

This bit will be tricky for those of you who didn’t pay attention in maths. Remember that moment back at school, somewhere between integral calculus and stochastic processes, when you belatedly realised that you’d completely lost the plot? Well, in the age of algorithms that will come back to haunt you – especially if you work in online marketing. In everyday terms, an algorithm is nothing more than a carefully ordered chain of decisions designed to solve a problem in a structured way. The crucial innovation in recent years is the advent of artificial intelligence and machine learning. Nowadays, the individual links in the algorithmic chain are no longer assembled by people, but by programs. When you ask a search engine a question, the information is taken in, its core information object (the entity) and intention are identified by means of semantic analysis, and the most empirically appropriate result (the ranking) is returned in the correct context (e.g. local and mobile). However, a group of seven Google engineers presented a research project at the ICLR Conference in Vancouver that turns the question/answer principle on its head. For their project, the researchers used tasks taken from the popular US game show “Jeopardy”. On this show (first aired in 1964), contestants are required to provide the right questions in response to complex answers. In their study, the Google engineers exploited the fact that Jeopardy tasks involve information deficits and uncertainties that can only be resolved by formulating the right question. In other words, the question needs to be adapted until the information provided in the answer makes sense in its specific combination and context. The human brain performs this task in a matter of seconds, and is able to draw upon a comprehensive range of intellectual and social resources as it does so. 
However, if you ask a Jeopardy question (such as “Like the Bible, this Islamic scripture was banned in the Soviet Union between 1926 and 1956”) to a search engine, you will not receive an appropriate answer. Google returns a Wikipedia article about the Soviet Union, meaning that it interprets this search term as an entity or a core information object, and thus falls short. Microsoft’s search engine Bing comes a little closer to the obvious answer from a human perspective (“What is the Koran?”), but is likewise unable to deliver a satisfactory result. This little trick involving Jeopardy questions makes clear what the biggest problem is for search engines (even though it is marketed as one of the main markers of quality for modern search systems): how to accurately recognise the intention behind each search query. The idea is that what SEO professionals in companies and agencies currently work hard to develop should be reliably automated by the search engines themselves. In order to achieve this, the Google researchers developed a machine-learning system that reformulates possible answers to the Jeopardy question into many different versions before passing these on to the core algorithm itself. In step two, the answers obtained are then aggregated and reconciled with the initial questions. The results are only presented to the user once these two intermediate steps are complete. The self-learning algorithm then receives feedback on whether its answer was right or wrong. The AI system was subsequently trained using this method and with the help of a large data set. As a result of this training, the system learned how to independently GENERATE complex questions in response to familiar answers. This milestone goes far beyond simply UNDERSTANDING search queries, which are growing increasingly complex under the influence of voice and visual search. 
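The reformulate-and-aggregate loop described above can be sketched schematically. Everything below is a toy stand-in invented for illustration – the `reformulate` templates and the tiny `core_search` lookup merely take the place of the trained sequence models and the real retrieval engine:

```python
# Toy sketch of the two-step pipeline: rewrite the question many ways,
# query a core engine, then aggregate the answers by confidence.

def reformulate(question: str) -> list[str]:
    """Stand-in for the learned rewriter: emit candidate paraphrases."""
    templates = [
        "what is {q}",
        "which scripture {q}",
        "{q} islamic text banned soviet union",
    ]
    return [t.format(q=question.lower()) for t in templates]

def core_search(query: str) -> tuple[str, float]:
    """Stand-in for the core QA engine: return (answer, confidence)."""
    # A minimal lookup standing in for the real retrieval system.
    index = {
        "what is like the bible, this islamic scripture was banned "
        "in the soviet union between 1926 and 1956": ("the koran", 0.9),
    }
    return index.get(query, ("soviet union", 0.3))

def answer(question: str) -> str:
    # Step 1: rewrite the question into many candidate versions.
    candidates = reformulate(question)
    # Step 2: query the core engine and keep the highest-confidence answer.
    results = [core_search(q) for q in candidates]
    best_answer, _ = max(results, key=lambda r: r[1])
    return best_answer
```

In the system the article describes, the rewriter is trained on feedback about whether the final answer was right or wrong, so the reformulations themselves are learned rather than hand-written as they are here.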
Although this study was carried out by Google, we can assume that Microsoft, Yandex and Baidu are also working on equivalent technologies designed to further automate the recognition of search terms and to automatically generate complex, personalised content in the not-too-distant future. At present, however, it is impossible to gauge what effects this might have on the diversity and transparency of the Internet.

3) Google Assistant sets the tone

While we’re on the subject of automatic content generation, we also have an update on Google’s uncanny presentation of two phone calls between the Google Assistant and the working population. Back in May, the search engine giant from Mountain View presented a video at its I/O developer conference in which an AI extension to the Google Assistant named “Duplex” booked an appointment at a hairdresser’s and a table in a restaurant entirely on its own, all while perfectly imitating human speech. The human participants in those conversations were apparently unable to recognise that they were interacting with a machine while they went about their work. Close collaboration with robots and AI systems has long been familiar to industrial workers in the Western world, but now this development is also moving into the service economy, and therefore into our day-to-day lives. At first glance, the Google scenario was astonishing and convincing; however, the unnerving initial impression was swiftly followed by a number of pressing questions. In particular, the fact that Duplex failed to identify itself as a machine to its human conversation partners was the subject of considerable debate. Google has since responded and published a new video in which the Google Assistant identifies itself at the start of the conversation and states that the call will be recorded for quality control purposes – similar to a recorded message in a call centre. Taking a more detached view, however, one wonders whether this responsiveness on the part of the artificial intelligence is actually completely superfluous. The restaurant employee in the video follows the Google Assistant’s instructions obediently, as if he were talking to a human being – there is no difference whatsoever. In search marketing, we attempt to further our own interests by reflecting the intentions of target groups and consumers in the content produced by search engines (the results pages).
In voice search, we issue commands to a machine – and a number of years will pass before we learn how that will change us. And in Google’s future scenario of an invisible, omnipresent and convenient system that allows users to organise themselves and solve problems, the human simultaneously becomes both the subject and the object of the technology. Our data was used to create, feed and train the system, and so we may briefly feel ourselves to be its masters; however, given the current state of affairs, we can and should seriously question whether we will recognise the point of no return once the balance finally tips.

Times are changing, and so are technologies, markets and customers. By now, everybody has realized that in the big lottery called “digitalization”, some have a lot to gain, some have a lot to lose and we all have a lot to learn. In this context, a swath of new management tools and techniques touted as “agile” and “lean” have taken middle management by storm. One of these shiny, new Silicon Valley tools is the MVP or Minimum Viable Product.

You can barely get into a discussion about developing a product, working on a project or solving another problem of substantial complexity and vagueness these days without somebody suggesting you “just do an MVP”. There seems to be no problem an MVP approach cannot solve. Don’t know what the scope of our project is? Let’s do an MVP. Don’t know what our customer wants? Let’s do an MVP. Don’t really have any time or budget? Let’s do an MVP.

Maximum Vexing Problem

And this could be fine. While an MVP approach might not solve all the problems just mentioned on its own, in almost all cases some valuable insights can be derived from applying it. Who would disagree that in a volatile environment, with vaguely defined problems and even vaguer solutions, an approach based on many small steps might be better suited than one big (mis)step? In this context, the concept of developing a product (or solving any other problem) by quickly implementing possible ideas, testing hypotheses, gathering feedback and gradually improving on the solution found is not that much of a stretch. There is only one caveat: in order for the magic to work, all sides involved must have a common understanding of what the concept actually entails. Which, unfortunately, is often not the case.

Minimum and Product

As with many simple and memorable concepts, the devil is in the detail, so let’s dig in a bit: MVP is defined by three distinct terms: Minimum, Viable and Product. The first and the third, we (and our stakeholders) usually grasp quite quickly and easily:

Minimum states that we are aiming for something small, efficient and effective. We want to reach a maximum of X with a minimal input of Y (often time, money, people). That’s what gets most people (and upper management) hooked after all: the promise of getting big X while spending little Y.

Product is also rather easy to grasp; we are not creating something for our personal enjoyment, we aim to create something a customer might want to use or buy. Therefore, this person, group of stakeholders or demographic and their feedback naturally sets the bar for our efforts.

“V” as in “Viable”

Which leaves us with the term “viable”, which teams struggle with the most and which can make or break this approach. Let’s rephrase “viable”, so the problem is clearer: a solution is “viable” if it is capable of delivering on our expectation of success. As these “expectations of success” might vary widely from stakeholder to stakeholder, a shared understanding of what “viable” means, or lack thereof, is what makes or breaks a team taking the MVP route.

Depending on their professional training, current role, subjective perception and many other variables involved, each stakeholder might have a drastically different understanding of what “viable” means. So, in order to succeed with an MVP approach, it is crucial to make each stakeholder’s expectations as explicit as possible and to continuously work on aligning them.

Making “Viable” more “Viable”

One way to do away with the gravest misunderstandings of what should be achieved by using an MVP approach has been suggested by Lean/MVP coach Henrik Kniberg. He proposes three substitutions for the term “viable” depending on the main objective a team tries to achieve through the use of an MVP:

Minimum Testable Product – Anything that you can learn something from. Start with the things that provide the most value with regard to functionality, general viability, assessment of value propositions, risks or chances.

Minimum Usable Product – Something a user can get (limited) use from and give feedback on in terms of how to improve the product. Again, focus on the things that provide the most value with regard to “use” and/or “feedback”.

Minimum Loveable Product – Something a user might actually like enough to buy, use continuously or recommend to others. Ideally this builds on insights gained from previously established Minimum Testable and Minimum Usable Products.

These substitutions make the many facets of “viability” explicit and thus sharpen the concept of MVP based on the context. With them you can provide the right terminology to onboard teams and stakeholders enthusiastic about the concept. They might also come in handy with teams in distress, where a clarification of “viable” would provide some basic structure and orientation, allowing them to successfully complete a product or project they have already started. Last but not least, they can serve as a means for a more informed discussion with stakeholders who have already had negative experiences with the careless use of an MVP approach.


At Plan.Net we have been successfully using MVP-based approaches in a variety of situations – for example, in the development of ambitious digital products, such as a lean partner-integration solution we created for a global logistics company, or location-based offers and services for the apps of one of Germany’s biggest loyalty programmes. In the latter case, a focused and iterative approach helped us implement a challenging technology stack of GPS, Bluetooth and NFC in a stable way, while remaining flexible enough to react to new input on the product specification. In general, MVPs are especially viable (you see what we did there) in cases where technological and creative demands have to be reconciled – situations that are often marked by conflicting priorities, high pressure and uncertainty.

Another case where we made good use of an MVP was the relaunch of the platform portfolio of a big German energy supplier, where it helped us manage an ambitious roadmap and a big scope. The approach also pays off when trialling new technology: in the development of an AR prototype for a big railway company, an MVP approach helped us keep cool in the heat of the digital jungle.

Beyond the buzzword

So, new challenges need new ideas. We need new approaches for working together, we need new tools and we need new ways of collaborating. We need to be willing to try out things together, to achieve greatness or sometimes even fail together. What we do not need are buzzwords being detonated in meetings like stun grenades, leaving behind nothing but a headache and a ringing sound in everybody’s ears. We need to look “beyond the buzzword” and pragmatically assess how “agile” and “lean” approaches can help us solve problems instead of obscuring them.

In The inside story x 3 series, experts from the Plan.Net group regularly explain a current topic from the digital world from different perspectives. What does it mean for granny, and for an agency colleague? And what does the customer – in other words, a company – get out of it?

In recent years, hardly any other technology has painted so many scenarios of a golden future as blockchain, THE solution for decentralised, tamper-proof storage of transaction data. But what’s next?

My granny and blockchain: What is it and is it the same thing as those bitcoins?

Not really, dear granny; blockchain is actually the technology that provides the basis for non-physical currencies like bitcoin, among other things. Simply put, blockchain is an extendible list of data records that are arranged in blocks – hence the name. Imagine a train that gets more and more carriages attached to it over time. In these data carriages, every process – a payment transaction in the case of bitcoin, for example – is recorded chronologically. Since this ‘data train’ is not just stored and updated on a single computer, but on many different ones, the data are much more secure. This is because if somebody wanted to retrospectively modify a data record – say they wanted to cover something up, for example – it wouldn’t work, because the single piece of false information wouldn’t be able to combat the many correct ones.
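Granny’s “data train” can be sketched in a few lines of code: each carriage (block) stores a fingerprint (hash) of the one before it, so quietly rewriting an old record breaks every later link. This is a minimal illustration of the principle, not a real blockchain implementation – there is no network of computers here, just the chain itself:

```python
# Minimal hash-chained ledger: each block commits to its predecessor.
import hashlib

GENESIS = "0" * 64  # placeholder hash for the very first block

def block_hash(prev_hash: str, data: str) -> str:
    """Fingerprint a block: hash of the previous hash plus this block's data."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records: list[str]) -> list[dict]:
    """Attach one 'carriage' per record, each linked to the one before."""
    chain, prev = [], GENESIS
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain: list[dict]) -> bool:
    """Recompute every link; any retrospective edit breaks the chain."""
    prev = GENESIS
    for blk in chain:
        if blk["prev"] != prev or blk["hash"] != block_hash(prev, blk["data"]):
            return False
        prev = blk["hash"]
    return True
```

Tampering with an old record makes `is_valid` fail, because every later hash was computed over the original data – which is exactly why the many honest copies of the chain can expose a single falsified one.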

Bitcoin, a digital payment system that can be used worldwide, is based on the blockchain principle. Extremely secure and simple to use, it manages without a bank because the payment transactions take place in a network in which everyone can co-operate. But although it sounds good in theory, in practice there are a few disadvantages. In order to access these digital coins, you need an online bureau de change that you can hand over your money to. Unfortunately, however, it is very difficult to know whether this currency exchange is sufficiently protected against attacks from hackers. In addition, there are very few things you can buy using bitcoins, as bitcoin is actually extremely unsuited to being used as a means of payment: processing payments takes a long time and involves high fees, and the value of the currency is too variable. And that’s why it’s quite a bad idea to invest your savings in bitcoin – unless gambling is your passion.

However, blockchain also has potential in areas other than digital currencies: for example, end-to-end, transparent supply chain documentation in the food industry. In several countries, blockchain technology is also frequently used in public administration, for example, for notarial services or the administration of medical data. But it’s still got a long way to go.

My colleague and blockchain: Will it help us to create transparency in online advertising?

In the digital media business, there are several potential areas of application for blockchain. One example is the transparent and secure handling of campaigns that use programmatic advertising. In this context, blockchain would be able to solve problems around ad fraud, brand safety and billing by providing end-to-end, tamper-free documentation in the chain that shows which impression was delivered to which bid as well as where and when. In terms of reporting performance value, too, blockchain would be able to help to ensure consistent data records between publishers, agencies and customers.

To this end, a number of start-ups have already designed solutions, but large-scale use is still blocked by huge development and coordination costs among all parties involved. However, it does not always have to be blockchain – the Ads.txt initiative from IAB Tech Lab is a simple tool that has already made great strides in terms of helping to avoid inventory fraud.
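To make the Ads.txt mechanism concrete: it is nothing more than a plain text file served at the root of the publisher’s domain, listing which ad systems are authorised to sell that publisher’s inventory, so buyers can discard bids claiming to come from anyone not on the list. The domains and account IDs below are made-up placeholders; the line format follows the IAB specification:

```
# ads.txt – served at https://example-publisher.com/ads.txt
# Format: <ad system domain>, <publisher account ID>, <DIRECT|RESELLER>[, <certification authority ID>]
adexchange.example.com, pub-0000000000000001, DIRECT, abc123def456
resellerssp.example.net, 98765, RESELLER
```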

Blockchain for the customer: So which of the many providers are trustworthy?

Programmatic media is interesting for advertising companies too, of course. Moreover, blockchain can be deployed as a commercial platform between producers and customers in the area of content delivery, for example, in the field of image rights.

In recent months, numerous start-ups and established technology companies have launched products and apps based on blockchain. The biggest disadvantage for many providers: as a rule, each platform uses its own currency as a means of payment in the form of coins or tokens, which are used to pay for transaction processing as well as buying and selling media stock or digital licences. If several services are used, companies basically have to keep a wallet full of different currencies on hand – and that is impractical, and not just on holiday.

The first sale of these currencies (known as an Initial Coin Offering, or ICO for short) has given blockchain start-ups a fresh opportunity to raise capital. However, belief in the platform’s success is still the underlying factor: if the business model works, the value of the currency increases at the same time. This is useful for the investors and platform operators; not, however, for anyone who would like to use the currency to buy media or (digital) goods on the platform. And by no means are all providers successful. Research carried out by a cryptocurrency news site shows that of the more than 900 ICOs in 2017, almost 60 per cent of the companies failed or as good as failed. For now, the best course of action is: wait, drink a cup of tea and see how things develop.

SEO News

If you think this June issue of SEO News will only be about the impact of Google’s Mobile Index, think again. We prefer to wait a bit on that. As the summer begins, we are therefore focusing on the return of a powerful tool, the prerequisites for good SEO work, and an industry in the throes of fake news.

1) The return of Google image search

The image bubble has burst. After a long legal dispute with the image agency Getty Images, Google decided to make some changes to its popular image search function. How positively these changes have affected website operators can be seen from a survey by the US search expert Anthony Mueller. But let’s start from the beginning. In January 2013, Google changed the way its image search function worked so that every user could directly view and download found images. A key aspect of this was that the files were buffered on the servers of the search engine, where users could access them with the ‘View Image’ button. As a consequence, clicks on the sites of content providers and rights holders nearly vanished, and organic traffic from image searches plummeted by more than 70 per cent in some cases. This development was especially perilous for websites that rely on visual impact for inspiration, such as fashion or furniture merchants, and had put a lot of effort into optimising their image content. Particularly for e-commerce operators, this collapse in traffic also meant a collapse in turnover. Three years later, the renowned Getty Images agency submitted a competition complaint to the European Commission, apparently hoping that ‘Old Europe’ would again set things right. Getty’s efforts were rewarded, with the result that the ‘View Image’ button disappeared from Google image search in early 2018. Interested users now had to visit the original sites to access the original files. That prompted Mueller, the well-connected search expert, to ask some 60 large enterprises worldwide whether, nearly six months after the change, they had seen any impact on their website traffic. The result: on average, visits from Google image search have risen by 37 per cent. Although the figures for impressions and ranking positions in image search have remained relatively stable, click-throughs have risen dramatically for all of the surveyed enterprises.
The survey also indicates that conversions from image searches have grown by about 10 per cent. Of course, savvy users can still switch to other search engines, such as Microsoft’s Bing or DuckDuckGo; those two search engines never got rid of direct access to image files. However, due to Google’s market power, this is exactly the right time to give new priority to the optimisation of image content and exploit the new growth potential, according to the author. At present, text search is still the dominant method for acquiring information, but there are signs of a paradigm shift towards visual search, particularly in retail.

2) Getting better results with smart SEO goals

Thanks to the Internet, contacts and advertising impact are now more measurable than ever before. Although the digital revolution in advertising is no longer in its infancy, it has by no means reached the end of its evolution. With digital campaigns, it is easy to define suitable key figures to measure impact and effectiveness, and it is not technically difficult to obtain corresponding campaign data. However, defining goals for search engine optimisation is not so easy. For example, Google stopped offering keyword-level performance data for organic searches many years ago. Marketing managers and SEO experts are therefore repeatedly confronted with the challenge of developing an SEO KPI concept that visualises optimisation results and, above all, gets the company’s budget controller onside for professional SEO work. For this reason, search guru Rand Fishkin has put together some rules for formulating the goals of SEO activities, which are interesting to advertisers and enterprises alike. According to Fishkin, the main rule is that the business goals must form the basis for the SEO concept. The next step is to break down these higher-level expectations, which are usually financial, into marketing goals – for example, by defining requirements for various communication channels along the customer journey. The actual SEO goals come into view only after this point, and they can be mapped out in the last step using just six metrics. These KPIs are ranking positions, visitors from organic searches (divided into brand and generic searches), enterprise representation with multiple hits on the results page for a search term, search volume, link quality and quantity, and direct traffic from link referrals. Fishkin tests his concept against two different example customers. For example, a pure online mail-order shoe seller has a fairly simple business goal: boosting turnover by 30 per cent in the core target group.
In Fishkin’s view, the next step is to specify in the marketing plan that this growth will be generated by a high probability of conversions at the end of the customer journey. From that you can derive an SEO goal of 70 per cent growth in organic traffic. In order to achieve this goal, you then adopt and carry out implementable SEO measures. For the contrasting scenario of local SEO without reference to e-commerce, Fishkin’s example is a theatre that wants to draw more visitors from the surrounding area. In this case the regions where the target audience should be addressed are defined in the marketing plan. The SEO plan then consists of setting up local landing pages, utilising theatre reviews and blogs, and other content-related and locally driven measures. The advantage of this sort of top-down approach is the alignment of individual SEO measures, which are often difficult to grasp, with the overall aims of the organisation. According to Fishkin, the rewards are higher esteem and faster implementation of the laborious SEO work.

3) Fake news threatens the existence of the SEO industry

Did you get a shock when you read this heading? That’s exactly what we wanted, in order to get your attention. Of course, you rarely see such highly charged headings on SEO blogs, but competition in the IT sector does not spare the search industry. Every year we hear that SEO is dead, yet supply and demand for optimisation services have been growing steadily for more than 15 years. A large part of that is doubtless due to the intensive PR activities of the parties concerned. Starting as the hobby of a few individuals, over the course of time search engine optimisation has developed into specialised agencies and migrated to the in-house teams of enterprises. Along the way there has been continual testing, experimentation and comparison, SEO expertise has been constantly expanded, and above all a lot has been written about it. SEO blogs therefore serve on the one hand as an inexhaustible source of information – a sort of global treasury of SEO experience, forming the basis for success. On the other hand, postings on search topics are also a form of self-advertising and customer acquisition for service providers and agencies. John Mueller, the well-known Senior Webmaster Trends Analyst at Google, has now criticised some SEO blogs. He claims that some of them use postings as clickbait. It all started with a report on an alleged bug in an SEO plugin for WordPress. In the course of the discussion about the tool, information was presented in abridged form on some SEO sites, and important statements made by John Mueller on behalf of Google were not passed on. He is now saying that postings should pay attention to all aspects of complex search topics. What matters is to create long-term value with balanced reporting; people should resist the temptation to chase quick clicks. According to Mueller, the goal should be to convey knowledge. It is clear that even the search scene cannot evade the grasp of the digital attention economy.
It looks like speed has become a goal in itself, and it is assumed that online readers no longer have time to pay attention to the details. In this way our own methods endanger the industry’s collective wealth of experience. In an increasingly complex search world, it is particularly important to not lose sight of the details, and we have to take the time for a thorough treatment of each topic. For example, the threat to the existence of our democracy from the SEO activities of Russian troll farms is a topic that still needs a thorough treatment.

Programmatic Advertising

Programmatic advertising (PA) is a multi-faceted term. Many market players use it as a buzzword, a label for the hype that has at times raised very high expectations among many market players, especially advertising customers. Others frequently use PA as a synonym for automation projects that are several years overdue, especially in so-called classic media, but in which nothing is actually “programmatic”. At mediascale, we generally define programmatic advertising as data-driven media buying – and as a process that is only just beginning.

Therefore, the disappointment that some advertisers may have experienced is not a fundamental problem of programmatic advertising. Rather, it should be an incentive to take programmatic to the next level: on the one hand, by rethinking the set-up of service providers and technology; on the other, by calibrating individual expectations realistically.

In recent years, it has mainly been venture capital-financed players wanting a share of the media value chain who have fuelled the programmatic hype in pursuit of their own interests, which has led to high expectations. And of course they have claimed “their” part of the supply chain. But those who worked more closely with the market knew that the quantity and quality of the profile data available for programmatic campaigns is limited. Yet only good data can increase the efficiency of campaigns significantly enough for the additional costs of the additional members of the value chain to be recouped. Any disappointment was thus foreseeable, rooted in unrealistic expectations rather than in any fundamental flaw of the approach.

In many conversations with our customers, we have realistically presented both the possibilities and the limits of data-driven advertising in order to rule out exaggerated expectations of PA from the outset. In doing so, we worked from the following assumptions, which our customers – sometimes against their initial inclination – have so far been getting along well with:

  • Programmatic advertising is not a new channel with completely different rules to the traditional display business. Even when auctioned and backed by data, a content ad remains a content ad and will not develop the advertising impact of a large-format in-stream pre-roll.
  • Meaningful, validated data is the indispensable basis for programmatic advertising. Here it is important to look carefully and carry out comprehensive auditing of the available data offers. At first glance, the data market in DMPs seems expansive. But data segments that deliver what they promise (delivering a corresponding uplift to campaigns) are by no means abundant. And they have their price.
  • An impression that cannot be assigned to valuable data should not be bought programmatically. As meaningful as it is to uniformly track all advertising contacts and accumulate all campaigns and pseudonymous profile data in one system, it makes little sense to put untargeted campaign volume into systems just to have bought it “programmatically”. This results in costs and technical performance losses that are not offset by financial added value.
  • The open market, open to all, originally proclaimed by many to be programmatic’s central promise of salvation, has created more problems than it solves, as it has also opened up the market to a plethora of dubious players. The efforts of the large, open sell-side platforms to push the black sheep out are commendable, but unfortunately not always successful. That is why we only buy inventories that we can thoroughly test, both technically and commercially. Furthermore, whenever possible, we buy from partners (often in private marketplaces) that we know and have established business relationships with – including any sanction options which may be necessary in the interest of the customer in an emergency.
  • And we’re not forgetting the creation: What use is the most sophisticated planning on a profile basis, if only one means of advertising is available? That’s why data driven creativity is, in our view, the indispensable fourth pillar of programmatic advertising – alongside technology, media space and data.

Today, programmatic has already caused major changes in the digital media business. But we are sure that this transformation process is far from finished yet. And it will encompass more and more media types in the future: TV, out-of-home, audio, cinema and eventually also print. In five years at the latest, we will be able to plan, book and control more and more channels via programmatic. In addition, people’s media use is evolving, new, relevant platforms are being created at ever-increasing speed, and data protection requirements also require fundamental and sometimes new solutions. All these challenges keep us busy and agile. Staying at the current level of development is not a solution. Particularly as we are just scratching the surface with programmatic.

This article was first published in adzine.