Smartphone on wheels

Many questions will be raised when carmakers present their solutions and concepts for the mobility of the future at the IAA in Frankfurt in just a few days’ time: What do we do in the car if it can soon drive itself? If the dashboard and side windows consist of screens in the future, what content will we use during the journey? What data does a fully connected vehicle supply, and who uses it for what purpose? Manfred Klaus believes that reduced emissions are just one aspect of what the car of the future will have to deliver. In a guest article, the Plan.Net boss argues that cars will become communication platforms in their own right – with new business models.

It is mere coincidence that elections to the next German Bundestag are taking place on the final day of this year’s International Motor Show (IAA) in Frankfurt (24 September 2017). Yet it is fitting, since Germany’s motorists suddenly find themselves in the middle of the election campaign. The debate around diesel bans, electric mobility, software updates and hardware retrofits is dominating the media as well as political discussion.

There is no question that emissions from private vehicles are a key issue for the public at large – especially in the larger cities. Yet electric mobility currently lacks the infrastructure and the range to seamlessly replace the combustion engine on a grand scale. The same applies to autonomous driving: however promising the pilot studies, it would still fail in mass operation today. As soon as the diesel debate becomes less emotional, a different topic will raise its head for carmakers: with progressive vehicle digitisation and connectivity, cars themselves will increasingly evolve into independent communication platforms – a kind of smartphone on wheels.

Many aspects of this development are currently grouped under the catchphrase “connected car”, which essentially encompasses three main feature areas. First, there are general features such as a seamless Internet connection, a WLAN hotspot in the vehicle or personal driver registration. Then there are vehicle-related features, which include information on the vehicle’s condition and position, or additional on-demand features (brighter headlights, four-wheel drive and the like). Finally, infotainment and entertainment features provide real-time traffic information as well as location-based services and content offers. Digital technology can therefore already be found in a whole range of car features today – even aside from autonomous driving.

It’s just that most Germans have barely noticed it yet. Manufacturers have so far advertised digital features rather cautiously. According to a study on behalf of Motor Presse Stuttgart, only ten percent of Germans are familiar with the terms “connected car” or “connectivity”. And even among Generation Y, the key target group for connected-car offers, only one in two is familiar with the term, according to Deloitte. What is more, motorists interpret the terms completely differently: from automatic parking assistance to the emergency call feature to a WLAN hotspot or a music playlist on the driver’s smartphone.

Initial concept studies by manufacturers, as demonstrated at the Consumer Electronics Show (CES) in Las Vegas, illustrate what could be conceivable in the near future: smart windscreens that offer more than a pure head-up display, dashboards that consist entirely of touchscreens on which key features are operated like apps, or side windows that can be used as touchscreens to surf the Internet or call up apps. And when it comes to defensive driving, apps already exist in abundance. The latest offer, however, disciplines young people in an unusual way: if a novice driver exceeds the specified maximum speed, the app plays their parents’ favourite music. That should be punishment enough for most!

How marketing automation and creation fit together

When it comes to programmatic advertising or marketing automation in general, media or technology experts usually lead the discussion, while creation often remains sidelined. However, in a world of advertising where computers increasingly perform control-based processes, creation is an important differentiator for brands and businesses and should not be considered separately from the technical implementation.

In the ongoing discussion about programmatic advertising and marketing automation, the market is driven exclusively by technology and media experts. So far it simply hasn’t been necessary for creatives to talk about technological solutions.

However, ignoring the modern opportunities of advertising is certainly not a strategy with a future. Creative minds should know and use the possibilities of new technologies such as programmatic advertising – even if it isn’t their main task to promote the standardisation of advertising media or the measurement methodology of online videos on Facebook, or to discuss interface problems between DSPs and SSPs. What they do need to do is develop an idea at the beginning of the process that surprises the market and isn’t expected. Only once this umbrella idea for a brand or product has been developed is it possible to engage automation in marketing meaningfully.

The greatest hurdles for programmatic creation lie in everyday work. Advertisers’ briefings for media and creation are unfortunately rarely coordinated with each other: completely different objectives are frequently formulated for the two areas, depending on whether the aim is to achieve something for the brand or for sales. Furthermore, media and creation are usually separate departments (both on the client side and in the agency) that don’t always communicate with each other. How exactly the creative process works in an agency and in cooperation with the advertiser depends heavily on how campaign planning is organised. The areas of strategy, media and creation are usually involved. If one of these areas starts the work on its own or dominates the planning process (which is usually the case), at least one department often ends up dissatisfied.

Anyone wanting to advertise successfully in the programmatic age should engage everyone involved at an early stage and incorporate all their perspectives. Creative minds need to understand how algorithms work and how media people tick, while media people need to realise that creatives have an emotional connection with “their” motif and that it isn’t just any old piece of cargo. Only in symbiosis, in the understanding that the other group also has a very important contribution to make, do we get an end result with added value and a meaningful strategy. The foundation for this approach should in future be a shared budget for creation and media. If, for example, creation addresses users with more target-group-specific advertising and varying motifs (and requires more time and money to do so), the extra cost is recouped through greater advertising effectiveness, and the media budget can be lower.

As a first step towards finding a common solution, advertisers should precisely define what they expect from their communication or campaign. Ideally, marketing, media, sales and other stakeholders should get together for this and formulate clear targets for creation, strategy and media. After all, only once a good strategy has been decided upon and a compelling creation developed can programmatic advertising and automation demonstrate their strengths.

This article was published at Arabian Marketer.

Augmented reality milestones

The first marketing initiatives with augmented reality (AR) appeared in Germany around the year 2011. Back then, Plan.Net integrated AR functions in a campaign for the special interest channel Syfy, for example. Posters were impressively brought to life using the technology available at the time from the Munich-based company Metaio.

Since then, individual augmented reality projects have been implemented every now and then, but the big breakthrough failed to materialise. Last year, however, AR suddenly became a hot topic again thanks to Pokémon Go. Marketing experts euphorically celebrated augmented reality, believing this would be the breakthrough. It certainly was not: the enormous hype surrounded Pokémon Go itself, while AR as a technology barely received a mention. Instead, it was virtual reality that appeared on the scene and drew attention, with hardware from HTC, Sony and Oculus and plenty of interesting application scenarios. So far, however, VR has remained more of an option for on-site productions or audiences enthusiastic about technology.

With the release of ARKit, Apple is now adding another dimension. Hidden within the system is the software that Munich-based Metaio had been refining since 2011; it is now far more accessible to all Apple developers and can be integrated even more easily into iOS apps.

With great joy and excitement, we embraced the new options in the recently founded Plan.Net Innovation Studio – and the beta version of ARKit currently available certainly hasn’t disappointed us. Apple’s habitually good software documentation gives users a quick introduction to the available options and makes it as easy as possible to get to grips with the world of AR.

Even though the beta version published by Apple in June still appears somewhat limited in terms of technical functionality, we have already explored many exciting applications in a short space of time and used them to enhance our first customer projects. The application examples range from AR-based navigation to the placement of virtual furniture and the first mixed reality examples. A flood of ARKit-supported apps can certainly be anticipated in the App Store when iOS 11 is released.


It will take some time before the full potential of the platform can be exploited. There is certainly still a functional gap when it comes to location-based data layers (Location Based Services). But Apple will probably add other functions soon and upgrade one or two components with new iPhone hardware before long.

Nevertheless, the options in existing devices are already very promising – and with around 380 million supported devices currently in circulation, the target audience isn’t exactly small.

The next anticipated milestone is certain to keep us in suspense: when will augmented reality applications move beyond the constraints of the smartphone and find their way into everyday glasses and lenses? Once that has been achieved, the last hardware obstacle between users and available data will fall, and everyone will be able to access information about their surroundings instantly, in any place – about buildings, artwork, people, products.

A world of unlimited networking that we can help to shape both constructively and critically. We are certainly looking forward to this time!

The SEO News for August 2017

Search engines do not take a vacation. Just in time for the summer holidays, we therefore present the most important SEO news of July – with new competition for Amazon Alexa, positive news for Bing and, of course, exciting Google updates.

1. Google Mobile Search enables direct contact with potential customers

After initial tests in November last year, Google has now officially launched a function in the USA that enables users to contact companies directly from the search results on mobile devices. After a local search (e.g. for a restaurant or hairdresser), users can message the business of their choice directly. Providers can activate the new function quickly via their Google My Business account. The communication is handled either through Google’s messaging app “Allo” on Android devices or directly in the native messaging app on iOS.

2. Videos on Google and YouTube: New study explains the differences in the ranking

Should I focus on Google or YouTube when optimising my video content? A new study from the USA helps with this decision. A comprehensive ranking analysis shows that the algorithms of the two search engines differ significantly, owing to different user intentions and monetisation models. The content of the video is decisive: while informative content such as instructions, seminars or reviews gains high visibility in traditional Google search, on YouTube high rankings are achieved with entertainment content and serial formats. Interesting reading for any SEO.

3. Bing expands market shares in Desktop Searches

For successful search engine optimisation, it is important not to depend solely on the market leader Google. To reach your target group, you need to keep a close eye on the broad spectrum of general and specialised search systems. This, of course, includes Microsoft’s search engine Bing, which, by its own account, serves older and financially stronger target groups than its competitor Google. According to the latest figures from the market researchers at Comscore, Bing expanded its market share in desktop searches in the first two quarters of 2017 to nine percent in Europe, twelve percent in Germany and as much as 33 percent in the United States. Growth was driven by the tighter integration of the search engine into the current Windows 10 operating system and its voice search “Cortana”. Depending on the audience and target market, it is thus worthwhile keeping an eye on the company from Redmond.

4. Bing expands results display for brand searches

And once again Bing: in the past, even Google has not been afraid to copy new features from Microsoft’s search engine. In image search, for example, Bing distinguished itself with new display formats on the search results pages. In the United States, Bing has recently begun showing, for searches on brand names, direct entry points to “popular content” in the form of screenshots and images alongside the familiar sitelinks. Whether this feature provides added value for the user is questionable, but it certainly attracts attention and thus potentially a higher click rate.

5. Competition for Google and Amazon: Samsung and Facebook are planning their smart speakers

Until now, the market for smart speakers has been controlled mainly by Amazon and Google, with the retail giant currently playing the dominant role with Echo and Alexa. Now Samsung and Facebook are also preparing to enter this market. Samsung is currently focusing on the development of its voice assistant Bixby, positioning itself once more as a competitor to Google. Facebook will apparently launch a corresponding offer in the first quarter of 2018. These developments underline the trend that SEO will increase significantly in complexity, given the rapid development of voice search and the increasingly diverse device landscape.

Let’s talk! – Programmatic Creation

Do creatives have zero desire for the new fascinating possibilities of programmatic advertising? Do media planners even understand how creation is made on and offline? Or are creation and media two such different poles that they are by nature hard to bring together? Under the motto “Let’s Talk”, mediascale Managing Director Wolfgang Bscheid and Markus Maczey, Chief Creative Officer of the Plan.Net Group, sat down and talked openly about advantages and disadvantages, opportunities and difficulties. They talked about marketing automation, creation and the future collaboration between advertisers and agencies.

Fundamentals for programming Amazon Echo Show

The new Echo Show is the result of Amazon’s learnings, designed to overcome the problems that all voice interfaces face when communicating information. Echo Show compensates for the limitations of a pure voice interface with a classic display. A no-interface device is thus transformed into a full-service touchpoint in the digital ecosystem, opening up hitherto undreamt-of possibilities. In this article we examine how to approach this new component – the screen – which is driven via the Amazon endpoint. Reverse engineering is the key.

Echo Show does not look particularly exciting, and reminds you of something from a 1970s sci-fi movie. At the front the surface is sloped, with the display at the centre, a camera at the top and the speaker grille at the bottom. The device won’t look out of place in your living room. We had to use an address in the United States to buy our Echo Show, as it is not yet available in Germany. Amazon has yet to announce a release date for the German market.

Currently no developer guidelines

Also lacking is information for developers on building skills for Echo Show. The Amazon Developer Portal does, however, describe what the JSON communication between skill and endpoint must look like for the new functionality: parameters are described, templates are shown and callbacks are explained. All of this exists solely in the form of communication protocols, though, not as guidelines. As long-standing users of the Alexa Skills Kit framework for Java, we feel left out in the cold. A note in the latest framework version on GitHub tells us that version 1.4 was prepared for Echo Show, but there are neither documents nor code examples available.

What display options does Echo Show offer?

We have an Echo Show, and we want to develop a skill for it. So let’s take a deep breath and dive into the framework code committed in the last release. We must begin by asking what we actually expect. Echo Show can present information in a variety of ways, so it must be possible to define layout information and send it in the response to the Alexa endpoint. If we look at the response object, we see that virtually nothing has changed since the last release. The only place where we can transmit dynamic display data is the so-called directives. Searching the framework a little, we find the RenderTemplateDirective class, and it is here that we can pass a template.

Templates are already documented on the Amazon developer pages. At present there are six fixed templates: two for displaying content lists and four for displaying single content. The difference between the two content list templates is that one is intended for horizontal lists, while the other is for vertical lists. The four templates for single content differ in their display options as follows (see Fig. 1):

  • BodyTemplate1
      • Title (optional)
      • Skill icon (provided in developer portal)
      • Rich or Plain text
      • Background image (optional)
  • BodyTemplate2
      • Title (optional)
      • Skill icon (provided in developer portal)
      • Image (optional – can be a rectangle or square)
      • Rich or Plain text
      • Background image (optional)
  • BodyTemplate3
      • Title (optional)
      • Skill icon
      • Image (optional – can be a rectangle or square)
      • Rich or Plain text
      • Background image (optional)
  • BodyTemplate4
      • No title
      • Skill icon
      • One full-screen image (1024 x 600 for background)
      • Rich or Plain text

Fig. 1: Differences between templates for single content

If we want to display information on Echo Show, we must first be aware of which template is suitable for which information. A creative concept designer must plan precisely to ensure that the user experience meets expectations and, above all, is intuitive. Even the cleanest programming is of no use if the user cannot make sense of the displayed information. Qualitative user surveys are a useful means of obtaining initial indicators and feedback.

How do I create image and text content?

From a technical perspective, we simply instantiate one of the templates found in the framework. For the corresponding properties such as title, icon and background there are getters and setters. For images there are an Image and an ImageInstance class; images are transmitted as URLs pointing to the corresponding image source. Text content can be passed as plain or rich text. In the latter case, there is the option of using markup tags for formatting, familiar from HTML: for example <br/>, <b>, <i>, <u> and <font size="n">. There are also several distinct text areas within the text content. Once we have defined the images and text and set them in the properties of the corresponding template, the next step is to pass this template to the RenderTemplateDirective instance. All we have to do now is add our new directive to the list of directives and pass this list to the response object. When we now call the skill, we can see the newly created content.
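Since rich text is just a string with a small HTML-like tag set, it can be assembled with ordinary string handling. A minimal sketch – the helper name and the chosen size value are our own assumptions, not part of the framework:

```java
public class RichTextBuilder {

    // Builds a rich-text body from a headline and body text using the
    // markup tags Echo Show understands (<font>, <b>, <br/> etc.).
    public static String headlineWithBody(String headline, String body) {
        return "<font size=\"7\"><b>" + headline + "</b></font><br/>" + body;
    }

    public static void main(String[] args) {
        // The resulting string would be set as the template's text content.
        System.out.println(headlineWithBody("Weather", "Sunny, 24 degrees"));
    }
}
```

The resulting string is then simply set on the template’s text property before the template is handed to the RenderTemplateDirective.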

How do I define content for the touchscreen?

The Echo Show display is a 7-inch touchscreen, which means that elements of a content list or a single-content view can be selected by touch. Each single-content template and each element of a list template has a token. From the framework’s point of view, this token is a property of type String and is used to identify the touched element in the callback. Looking at the way we previously developed skills, we only ever received a callback for a recognised intent – in other words, only when the user said something. This is adequate for the voice interface, but not for the display, and the SpeechletV2 interface supports voice callbacks only.

If we take a closer look at the SpeechletRequestDispatcher in the framework, we see that the dispatcher can respond to a wide variety of requests: AudioPlayerRequests, PlaybackController requests, SystemRequests and also DisplayRequests. When a DisplayRequest is recognised, the dispatcher attempts to call the onElementSelected method of the Display interface. To receive this callback, our Speechlet class must implement not only SpeechletV2 but also the Display interface. Once we have done this, we can override the following method:


public SpeechletResponse onElementSelected(SpeechletRequestEnvelope<ElementSelectedRequest> requestEnvelope)

Fig. 2: Overriding the callback

This callback method is called whenever an element is selected on the display. Inside the callback, we can retrieve the token – in other words, the identifier of the element that was pressed – from the requestEnvelope with requestEnvelope.getRequest().getToken() and respond accordingly. We are completely free in our choice of identifier.

The response to an ElementSelectedRequest is a normal SpeechletResponse. We can therefore return both speech and an additional display template. It is thus also possible to implement the Master/Detail views commonly used for mobile devices. It is precisely for these mechanisms that the Back button, which can be activated by default for every template, is intended. However, it is up to the developer to implement the functionality for a “go back”.
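Because the token is a free-form string, one simple pattern is to encode a view name and an ID in it and dispatch on that inside onElementSelected. Here is a sketch of that pattern with the SDK types reduced to plain strings – the class, the token scheme and the handler wiring are our own illustration, not part of the framework:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class TokenDispatcher {

    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    // Register a handler for a token prefix, e.g. "list-item".
    public void register(String prefix, Function<String, String> handler) {
        handlers.put(prefix, handler);
    }

    // In a real skill this would be called from onElementSelected with
    // requestEnvelope.getRequest().getToken(); here the token is passed in directly.
    public String onElementSelected(String token) {
        int sep = token.indexOf(':');
        String prefix = sep >= 0 ? token.substring(0, sep) : token;
        String payload = sep >= 0 ? token.substring(sep + 1) : "";
        Function<String, String> handler = handlers.get(prefix);
        return handler != null ? handler.apply(payload) : "unknown token";
    }

    public static void main(String[] args) {
        TokenDispatcher dispatcher = new TokenDispatcher();
        // Tapping list item 42 leads to its detail view (Master/Detail).
        dispatcher.register("list-item", id -> "show detail view for item " + id);
        System.out.println(dispatcher.onElementSelected("list-item:42"));
    }
}
```

The handler’s return value stands in for the SpeechletResponse; in a real skill it would carry both speech and the detail template.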


At present, it is somewhat difficult for Java developers to get to grips with Echo Show. Google and Stack Overflow provide neither example links nor documentation, and apart from the small amount of information provided directly by Amazon, there isn’t much else available. If you don’t want to spend your time analysing the framework, you will have to wait until the developer community or Amazon provides more information. That said, with the expanded framework, the development of skills for Echo Show is impressive and well thought out.

There are only a few minor negative points. The pre-loading of images in content lists doesn’t work very well: it does not look good when the images of list elements appear one after another. You must therefore consider the access time of your content servers when designing skills, or hope that Amazon improves the corresponding mechanisms. It remains to be seen what kind of enhancements Amazon will come up with.

It will be interesting to see what developers can do with the combination of voice interface and touchscreen. Close cooperation between design, creation and development is crucial. In summary, Amazon Echo Show will without doubt bring about big changes in the market.


Performance-Tracking meets User Journey

On the topic of web analytics, we hear time and time again how complicated it supposedly still is to collect usable data for analysing website performance across multiple devices and beyond simple page views, so that the user journey can be optimised. The criticism is that the standard features of Google Analytics do not provide the necessary data for this. That is simply not true! Very often, ‘connected commerce’ is even simpler than one might imagine.

Google Analytics tracks many things, but nowhere near everything

For web tracking, many website operators rely exclusively on the standard implementation of Google Analytics as their analysis tool. In some cases this can be quite sufficient; after all, Google Analytics provides information about a whole range of important key performance indicators (KPIs), such as the number of users, sessions and pages accessed, or the bounce rate within a defined period. Together with demographic features, the tool also provides a whole range of device information – for example, identifying the sub-pages from which users of mobile devices in particular bounce. In this way you can discover problems with using your site on mobile devices.

But how do you proceed if you want to evaluate how often a very specific button on a page has been clicked? Or from which page a user has interacted with your chat support? And what about scroll depth, file downloads or clicks on external links? Google Analytics alone cannot answer all these questions. There is, however, a simple and powerful solution that is firmly integrated into Google Analytics: Event-Tracking, i.e. capturing all kinds of Events.

Until now, these Events had to be carefully planned in advance and embedded when programming the website so that they would be sent to Google Analytics for tracking. With Google Tag Manager, the set-up becomes more agile and considerably easier. It requires no programming knowledge and can be used immediately, as soon as Google Tag Manager is integrated into your site. Plugins are available for many CMS and shop systems, so you do not need a developer for the installation.

Never heard of Google Tag Manager? Christoph Küpfer from ad agents puts it very pointedly in his article ‘All-singing, all-dancing Google Tag Manager’: “Web analytics without Tag Manager is like washing without a washing machine. It works, but it’s a waste of time and resources.”

Capturing and analysing Events using Google Tag Manager – here are three examples of how to do it

1. Do you want to know how often a button/graphic has been clicked on?

Websites generally use several forms or teaser graphics that ask visitors to perform an action. With Google Tag Manager, you can easily register how often visitors actually interact with these calls to action (CTAs). In each case you set up a trigger (the ‘WHEN’) and a tag (the ‘THEN’): WHEN, for example, a button labelled ‘Send Message’ is clicked, THEN an Event is sent to Google Analytics by the corresponding tag. An Event always consists of four dimensions, which you can populate with whatever values you like in Google Tag Manager: the Event Category, Event Action, Event Label and an optional Event Value.

Let us take a contact form as an example. First you determine an Event Category; this might contain the static text ‘Contact Form’. The Event Action in this category would read ‘Sent’ when the form is submitted. As the Event Label, we usually capture the current page path in our implementations. Google Tag Manager has a built-in variable for this, which you use by inserting {{Page Path}} in the corresponding place. If your contact form is installed on several pages or in a sidebar, you can then see where the form was sent from and how often. Finally, the Event Value lets you weight the Event: you would assign a higher value to a submitted contact request than to a clicked teaser.
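The four dimensions can be thought of as a simple value object. A sketch in Java (the class is our own illustration; in practice the values are entered in the Google Tag Manager interface, and the example path and value are assumptions):

```java
public class AnalyticsEvent {
    final String category; // e.g. "Contact Form"
    final String action;   // e.g. "Sent"
    final String label;    // e.g. the page path, filled via {{Page Path}}
    final Integer value;   // optional weighting, may be null

    public AnalyticsEvent(String category, String action, String label, Integer value) {
        this.category = category;
        this.action = action;
        this.label = label;
        this.value = value;
    }

    // Renders the Event roughly the way it appears in a Google Analytics report.
    @Override
    public String toString() {
        return category + " / " + action + " / " + label
                + (value != null ? " / " + value : "");
    }

    public static void main(String[] args) {
        // WHEN the form on /contact is submitted, THEN this Event is sent:
        System.out.println(new AnalyticsEvent("Contact Form", "Sent", "/contact", 10));
    }
}
```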

2. Qualifying bounce rates through scroll depth

A bounce is not always a bad thing. Blog operators are familiar with the phenomenon of visitors finding a solution to their problem and then leaving the blog again without any further interaction. Here, the dwell time gives an initial indication of whether the visitor has read your text or left immediately. However, since more and more people open several pages in tabs, dwell time can accumulate in parallel across several tabs – even inactive ones.

One of the many ready-made scripts can remedy this. They can be integrated using Google Tag Manager and register the scroll depth in 10 or 25 percent intervals. In the later analysis, the corresponding Events show you how many of the visitors reading your text on the page in question actually read it to the end, or at least scrolled through it.
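The core logic of such a script boils down to mapping the current scroll position to the highest threshold reached. A minimal sketch of the 25-percent variant (class and method names are our own):

```java
public class ScrollDepth {

    // Maps a scroll position between 0.0 (top) and 1.0 (bottom) to the
    // highest 25% threshold reached; a tracking script would send an
    // Event each time this bucket value increases.
    public static int bucket(double fraction) {
        if (fraction < 0.0) fraction = 0.0;
        if (fraction > 1.0) fraction = 1.0;
        return (int) (Math.floor(fraction * 4) * 25);
    }

    public static void main(String[] args) {
        System.out.println(bucket(0.3));  // prints 25
        System.out.println(bucket(1.0));  // prints 100
    }
}
```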

3. Capturing data downloads and outbound links

Google Analytics is not actually designed to track file downloads. Here again, the variables built into Google Tag Manager help by letting you test conditions on link clicks.

The link destination is contained in the built-in variable {{Click URL}} and can be tested accordingly. Does the link contain a specific file extension, for example ‘.pdf’? Then trigger an Event that captures the current page ({{Page Path}}) and the link destination ({{Click URL}}). You can then evaluate in Google Analytics which files were downloaded from which pages and how often. For outbound links, you simply test whether the destination domain differs from the current domain. If it does, the link is an outbound link, and you again capture the current page and the link destination in an Event.
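The two trigger conditions amount to a file-extension check and a host comparison. A sketch of that logic in Java (the class name, and any extensions beyond ‘.pdf’, are our own assumptions):

```java
import java.net.URI;

public class LinkClassifier {

    // A click counts as a file download if the URL ends in a tracked extension.
    public static boolean isDownload(String clickUrl) {
        String lower = clickUrl.toLowerCase();
        return lower.endsWith(".pdf") || lower.endsWith(".zip") || lower.endsWith(".xlsx");
    }

    // A click counts as an outbound link if the destination host differs
    // from the host of the current page. Relative links have no host and
    // are therefore never outbound.
    public static boolean isOutbound(String clickUrl, String currentHost) {
        String host = URI.create(clickUrl).getHost();
        return host != null && !host.equalsIgnoreCase(currentHost);
    }

    public static void main(String[] args) {
        System.out.println(isDownload("https://example.com/brochure.pdf"));      // true
        System.out.println(isOutbound("https://other.org/page", "example.com")); // true
    }
}
```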


While the standard functions of Google Analytics provide more of an overview of user groups, Event-Tracking with Google Tag Manager gives you detailed insights. You can even break down to user level which Events a specific user has triggered. This is helpful, at the very least, if you receive paid traffic from search engines or social networks: through Events you can establish very precisely whether the visitors you have paid for actually behave as you intend and generate a corresponding return on investment (ROI) for you.

The subject becomes all the more interesting if we place web tracking in the overarching context of connected commerce. For some unexplained reason, Google Tag Manager is only installed on around 14 percent of all websites (according to usage statistics from the Internet service BuiltWith), despite the fact that it offers a robust and especially simple base with valuable functions that help you get to know your customers and put them centre stage. It thereby contributes to you offering a seamless and user-centred customer journey.

This article was also published at Internet World Business.

Data-based Marketing: “Human Intelligence is Indispensable”

Diana Degraa, Managing Director of Plan.Net Hamburg, talks about her technological experience, her tips for female talents and what role big data and business intelligence will play in the future.

Why voice search is not the end of SEO

Siri, Alexa, Cortana, Google Assistant and their like are undeniably on trend: since the market launch in 2015, Amazon alone has sold well over eight million Echos and Echo Dots in the USA, and Apple has now jumped on board the smart speaker movement with the HomePod. According to a current Statista analysis, around 17 million people in Germany use the Google Assistant, eleven million ask Siri questions on Apple devices and almost seven million communicate with Microsoft’s Cortana. Whether on a smartphone or via a smart speaker, more and more people are using voice control for their searches. According to a ComScore forecast, half of all searches will be carried out by voice command in just three years’ time, and at least 30 percent of searches will even be conducted without a screen. These numbers are making a lot of marketers nervous. If new devices are changing our search behaviour, what will happen to SEO? Is it time to wonder once again whether this is the end of search engine optimisation? No, not yet!

There’s no doubt that voice search is dramatically changing our search behaviour, as spoken search requests differ markedly from typed queries. With voice control, search terms and phrases tend to be longer, less specific, more descriptive and closer to natural language. That can also make them more complex, and the actual intention behind the search harder to understand, because individual keywords and their attributes are no longer the defining features of the query.

Is this going to give agencies and advertisers a headache? No – voice searches and changes in input behaviour are above all a challenge for the search system providers, i.e. Google, Apple, Amazon, Microsoft and so on, who are competing ever more intensely to adjust to new user behaviour. If Alexa, Siri and their like cannot understand certain questions, it is up to the providers of search systems and search assistants to find a solution. This challenge is nothing new for the dominant players, however. Their algorithms are getting better and better at recognising the intention behind a search and delivering the right results. Google, for example, prepared for the trend five years ago with its ‘semantic search’ and, since 2015, with RankBrain, a system based on artificial intelligence. With one small exception: try asking Siri about SEO. It does not come up with the right match, even on the fourth attempt.

When it comes to search engine optimisation, I find one thing far more interesting than the questions that can be asked using voice search: the answers that are given in return. Is there one answer, are there several, or does the initial question lead to a conversation between the search system and the searcher? In principle, the process with a virtual assistant is the same as with a human advisor: do you want a quick result or a full sales pitch? Do you want to be left alone to browse quietly, or do you need the help of a sales assistant? Is a brief answer enough, or do you want to narrow your query down in stages until you get the right result?

The future challenge for SEO experts therefore lies in these conversations between searcher and voice assistant. Clear, direct answers are only possible for a small proportion of search queries – the weather forecast for the weekend, the opening hours of a doctor’s surgery, the population of Madagascar or traffic reports. The sources behind Google’s ‘featured snippets’ already supply the answers to such questions, so identifying, preparing and answering them requires no fundamental reorientation for voice-controlled search. What will take on a particularly prominent role in future are local search queries made by voice. Tagging geo-local information on a website is already part and parcel of basic SEO work today (keyword: semantic markup). Even more important is feeding local data for businesses, hotels or restaurants into the existing search engines that specialise in such queries, such as Yelp or Kayak. Both providers already offer skills on Amazon Echo and serve as reference sources for search assistants like Siri and Cortana.
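As a concrete illustration of such semantic markup, here is a sketch of schema.org local-business data in JSON-LD, the structured-data format search engines and voice assistants read when answering local queries. The business details below are invented for this example; the @context/@type vocabulary is the real schema.org one.

```javascript
// schema.org data for a local business, expressed as JSON-LD. A voice
// assistant answering "Is the trattoria near me open now?" relies on exactly
// this kind of machine-readable information. All details are fictitious.
const localBusiness = {
  '@context': 'https://schema.org',
  '@type': 'Restaurant',
  name: 'Trattoria Beispiel',          // hypothetical business
  telephone: '+49-89-1234567',         // hypothetical number
  address: {
    '@type': 'PostalAddress',
    streetAddress: 'Musterstrasse 1',
    addressLocality: 'Munich',
    postalCode: '80331',
    addressCountry: 'DE',
  },
  geo: { '@type': 'GeoCoordinates', latitude: 48.1351, longitude: 11.582 },
  openingHours: 'Mo-Sa 11:00-23:00',
};

// Serialised into a script tag for the page's head section – this is the
// representation crawlers and assistants actually parse.
const jsonLdTag =
  '<script type="application/ld+json">' +
  JSON.stringify(localBusiness) +
  '</script>';
```

The same data, fed to specialised services such as Yelp, is what makes a business findable through voice assistants in the first place.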

Open-ended questions and statements where the questioner is looking for advice are harder to deal with. They resemble the things we would say in a shop: “I would like to buy a TV” or “I’m looking for a dress”. Such statements cannot simply be countered with a single answer; questions have to be asked in return – for example: “Do you need the dress for a particular event?”.
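The logic behind such a dialogue can be sketched as simple slot filling: the assistant keeps asking follow-up questions until every piece of information it needs (“slot”) is in place. The following toy example merely illustrates the principle under invented slot names; it is not any vendor’s actual implementation.

```javascript
// Each required slot maps to the follow-up question that fills it.
// Slot names and questions are invented for illustration.
const requiredSlots = {
  occasion: 'Do you need the dress for a particular event?',
  size: 'What size are you looking for?',
  budget: 'Roughly how much would you like to spend?',
};

// Returns the next follow-up question, or null once the query is specific
// enough to be answered with concrete results.
function nextQuestion(filledSlots) {
  for (const [slot, question] of Object.entries(requiredSlots)) {
    if (!(slot in filledSlots)) return question;
  }
  return null; // all slots filled – the assistant can now recommend products
}

// "I'm looking for a dress" fills no slots yet, so the assistant asks back:
nextQuestion({}); // → 'Do you need the dress for a particular event?'
nextQuestion({ occasion: 'wedding', size: 'M', budget: 150 }); // → null
```

Real assistants do this with statistical language understanding rather than a fixed table, but the conversational loop – ask, fill, ask again – is the same.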

These days, good SEO means optimising for the intention behind the search. In the voice search era, it will become even more vital for website operators and search experts to understand and serve the search intentions of their target groups. Only then can they provide added value and tailor the information they offer precisely to the demands of their potential customers. Voice-controlled search will therefore not kill off SEO, but it will make us think harder than before about what lies beyond Google. In future, SEO must also address how information is defined in the context of content and how it can be delivered automatically.

Why we urgently need better ways to advertise online

A skyscraper here, a billboard there and more often than not a layer ad in between: on a web page which, let’s say, does not have the needs of its users 100% at heart, it is easy to feel as if you were standing in a side street just off Times Square, with bright lights flashing everywhere you look. It is no surprise that exasperated users turn away. Of course there is advertising in print as well; some publications appear to carry even more than an online news page. However, the distribution of content and ads there usually seems tidier and less insistent. Magazines and websites obviously impose completely different layout constraints – but it must nevertheless be possible for advertising material to meet certain standards in the digital environment. Marketers promise high-quality advertising spaces, and that is what users want, but the reality is sometimes still reminiscent of an overcrowded funfair.

It is high time for a digital spring (summer) clean. And that means all of us: marketers, advertisers, creative and media agencies. Let’s wave a gradual but final goodbye to advertising as an alien component in design. The optimum user experience should be the paramount consideration online as well as elsewhere. Flagrantly over-used pop-ups have exactly the opposite effect, as do traditional rectangular formats whose appearance and content bear little relation to the editorial environment.

In the online age, relevance is the be-all and end-all; this should apply not only to content, but also to the aesthetics of advertising. Rule number one: be polite. If I want to persuade consumers to buy my product, I should not keep distracting them as they read. We must find ways to attract attention without intruding. At the same time, we need balance. Rule number two: online advertising should be given as much space as possible – few creatives can really show what they are capable of in 200 x 300 pixels. Sticky dynamic formats are a positive example on desktop: large-format ads which move with the user as they scroll, ideally enhanced with moving elements, but which do not break into the editorial content.

For me, the balance between target group, advertisement and editorial content is another consideration when placing large-format advertising designed to avoid irritation. In print media, advertisers can adjust their advertisement to suit the editorial plan. Although online articles are a much more short-term affair, in these times of big data the maxim “content is king” still holds true. Polite advertising means making the target group in each situation an offer: “You are reading an article about mountaineering. If you still need outdoor equipment for the season, this is the place for you.” Ideally, advertisers will use multitab advertising materials so that users can browse through what’s on offer without leaving the site they are on. In theory, it should be possible for a complete customer journey, including finalising a sale, to take place within a multitab advertising environment.

Content-stream formats, in particular, work well on mobile devices: content and advertising are clearly distinguished, and users can simply scroll past the ads. Generally speaking, every advertisement which does not need to be clicked away is a step in the right direction – not least because a large share of the clicks on many layer ads only accumulate because users miss the X when trying to close the advertisement.

Furthermore, smartphones by their nature offer quite different functionality from desktops: interactive motion formats such as shake ads, 3D ads and panorama ads use the movement of the smartphone to provide entertaining interaction and a completely new, surprising brand experience for users.

So, dear industry, let’s get to work! We urgently need to improve the quality of online advertising formats, because the current standard unfortunately still sometimes borders on highway robbery. The good news is that this does seem to be recognised to some extent. BURDA Forward’s “Goodvertising Initiative”, for example, is spearheading more user-friendly advertising, and the revamped layout of Focus Online is striking evidence of the strategy being put into practice: the website’s original three-column layout has been replaced by a substantially more sophisticated two-column approach.

Excellent editorial content deserves innovative, high-quality and, above all, user-friendly advertising material – which, incidentally, can probably be sold for a higher price than a fairground bargain stall. If we can’t do this, then we know what the alternative is: users who reach the end of their tether and install adblockers.

This article was also published at W&V.