Personalisation is currently one of the mega trends in marketing. In less than two years, the market has developed to the point where there is no avoiding it, for business clients and solution providers alike. On the provider side, almost all industry giants, such as Adobe, Oracle, Salesforce, Microsoft, and IBM, are building out their cloud marketing solutions. On the client side, companies are increasingly looking for answers on how to use these new opportunities for profit. Finally, as private users, most individuals have experienced how impressive personalisation and automation can be when scrolling through recommendations on Amazon, or when their smartphone calculates, unasked, the time it will take to get from work to home. And new capabilities promise that this is just the beginning. It’s high time to use this potential for your own customers. Many of the cloud solutions mentioned above now provide hitherto unimagined possibilities. Customers can find more relevant information and be served and supported more quickly and efficiently, whether before or after they make a purchase.

Nevertheless, individual companies should be cautious. Experience shows that, over time, personalisation cannot remain a mere marketing trick. The decision to adopt these technical solutions is only the beginning. True personalisation means the desire and intention to distinguish one client from another, and you must genuinely be willing to do so. This is not just a task for systems and machines; it is a task for people and, ultimately, the whole organisation. When companies take the route towards personalisation, they quickly realise where the opportunities lie, as well as the risks. Departmental structures, which for years guaranteed successful business management, now prevent many companies from truly understanding customers’ interests and using that knowledge effectively. It seems logical and paradoxical at the same time: to serve and support customers individually with relevant information, more people and departments in the company must work together without barriers.

This means creating horizontals that connect departments such as sales, marketing and customer service. When a customer has just signed a mobile phone contract, it makes no sense to them to keep seeing incompatible products from the same brand, or to be inconvenienced with questions to complete an online profile when they have long been a valued customer in retail stores. Vertical integration is required as well: areas such as procurement, IT and legal need to implement the necessary infrastructure, data and systems, as well as ensure legal compliance. How should an IT department know which system best fits a particular marketing strategy? The consulting market for preparing companies for the age of personalisation is booming right now. Both conceptually and organisationally, removing barriers across departments makes companies more capable of acting.

But the challenge goes even deeper: who says that personalisation is a good fit for every organisation? Who says that it will be the decisive competitive advantage for a company within a sector? Companies should truly consider whether this is a mega trend they need to follow, and if so, how they can differentiate themselves from competitors. Is the desire to serve clients on a more personal level really in the DNA of the company, and therefore a competitive advantage, or is the competition ultimately superior? In the digital age, personalisation and automation demand an extremely fast pace and the ability to interact, which must be sustained in the long run. And this is a question not only about “old” competitors: this isn’t the first time a mega trend has brought new players to the field who understand little of the traditional performance-related competitive advantages of an industry. However, recent factors, such as a consistent focus on personalisation as a key success indicator, have made attacks on established industries…

The Amazon Dash Button has been hotly debated in all forms of media over the last week. Readers of Germany’s “Stern” magazine, for example, had a very clear opinion: more than 70% answered the question “What do you think about the Amazon Dash Button?” with “A load of rubbish”. Why has the response to the Amazon Dash Button, which aims to make a lot of people’s lives easier, been so negative?

Lots of arguments against the Dash Button

There are apparently many arguments. Since its introduction on the German market, the Dash Button has been hotly debated. Data privacy advocates warn of possible misuses of the button’s features. Consumer advocates warn of a lack of price transparency when ordering. Usability experts raise the question of how many Dash Buttons it is sensible to have in a household, and we’re left wondering what actual benefits they would have for us. But where is all this aversion coming from? After all, with the Dash Button Amazon is the only company offering us a product integrated into everyday life that reflects the ideas of pervasive computing and the internet of things.

The Dash Button is currently only aimed at a specific target audience

Amazon was certainly aware that the Dash Button wouldn’t be mainstream at this stage and that it wouldn’t be every customer’s cup of tea. But we should also be clear that we are living in the world of connected commerce, where the long-tail effect still holds: the target audience for the Dash Button may be small in relative terms, but in absolute numbers it is large enough to make the Dash Button a successful model for Amazon. According to the “Stern” survey, 10% of respondents clearly advocated the Dash Button, saying: “Great, I hate shopping in supermarkets!”

Could the real home of the Dash Button possibly be in B2B?

Another reason for the harsh criticism might be that the Dash Button was born in the “wrong world” – in the world of B2C e-commerce. Would it not actually be better suited in B2B e-commerce? Imagine a production operation. Synchronised supply chains, as well as just-in-time and just-in-sequence processes are already a reality in the area of series production. Demand impulses between manufacturers and suppliers synchronize the order and product flows here. But there are still a great number of processes that run manually.
Aside from rigorously timed series production, there are plenty of production facilities that do not deal in large-scale production. They use machines and tools that need to be serviced at irregular intervals. Replacing and topping up auxiliary and operating materials is also performed according to need. In this scenario, the Amazon Dash Button could optimise internal logistics. Attached to the respective machines, it could be used for various materials or even maintenance services. Orders would go to the warehouse, and requests to the servicing and maintenance provider. Using the buttons, internal processes could be initiated and the costs allocated to the correct cost units. If equipped with NFC and a “purchaser” code carrier, it would even be possible to identify the respective purchaser and thus ensure that only authorised individuals can submit material orders or service requests.
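
The flow described above can be sketched as a simple event dispatcher. All identifiers here are hypothetical illustrations, not part of any real Amazon API; the sketch only shows the idea of routing a button press to a warehouse order or a service request, with an NFC-style check on who pressed it.

```python
# Hypothetical sketch of the B2B button flow described above: a press event
# carries the machine ID, an action type, and the purchaser ID read via NFC.
AUTHORISED_PURCHASERS = {"emp-007", "emp-042"}  # assumed authorisation list


def handle_button_press(machine_id: str, action: str, purchaser_id: str) -> str:
    """Route a button press to the warehouse or the maintenance provider."""
    if purchaser_id not in AUTHORISED_PURCHASERS:
        return f"rejected: {purchaser_id} is not authorised"
    if action == "reorder":
        # material order goes to the warehouse, booked to the machine's cost unit
        return f"warehouse order for {machine_id}, cost unit {machine_id}"
    if action == "service":
        # request goes to the servicing and maintenance provider
        return f"service request for {machine_id}"
    return f"unknown action: {action}"
```

In this sketch, the cost-unit allocation simply follows the machine the button is attached to, which is the core of the internal-logistics idea.
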
If we take a step back from the manufacturing industry and consider everyday office life, the Dash Button could also be of use in such environments: for example, an employee takes the last pencil or notebook from the material store and immediately orders replacements by pressing a button on the shelf. If desired, such orders can be combined into a weekly or monthly order that is then submitted automatically at the specified time.
There will certainly be many scenarios like this where the Dash Button could make life a lot simpler, without raising the same security concerns.

The Dash Button – an exciting first evolutionary step

The Amazon Dash Button is a first-mover product of its kind. However, it hasn’t necessarily been greeted with the appropriate levels of euphoria, but instead with a great deal of scepticism. “I am convinced that the Dash Button in its current form will not survive the next two years. But maybe that wasn’t even the idea behind it,” said Gerd Güldenast, Managing Director of hmmh. The Dash Button is the first of a new generation of devices that will evolve over the next few years and find new fields of application. It is a further step in the integration of connected commerce into everyday life, in order to improve and simplify it.

Maybe the Amazon Dash Button will find a wonderful home in B2B commerce.

This article was also published at

Yesterday marked the beginning of the annual Apple Worldwide Developers Conference (WWDC). Over the course of the two-hour kickoff event, there was innovation and information around every operating system in the Apple cosmos. While new software features were presented, Apple made no mention of trending topics such as artificial intelligence, machine learning or virtual reality. It therefore remains to be seen whether – and how – Apple will position themselves in this area.

From an agency perspective, a couple of innovations were extremely exciting, however. Many interfaces are being opened to developers, creating new opportunities for the optimisation of existing apps, and for the conception of new ones.

Siri can now finally be integrated into apps, and a new Maps API improves individual functions and interfacing with third party apps. And with new iMessage apps, completely new service and communication options for brands arise.

Because WWDC runs until Friday, with many labs and sessions held for visitors during this time, further interesting topics may yet be discussed. Here is a summary of the most important changes to the individual operating systems:


The upgrade to watchOS 3 for the Apple Watch contains performance improvements and a reworked Dock. Apple has also devoted more attention to health in the OS’s third generation; for instance, there is an app that helps one breathe properly at regular intervals. Fitness and health are clearly at the forefront here for Apple.


tvOS also got a couple of new improvements, even if these were a bit smaller in comparison. A stronger integration of Siri and ‘Single Sign-on’ (with help from Apple, the user can automatically log into apps via a single click) are the update’s highlights.


The name OS X is refreshed to macOS, and now fits better into the family of names. Applause for a name change? Yes, only with Apple!


Source: Apple

Siri now also moves to the desktop. The highlight: internet payments can be made with Apple Pay. Verifications are made via fingerprint on the iPhone, or with a click of the Apple Watch.


iOS 10 received the biggest update, with many new functions. The first thing that strikes you is the lock screen’s new look and notifications. The widgets also get a visual redesign and can no longer be found in the Notification Center; instead, links are placed prominently on the home screen. With a hard press on an icon, widgets can be opened by means of 3D Touch.


Source: Apple

Apple Music gets a long overdue update, and Apple News gets a new coat of paint. Apple Photos now tags individual photos according to content, which can then be searched. Altogether, the new intelligent functions are an attempt to catch up with Google’s photo app.

The theme of data protection was also mentioned often during the presentation. For example, the iPhone’s own computing power, not the cloud, is used to analyse photos for keywords.

For those who can’t get enough of innovations, there’s also the possibility to while away the hours in Swift Playgrounds and learn how to program apps. The promotion of little ones’ ‘code skills’ is obviously close to Apple’s heart; accordingly, Tim Cook was very happy about the Developer Conference’s youngest participant: a nine-year-old girl!



Source: Apple

The Sample City Lab shows us where we’re headed

Organised by Trend One, the Sample City Lab shows us the upcoming trends that will keep us busy next year. The event focuses on topics such as virtual reality, augmented reality, artificial intelligence, robotics and, in particular, the internet of things. The Plan.Net Mobile team were there in Innsbruck, where they weren’t just thrilled by the view from the Bergiselschanze ski jump, but also by the content on show.

Sample City Lab 1

Nils Müller, co-organiser of the City Labs and founder of Trend One, introduced the innovations shown at the exhibition. One of the exhibits was the NAO robot: a fully programmable, autonomously acting humanoid robot that can supposedly help with topics such as programming, robotics and control engineering, as well as creativity, problem solving and teamwork.

Sample City Lab 2

The scanning robot NavVis measures room interiors quickly and cost-effectively. Three-dimensional models of the interior spaces can then be called up via a browser-based app to realise virtual tours.

Sample City Lab 3

The highlight of the show was the Microsoft HoloLens. The augmented reality glasses allow the user to see information and interactive 3D projections overlaid directly onto their surroundings. The HoloLens works without a computer or smartphone, and can be used independently.

Sample City Lab 4

A few people at the Sample City Lab were allowed to test the HoloLens themselves. Everything from games and videos to Office programs can be controlled with hand gestures.

Sample City Lab 5

The fitness device ICAROS connects workouts with virtual reality. A virtual reality flight simulation is shown while you balance on the device, creating the believable illusion that you’re actually flying through a VR world. The positive side effect: training is fun this way.

Sample City Lab 6

An additional controller on the fitness device ensures that every movement of the device is measured precisely. Furthermore, it can control the virtual reality glasses and trigger specific actions.

Sample City Lab 7

Barbie has also arrived in the digital age. With artificial intelligence, she patiently answers all questions, and will gladly get into conversations. Sometimes, Barbie herself asks for advice, or wants to know more about her counterpart. The answers are surprisingly complicated, and some conversations take a rather interesting course. Childhood dreams come true here.

Sample City Lab 8

A holographic display was an eye-catching highlight. Video projections are reflected in a glass pyramid that conveys a 3-dimensional feeling, and brings the content to life. Additionally, the projections can be examined from three sides, and the scenery can be perceived from various angles.

Sample City Lab 9

The Sample City Lab shows us where we’re headed: virtual reality, augmented reality, artificial intelligence, robotics and the internet of things. These are the themes that drive us, and determine what our world will look like in the future.
Almost everything is fitted out with intelligence, networked via mobile internet and able to react to environmental stimuli. With virtual reality, anyone can quickly immerse themselves in an unknown world and experience new things. Mobile internet connects (almost) everything, and robots undertake tasks that previously only humans could do. Development is moving faster than ever before, and one thing’s for sure: it remains exciting!

Some exciting changes were introduced by the Californian search engine giant at the Google Global Performance Summit in San Francisco last Tuesday. In addition to new features in local search ads and important extensions of the Google Display Network (GDN), Google now also provides expanded advertising and display options in classic search ads, called Expanded Text Ads (ETA).

Plan.Net Performance is one of the first agencies in Germany to test the new Google formats for a client, gaining enlightening experience in the process.

Finally, there is more space with Google Expanded Text Ads

25/35/35. Until now, the number of characters for the title, text lines and URL was limited when creating text ads on Google Search. This limitation could sometimes cause real difficulties for advertisers, for example when trying to promote a “pet owner liability insurance”.

Since last week, Google has offered selected advertisers more freedom: two headlines of 30 characters each and an 80-character line of text offer sufficient space for USPs and calls-to-action. The domain of the display URL is generated automatically from the stored destination URL; in addition, there are two fields for individually defining the URL path.
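
The new limits can be encoded as a quick sanity check. The 15-character limit assumed here for the two URL path fields comes from Google’s published ETA specification; the helper itself is just an illustrative sketch, not part of any AdWords tooling.

```python
# Character limits of Google Expanded Text Ads (beta, 2016):
# two 30-character headlines, one 80-character description line,
# and two optional 15-character URL path fields (assumed from Google's spec).
ETA_LIMITS = {"headline1": 30, "headline2": 30, "description": 80,
              "path1": 15, "path2": 15}


def violations(ad: dict) -> list:
    """Return the fields of an ad draft that exceed the ETA limits."""
    return [field for field, limit in ETA_LIMITS.items()
            if len(ad.get(field, "")) > limit]
```

For example, “pet owner liability insurance” (29 characters) now fits into a single headline, which was impossible under the old 25-character title limit.
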

That the expanded character limit makes ad creation easier is only partly true. In the old format, advertisers were forced to restrict their texts to the most important information. Now there is a risk of using unnecessary filler text that distracts from the actual core message.

Google AdWords: Google Expanded Text Ads

Is this logical compensation for the deactivation, a few weeks ago, of all ads in the right-hand column of the search results? Admittedly, for those used to years of right-hand ads and the left-aligned view, Google’s search results page did look almost a bit empty in February.

The expanded text ads have been available since Monday, 23 May 2016. The first results are promising and confirm the expected uplift in the core metrics (higher click-through rates, CTR, at slightly lower CPC). Google itself predicts an uplift in CTR of up to 20 percent. Since the new format is limited during the beta phase and only a few advertisers have been unlocked, the actual effect will probably only become clear in a few months.

Meanwhile, Google’s strategy of further strengthening the premium positions has not changed. Like other enhancements, the expanded text ads also boost premium positions 1 to 3. Competition for them will not decrease.

GDN: Cross-exchange for Display Remarketing Campaigns and Responsive Ads

Through the Google Display Network (GDN), advertisers can publish classic display ads on a variety of participating websites and blogs. Under the heading “cross-exchange for display remarketing campaigns”, Google now enables its customers to extend their remarketing campaigns to additional inventory sources. Until now, Google relied on the DoubleClick Ad Exchange; DoubleClick is also part of the Google Group.

A major difference between the GDN and the major ad exchanges is the buying model. While in the GDN costs are usually only incurred when an advertisement is actually clicked (CPC – cost per click), the ad exchanges are generally remunerated for each ad impression (CPM – cost per mille). You might think that by expanding the GDN to additional ad exchanges, Google is taking a certain risk. Theoretically this is true, especially since Google most probably buys the inventory on a CPM basis and offers it to its customers on a CPC basis. However, it would not be Google if they did not know exactly what they were doing.

The newly acquired reach is limited exclusively to remarketing campaigns. The CTRs these generate are known to be many times higher than for campaigns with other targeting options; CTRs of 0.20 percent and higher for standard formats are not uncommon. With the higher expected CTR, Google is in a position to pay the correspondingly higher CPMs while securing its own margin. This purchase model can be very successful, as other vendors like Criteo have long proved.
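
The arbitrage can be made concrete with a small calculation. The figures are illustrative assumptions, not Google’s actual prices: buying impressions on a CPM basis and selling clicks on a CPC basis only pays off above a break-even click-through rate.

```python
def breakeven_ctr(buy_cpm: float, sell_cpc: float) -> float:
    """Minimum CTR at which CPM-bought inventory is covered by CPC revenue.

    Revenue per 1,000 impressions is ctr * 1000 * sell_cpc; it must at
    least equal the buy_cpm paid for those same impressions.
    """
    return buy_cpm / (1000 * sell_cpc)


# Illustrative: inventory bought at a 1.00 EUR CPM and sold at 0.50 EUR per
# click breaks even at a 0.2% CTR - the remarketing range quoted above.
```

Anything above the break-even CTR becomes margin, which is why restricting the cross-exchange inventory to high-CTR remarketing campaigns makes the model work.
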

The extension of remarketing campaigns in the GDN to additional ad exchanges thus does not necessarily represent cannibalisation, but rather a useful supplement for Google.

Another announcement concerned “responsive ads for display”, i.e. advertisements that individually adjust to the respective content in which they are placed. This makes it possible to fill advertising spaces in the GDN that do not follow the usual format standards; it was precisely with special formats that DoubleClick was not a very flexible partner. “Responsive ads for display” should have a positive impact, especially on mobile devices, and facilitate native advertising integrations. Google is positioning itself step by step in a “mobile first” world and will significantly expand its reach through such adjustments.

What the new feature actually delivers will only be known in detail after testing. As Google’s “playground” grows, so does the overlap with other areas of marketing. It is therefore all the more important to evaluate all such activities under an overarching strategy and to coordinate the most important ones.

Other advertising opportunities in the local search

Finally, new features for Local Search Ads (LSA) were announced in San Francisco. In the future, advertisers will be able to highlight their ads on mobile devices and in the Google Maps service. “Promoted Pins” put a company’s logo prominently on display during navigation via Google Maps. If a potential customer looks for services or products on the go and clicks on such a pin, current information on offers or promotions will be shown in addition to the usual ad texts. With this innovation, Google is responding to the unbroken trend towards mobile use of its services. According to its own figures, one third of all mobile searches relate directly to local services, such as cafés, restaurants or shops. In addition, mobile searches with a local connection are growing around 50 per cent faster than mobile searches as a whole worldwide.

Google is changing its appearance as an advertising platform amid increasing competition and rapidly changing user behaviour. Facebook in particular has been able to benefit from the increasing mobilisation of internet usage. For advertisers and agencies, this means observing developments and innovations closely, having the courage to experiment, and questioning traditional paths.

Virtual reality (VR) is unavoidable at the moment. It is one of the industry’s most discussed topics. The spectrum of devices spans from Cardboard to the Oculus Rift, and Google introduced a new VR concept called Daydream at its I/O developer conference a few days ago. The technology is market-ready, and the industry is looking for new creative possibilities with which to address consumers, whether at home, in store, or on the go.

Not everything labelled virtual reality actually has virtual reality inside.

Not everyone who discusses and reports on VR means virtual reality in the strict sense; often they actually mean 360-degree video or augmented reality (AR). Two factors help to differentiate these: the user’s environment, and the kind of experience.

With augmented reality, the real environment is enriched with computer generated content that blends with the user’s vision via AR headsets or corresponding apps. The virtual enhancements come in various forms, such as an overlay with additional information, or 3D objects with which you can interact. Often, they are directly connected to the environment (location-based services), or to objects (beacons/QR codes).

With virtual reality, a user is taken out of their physical reality and transported to a closed virtual environment, where they can move around freely. Additionally, the visual sensation can be supported by sound or other stimuli, such as temperature, wind, or smells, deepening the illusion and giving the user the impression of being in the middle of things. This is referred to as an immersive experience.

By contrast, 360-degree videos, which are filmed from a fixed camera position, provide only a limited experience. The user can’t move freely; they can only change their point of view through head movement.

The crucial added value of VR over other technologies is immersion. The feeling of being in the middle of things lends itself amazingly well to creating surprising and compelling brand experiences, and to interacting with the consumer in a special way. But 360-degree videos and AR apps also offer exciting use scenarios. They differentiate themselves from other communication channels through three unique selling points.

1. AR and 360-degree videos make classic communication channels interactive and digital

Digital AR and 360-degree videos enhance the spectrum of classic media, making newspapers or TV spots interactive, for example. With the help of AR apps used on a smartphone or tablet, products from print advertising can be experienced in 3D. Integrated buttons point to a website with more information, or directly to the company’s e-commerce shop.

The New York Times enhanced its print offerings with 360-degree reportages that can be viewed with Google Cardboard. The Guardian has also recently published a reportage of this kind, in which the audience can find out how it feels to be in a 6×9-foot solitary confinement cell. All this creates an emotional kind of reporting, and is exceptionally well suited to storytelling.

2. VR and 360-degree videos overcome spatial distances

An attractive advantage of 360-degree videos and VR is that spatial distances can be overcome; even YouTube recently started to offer live streaming in 360 degrees. This means that brands can take customers to almost any location, and allow them to take part in exclusive events that increase brand interest.

VR and 360-degree videos are particularly exciting for the tourist industry. In order to speak to young travellers and position themselves as an innovative hotel chain, Marriott in New York created a kind of telephone booth that transported visitors to a beach in Hawaii via Oculus Rift. The special thing about this was that the audio-visual sensations were supported by external stimuli: the visitors sensed warmth and spray mist on the skin, and a salty breeze in the nose. With this, Marriott gave them the feeling of being in another place without leaving their physical location.

3. AR and VR intensify the product experience, and turn products and services into something that can be experienced 

Virtual reality makes it possible to intensify the product experience at the point of sale. For the launch of their new hiking boot, the outdoor supplier Merrell sent shop visitors on a virtual hike in the Dolomites, where they had to cross a rickety bridge and feel their way along a cliff. The combination of audio-visual and tactile stimuli makes the experience extremely immersive, and these virtual experiences demonstrated the places to which the new hiking boot could take them. With this, Merrell focused on its roots and spoke directly to its core target group. Because the company supplied an Oculus Rift, it made the technology accessible to many visitors who were not (yet) ready to invest in VR glasses.

In 2014, Serviceplan used VR to stage a virtual test drive together with BMW. With the help of Oculus Rift and a wind machine, ‘Eye Ride‘ achieved a realistic driving experience that was, at the time, unparalleled in its immersion.

It’s not just in-store where the technology offers added value to potential customers; it can also offer it at home. With the Makeup Genius app from L’Oréal Paris, users could try out different make-up looks via AR. Through realistic product presentation, the customer’s uncertainty, which can occur before a sale, was taken away. This was particularly effective during sales.

IKEA is currently testing how customers might in future be sent on virtual shopping tours through its stores. For this, IKEA had a free app developed for the HTC Vive VR system. With the app, one can move freely in the midst of a true-to-scale kitchen, choose different materials with the HTC Vive controller, open drawers and even cook food. Additionally, the company is planning a series of furnishing solutions that customers can explore virtually before buying. This way, interested parties can view products in detail without having to travel to the store.

Involvement, Immersion, Impact

Developments around VR create completely new ways of staging interactive brand experiences. A targeted approach follows three stages. To generate involvement, it is necessary to have a concept and promise of experience that lead to the user’s active engagement with the brand; only then will they take the step towards this technology. The implementation must focus on maximum immersion, so that the user also takes part in the emotional aspect of the brand experience and this new kind of brand staging achieves optimal impact. The proof of efficiency? The expression on the user’s face when they take the virtual ‘Eye Ride’ on a BMW motorbike, for example 😉

First published in German by

In recent months, the keyword “big data” has been something of a Holy Grail in the marketing realm. But until now, many discussions around the topic have been carried out primarily at congresses and conferences, in the rather theoretical manner of the Knights of the Round Table. This is especially so when it comes to external data (third-party data), which advertisers purchase in addition to their own data (first-party data) in order to run more targeted online campaigns.

This is because up until now, campaigns with third-party data existed mainly in the specialist lectures of international advertising service providers, but unfortunately had far too little presence in the German online advertising market. The infrastructure of ad servers, data management platforms and demand-side platforms was available, but the data suppliers who could help a market get off the ground were missing.

But this situation is changing: more and more companies are offering data for sale that is relevant to the German market. This gives advertisers and their service providers good reason to ask two central questions in particular:

  1. How much uplift is third-party data expected to supply to my campaign?
  2. With that in mind, how much may third-party data cost in order to ensure that the campaign remains at least as efficient as it was before, and ideally becomes even more efficient?

The answer to the first question in particular is a difficult one because, for one thing, “the ultimate campaign” does not exist. For another, advertisers generally have little to no empirical knowledge regarding the use of third-party data.

Therefore, I recommend asking the question differently, restated in terms of the second point: if third-party data costs a specific amount, how high must the campaign uplift be for efficiency to remain at least at the current level? If the result is that a minimum uplift of over 30% is required, then at that point at the latest, the purchase of external data should be scrutinised more closely and all media alarm bells should ring.
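
Restated as arithmetic, with hypothetical prices: if performance is measured as cost per result, adding data at a given CPM must be offset by a proportional uplift in results per impression.

```python
def required_uplift(media_cpm: float, data_cpm: float) -> float:
    """Minimum relative uplift in results per impression so that the
    cost per result with third-party data is no worse than without it.

    Without data: cost per result ~ media_cpm / rate
    With data:    cost per result ~ (media_cpm + data_cpm) / (rate * (1 + u))
    Equal efficiency requires 1 + u >= (media_cpm + data_cpm) / media_cpm,
    i.e. u >= data_cpm / media_cpm.
    """
    return data_cpm / media_cpm


# Hypothetical example: a 2.00 EUR media CPM plus a 0.60 EUR data CPM
# already demands a 30% uplift just to keep efficiency level.
```

This is why a data price that looks small in absolute terms can still imply an implausibly high uplift requirement relative to a cheap media buy.
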

In order not to surrender in advance, I offer here, without obligation and free of charge, my three rules of thumb that can be helpful in the use of third-party data in digital marketing.

1. Examine data quality very carefully!

Data offered by a supplier or data management platform must absolutely be put to the acid test before purchase. Enquire about how the data is labelled and whether it really comes from the market in which it is to be used later.

Pay attention to how the data was generated: Is it “hard” data, or were projection algorithms used in the generation of data? If the data was originally collected in the offline world, it must also be examined as to whether the matching procedures conformed to data protection laws. And, last but not least, the question arises as to whether the specified quantity and granularity of the data profile is truly credible in relation to the total size of the target audience.

It is essential to consider in advance how you can analyse the quality of the purchased profiles. For example, measure the hit rate of the purported characteristics, e.g. via a panel or an online survey. Alternatively, are there other measurable key performance indicators (KPIs) in the campaign that the data is meant to improve? If the answer to both questions is “no”, then steer clear of this data.

2. Choose the shortest path!

Campaigns that rely on third-party data can easily run into a quantity problem. Why? Because the quantity of available profiles is generally smaller than the desired amount. This is particularly the case if the target audience is especially narrow and, at the same time, the data quality is expected to be high.

To explain why the quantity of available data is so important, a small technical digression is unavoidable: when external data is used in a campaign, unfortunately, not all of the acquired cookies can ever actually be reached. This means that some data sets are purchased but cannot be used. This occurs, for example, when a portion of the cookies has since been deleted by the users, or belongs to users who do not appear in the environments where the campaign tries to find them again.

This shrinkage is exacerbated by the fact that when data is transferred from the supplier’s system to the buyer’s system, a synchronisation of cookies must take place via the user’s browser: both systems must exchange their cookie IDs. This cookie synchronisation reduces the quantity of data considerably, because every user must at some point be encountered on a website by both systems. Our experience shows that even in the best-case scenario, about one fifth of the available profiles are lost during this process. If the cookie synchronisation is poorly implemented, more than half of the profiles can be lost very quickly.
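Because every sync step loses a share of the profiles, losses multiply along the supply chain. A minimal sketch, with match rates invented for illustration (the ~80% best case mirrors the figure above):

```python
def usable_profiles(purchased, match_rates):
    """Profiles that survive a chain of cookie syncs.

    Each partner in the supply chain adds its own sync step, and the
    losses multiply, which is why a short path from data source to
    delivery system matters so much.
    """
    usable = purchased
    for rate in match_rates:
        usable *= rate
    return int(usable)

# One clean sync keeping ~80% of the profiles (the best case above):
print(usable_profiles(1_000_000, [0.80]))              # 800000
# Two extra intermediaries, each with its own sync, compound the loss:
print(usable_profiles(1_000_000, [0.80, 0.80, 0.80]))  # 512000
```

Even at best-case match rates, two additional hops turn an 80% yield into barely half of the purchased profiles, which is exactly why unnecessary partners should be eliminated.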

The risk of loss can be minimised by letting the cookie synchronisation take place as close as possible to where the data was generated or brought online. So eliminate all unnecessary partners within the supply chain! They only cost money and reduce the quantity of usable data. If the data supplier already employs a data management platform, the data can possibly be transferred directly into the delivery system, and in this way you avoid an additional synchronisation step.

3. Recalculate in advance!

A simple calculation can decide the fate of your data campaign: Set the price that you should pay for third-party data in accordance with the added value that you must achieve by the use of external data. Do data costs eat up the performance improvement that the campaign is meant to achieve? In this case, the use of third-party data would not improve the campaign’s efficiency. If you still have no empirical knowledge as to whether the uplift that must be generated by the data is realistic, then ask experts who can present you with benchmark variables.

When purchasing third-party data, take profile quantities and target audience sizes into account. The acquisition of data does not always pay off. Thousands of users who were, for example, identified as clear-cut interested parties for an especially strong-smelling type of stockfish might be a valuable target audience. However, it is rather unlikely that addressing these people through a narrowly focused re-targeting display campaign will be worthwhile. Here, it would be better to search for other, more cost-efficient methods of reaching these fish aficionados.
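Why a tiny niche audience rarely carries a display campaign can be shown with a rough profit estimate. All rates and prices below are invented for illustration:

```python
def campaign_margin(profiles, match_rate, cpm, ctr, conv_rate, margin_per_sale):
    """Expected profit of a narrowly targeted display campaign.

    A small purchased segment shrinks further through cookie matching;
    the few resulting sales often cannot cover the media cost.
    """
    reachable = profiles * match_rate
    impressions = reachable * 10            # assume ~10 ad contacts per user
    media_cost = impressions / 1000 * cpm
    sales = impressions * ctr * conv_rate
    return sales * margin_per_sale - media_cost

# 5,000 stockfish lovers, 80% matched, 3 EUR CPM, 0.1% CTR,
# 2% conversion rate, 15 EUR margin per sale:
print(round(campaign_margin(5_000, 0.8, 3.0, 0.001, 0.02, 15.0), 2))
```

Under these invented assumptions the campaign loses money outright, which is the point: for very small segments, a cheaper channel than display re-targeting is usually the better search.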

Whether the use of external data in online marketing is worthwhile can generally only be assessed in retrospect. However, it does not hurt to carry out a few simple calculations in advance. In any case, the hardest currency is the experience gleaned from campaign performance values. Wherever suppliers of third-party data can deliver, both in data quality and in price, they have a rather good chance of belonging to the Round Table of advertisers in the future.

First published in German by

Just why is it called a search engine and not a find engine? Why do people google Google? And why does someone from the media business write about Search? All of these are valid questions, but I will leave the first two by the roadside and concentrate on the last one. The answer is just as trivial as the other two questions: because Search is fundamentally no different from an advertising-driven customer journey in a well-aired closed environment.

If this raises questions for you, you can start wondering whether Google has an answer.

Search engine marketing is much too complex for a well-versed layperson like me to handle. It is a science of its own, but it still follows a few economic ground rules. It is all about AIDA when the journey runs from generic terms towards brand keywords. When I ask myself how far I want to scale my budget, the answer is all about diminishing returns: the cost of that last marginal sale or click, not the average, is the basis for my future actions. And yes, it is also about might makes right. Whoever is better known, has the best-optimised website (buying power is surely no handicap here) and is generally more competent will profit disproportionately. Taken at its core, Search is the advertising ecosystem in microcosm. That said, the industry I have just labelled a microcosm is actually a monopolistic billion-dollar business.
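The difference between marginal and average cost is easy to miss, so here is a small sketch with invented cumulative spend and conversion figures:

```python
def marginal_cpa(spend, conversions):
    """Cost of the *last* conversions at each budget step.

    Both arguments are cumulative series. The marginal cost per
    acquisition, not the average, tells you whether the next budget
    increase still pays off; diminishing returns show up as a rising
    marginal CPA while the average still looks healthy.
    """
    out = []
    for i in range(1, len(spend)):
        d_spend = spend[i] - spend[i - 1]
        d_conv = conversions[i] - conversions[i - 1]
        out.append(d_spend / d_conv)
    return out

spend = [0, 1000, 2000, 3000]    # EUR, cumulative (invented)
conv  = [0, 100, 160, 190]       # conversions, cumulative (invented)
print(marginal_cpa(spend, conv))  # marginal CPA rises: ~10, ~17, ~33 EUR
print(spend[-1] / conv[-1])       # average CPA ~15.8 EUR hides this
```

In this invented series the last thousand euros buys conversions at more than twice the average cost, which is exactly the point at which scaling further stops paying off.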

But this is not the end of it: Search is not only a catch basin for interest generated through other web activities, and thus close to the sale. It is also, as described above, a complete sales funnel in its own right. This versatility is the reason this channel is so hard to grasp and pin onto a snazzy strategy chart.

Add to this that it is not merely a good sales story from Mountain View when you are told that even non-brand search can help advertisers. This message is confirmed by our own in-house data, because this source of advertising in particular has shown an incremental influx of capital for the likeable data kraken. Whether this helps, and how much more it helps in comparison with other media channels, can only be gauged on a case-by-case basis.

Here, the devil is in the detail: a quick glance at online attribution often falls short. In particular for multi-channel providers with a high share of offline advertising and assets, such glances often lead to misallocation. Only in-depth, sophisticated modelling can alleviate this problem and separate the intrinsic contribution from incidental gains.

How will this continue in the future? People will get lazier, and search will become increasingly mobile (Android) as well as increasingly voice-activated. Add to this that Google learns more from us than some of us might like. Despite this, many will enjoy the more personalised search results, which will further increase the channel’s relevance compared with competing search engines. This will ultimately lead to a more targeted inspiration phase, and that bodes well for the further growth of this channel’s relevance. Google is hardly known for standing still, so you can already prepare yourself for future innovations.

Deep Learning is a sub-discipline of artificial intelligence (AI) whose basic idea harks back to the 1950s, although for decades it never achieved mass suitability. With sinking costs for computer chips, and thus also for networks, as well as the constantly growing amount of digitally available data, machine learning has undergone an impressive renaissance over the last few years. Deep Learning enables computer systems to recognise certain patterns in volumes of data through iteration, i.e. the repeated execution of commands, and to refine these patterns further and further. In short, Deep Learning machines are learning how to learn. The range of applications is virtually endless, with only one requirement to be met: data in digital form must be available in large amounts so that useful patterns can be extracted.
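The idea of “learning through iteration” can be shown in a few lines. The sketch below is not Deep Learning itself but its 1950s ancestor, a single artificial neuron (a perceptron) that nudges its weights a little after every error until it recognises a simple pattern (logical AND); Deep Learning stacks many such units in layers, but the core loop of predict, compare, correct, repeat is the same:

```python
def train_neuron(samples, epochs=20, lr=0.1):
    """Train a single artificial neuron by iterative error correction."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred
            w0 += lr * err * x0   # nudge each weight a little
            w1 += lr * err * x1   # in the direction that
            b  += lr * err        # reduces the error
    return w0, w1, b

# The pattern to learn: output 1 only when both inputs are 1 (logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_neuron(data)
print([1 if w0 * x0 + w1 * x1 + b > 0 else 0 for (x0, x1), _ in data])
# prints [0, 0, 0, 1] – the pattern has been learned
```

After a handful of passes over the data the neuron classifies all four cases correctly; no rule was programmed in, the system found it by repetition.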

Companies in Silicon Valley in particular are currently betting on this reawakened trend. The pace at which new insights can be won from the now massive amounts of available data is enormous: in 2009, a team around Geoffrey Hinton at the University of Toronto delved into speech recognition. After intensive training, their software was far better at converting spoken words into written text than all of its predecessors. Two years later, Google applied Deep Learning to data from its service YouTube and let it sort the data into several categories. Alongside categories such as ‘human faces’, the category ‘cat’ also appeared, to considerable amusement.

Deep Learning has evolved enormously since then. Only a few weeks ago, the Google computer programme AlphaGo beat the hitherto dominant champion Lee Sedol at the strategy game Go. Many consider this a milestone of AI, even if such excursions by Google should be seen more as a showcase. Google’s actual fields of application lie in search and the presentation of search results. For the company, the so-called RankBrain, which leads to even better search results, is much more important, because it is supposed to secure future domination of the search engine market.

Deep Learning is booming

The list of other current examples is already a long one – and it will grow even further in the future.

  • Facebook’s new messenger assistant M, for instance, is being fed Deep Learning insights, which can result in entirely new services. Through machine-led interactions, the user gains a convenient digital assistant that facilitates everyday life through interactive calendar and reminder functions. As recently presented at the company’s annual developer conference, Facebook’s chatbots are becoming more powerful thanks to machine learning. Not much more is needed before a full-fledged assistant emerges that can make travel arrangements and administer an account.
  • IBM, Oracle and eBay are working on new solutions that are only possible because of Deep Learning. The goal is to make the technology ever more efficient and to tailor search results or suggestion lists to the needs of the user.
  • Siri, Majel and Cortana are speech input systems designed to facilitate input and search on smartphones of the platforms iOS, Android and Microsoft. The vision is devices that can be operated by voice alone. These applications do not only revolve around an algorithm-driven results list, but also around recognising semantic connections faster and better so as to increase the programme’s intelligence further and further.

It is also conceivable that Amazon will use this technology to further refine its flow of goods. In doing so, the online merchant can get closer to its dream of delivering goods virtually in real time. Should Amazon be capable of developing prediction models that place goods in the appropriate warehouse before the customer orders them, the merchant would not have to stock the entire inventory in each warehouse. While this is still a dream of the future, it is already certain that Amazon is working on internet-connected speech input devices such as Alexa, which are likewise supposed to facilitate everyday life.

The world will see lasting change because of Deep Learning within the next five to ten years. These innovations will also have consequences for the job market. We will gain insights through Deep Learning that would not be possible without it. Data protection in particular represents a big challenge, because not everything that is possible will be applied to the advantage of the consumer. The challenge consists of finding the right norms, because there are no technical limits, and no industries, in which Deep Learning could not be used. As soon as certain patterns have been identified, in whatever field, a huge potential for optimisation exists. These new insights will then be used in the most diverse fields to exhaust their full potential, increase reliability and make the technology ever easier to use.