The future of search engine marketing is fully automated, and it has already begun. As you read this, artificial intelligence is managing millions of AdWords campaigns on Google. Informed objection can still help put the power of algorithms in its place, but there is no stopping the trend.

The traditional keyword has long ceased to suffice as a sole targeting parameter. In the past two years, the level of partial automation has risen steadily. Responsive ads and smart bidding are already a reality, and they hint at the direction that search engine marketing will take in future.

Platforms like “Google 360” and the “Adobe Marketing Suite” show the way: it is clear that both systems are consistently broadening and enhancing the options for integrating and linking external data sources. Google and Adobe are already very close to achieving the aim of comprehensive access to the customer journey.

SEA campaigns are becoming more individual

The range of automation solutions on offer is a major factor driving demand for these options. At the same time, the complexity of digital campaigns continues to rise. Google and Adobe enable campaigns to be enriched with ever more profile data from a wide variety of sources, so traditional SEA campaigns increasingly rely on profile-based, individualised management. Every profile has a scalable number of matches with relevant search terms, and the sum of these matches will replace a more or less extensive keyword set.

At the same time, management of modern SEA campaigns will exceed what is humanly possible and will only succeed if artificial intelligence helps to process the large quantities of data needed for the targeted management of ads. In an ideal scenario, machine learning will help to steer the correct path between campaign aims, competition and search relevance.

Ultimately, the user decides between success and failure

While management of profile data is automated as much as possible, the quality and relevance of campaign content comes from the digital offering of the companies running the adverts. Whether it is a website or a data feed – companies like Google do not mind where the content comes from. Rather, the deciding factor is whether the data is relevant and current, responds to demand among the target audiences and is technically accessible to the marketing platforms. Optimisation of its digital offering makes the ultimate difference between a company and its competitors when it comes to search engine marketing, as its success or failure is still dependent on the user at the end of the day.

For agencies, this means that the SEA manager of the future will have to bring additional skills to the table. Process automation and increased data management in modern search engine advertising require strong division of labour and specialisation within the team. From programming and optimisation to quality assurance, previously separate functions from search engine optimisation (SEO), analytics, consultancy and traditional SEA will need to interact.

In the age of automated search engine marketing, interdisciplinary teams of specialists will need to ensure that agencies and their clients do not put themselves blindly at the mercy of a technological black box, and that they keep the most important levers for economic success in their own hands.

The best way forward is always together. This key phrase characterizes not only the message of the Easter holiday, but also the SEO news for the month of April.

Why IT security is so relevant for SEO performance

Search engine optimisers are happy to cast themselves as their customers’ caretakers. After all, SEOs stick their noses into almost all areas of the organisation and operation of digital assets such as websites or apps. Conscientious SEOs are needed not only as strategic analysts, but also as nagging quibblers who, beyond structural, content-related and technical problems, are at some point confronted with security issues. Marketing staff are not responsible for IT security, of course, but a lack of awareness of online security can certainly hurt performance in search engines.

Today’s basic SEO knowledge should include the fact that secured client-server connections via the HTTPS protocol have become an essential quality factor for search engines. At least as important as such formal security components, however, is the monitoring of automated bot traffic on one’s own servers. Many bots serve perfectly legitimate purposes when it comes to website crawling; domain indexing for a search engine is the best example. According to a current study by the US bot specialist Distil Networks, however, around 19 percent of all active bots have darker motives: copying copyrighted content, probing a server for potential vulnerabilities, or even distributing malware. Individually unproblematic, these bot visits can in total and over time throttle loading speed or even prevent the delivery of websites altogether.
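The monitoring described here can begin with something as simple as tallying user-agents in the server’s access log. Below is a minimal Python sketch, assuming combined-format log lines; the sample pattern and log format are illustrative only, and since malicious bots routinely fake their user-agents, self-declared crawler names are no more than a first, rough indicator.

```python
import re
from collections import Counter

# Self-declared crawlers usually mention one of these words in their
# user-agent. (Illustrative pattern only - malicious bots fake user-agents.)
BOT_PATTERN = re.compile(r"bot|crawl|spider", re.IGNORECASE)

def tally_user_agents(log_lines):
    """Count requests per user-agent in combined-format access log lines,
    where the user-agent is the last quoted field on each line."""
    counts = Counter()
    for line in log_lines:
        match = re.search(r'"([^"]*)"\s*$', line)
        if match:
            counts[match.group(1)] += 1
    return counts

def declared_bots(counts):
    """Return only the user-agents that identify themselves as bots."""
    return {ua: n for ua, n in counts.items() if BOT_PATTERN.search(ua)}
```

Comparing the request volume of such self-declared crawlers against overall traffic gives a quick sense of whether bots are eating into server capacity; anything beyond that requires dedicated bot-management tooling.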

Both have immediate negative effects on visibility within search engines. In addition to automated attacks, hacking in particular takes the limelight. The term refers to changing content on a website through illegitimate intrusion from outside. According to an analysis by the world’s largest domain registrar, GoDaddy, around 73 percent of all hacked websites are taken over and changed for purely SEO reasons. In most cases, sites with relatively high authority in a specific topic area are hacked in order to redirect potential visitors or to place illegitimate links. The gravest consequences of such a hack are not only the potential damage within search engines, such as a loss of incoming traffic and visibility, but also direct harm to website visitors through fraudulent capture of their personal data (phishing) or infection with malware via compromised downloads.

Strong together

The consequences of insufficient IT security for marketing and search can be so severe that the two areas have a great deal in common. Neither effective search engine optimisation nor comprehensive IT security can be outsourced; both must be the result of shared responsibility and teamwork. Those who continue to compartmentalise their thinking and organisation will not really succeed in either area.

The gaming market is booming: according to GfK, the gaming industry generated sales of around 4.37 billion euros in Germany in 2018. ESports tournaments fill entire stadiums worldwide, Candy Crush Saga leads the Google Play Store ranking of top-selling apps, and the augmented reality game Pokémon Go led millions of users on a hunt for virtual Pokémon. Playing triggers emotions that marketers can take advantage of: fun, ambition and happiness.

There’s a child in every one of us

The play instinct is natural in humans. For children especially, playing is important for mental and physical development. But anyone who believes the play instinct is limited to children or adolescents couldn’t be more wrong. According to a study by the Association of the German Games Industry, 28 percent of computer and video gamers in Germany are 50 or older. With a total of 9.5 million players, they represent the largest age group in Germany. The average age of gamers has been rising for years: whereas it was 32 years in 2013, it rose to 36 years in 2018. These figures show that even your grandma is a target group that should not be underestimated, especially against the background of our ageing society. Thanks to their social, interactive character, games may even counteract social issues such as loneliness in old age.

In addition to the classic target group of so-called heavy gamers, i.e. players who spend an above-average amount of time playing video games, there are also people who would only occasionally like to play computer or video games but are often put off by the relatively high purchase costs of a gaming PC or games console. This may change this year, however, with the planned launch of Google’s new game streaming platform “Stadia”. According to Google, games are streamed from the cloud and can be played on any device using a dedicated controller. Although nothing is yet known about the service’s pricing model, it can be assumed that it will open up a previously untapped target group.

New technologies offer a wide playground for stationary retailers

According to the German Retail Association, stationary retail is still struggling with declining customer numbers. One way to attract more customers to city centres and shops is to make the shopping experience more exciting and interactive through gamification: the integration of playful elements into a non-game context. These playful elements address human needs such as the desire for interaction, ambition, competition or reward. The aim is to motivate players to adopt desired behaviours, such as increased buying intention and loyalty or a higher number of customers in the store.

Some companies have already successfully implemented the gamification approach in their businesses. In the spring of 2018, the bookseller Hugendubel used its “Bookbuster” campaign to motivate customers to visit its branches. It developed a mobile game in which users could test their knowledge of current books across three levels: Level 1 was a virtual book-cover puzzle, Level 2 required users to guess book titles represented by images, and Level 3 used Augmented Reality, letting participants collect virtual birds and win books in the process. According to Hugendubel’s marketing director Sarah Orlandi, the campaign led to increased visitor numbers and increased sales.

In addition to Augmented Reality, Virtual Reality is also ideal for making the shopping experience more exciting and playful. For the opening of a new IKEA store in Dallas, the furniture manufacturer has developed a Virtual Reality Experience where visitors can immerse themselves in the IKEA world. A virtual cushion throwing game enabled playful interaction with the products. In another VR experience, the participants learned in a playful way about the sustainable design process of an IKEA bamboo lamp.

Another successful gamification example comes from Nike. To promote the new “Epic React” shoe, trying the shoes on in store was combined with a three-minute motion game. Before getting on the treadmill, customers created an avatar of themselves. The avatar was controlled by the customer’s running movement and a hand-held button for jumping. In total, there were four different worlds for participants to explore as they playfully tried out the new Nike shoe.

As these examples show, there are no limits to fantasy and creativity in gamification. However, retailers should make sure that the game elements support the shopping experience rather than being mere gimmicks that distract from the buying process.

Gamification supports everyday agency life

Gamification will also play an increasingly important role in the e-learning sector. The online advertising industry continues to evolve, new disciplines are being added, and new tools keep emerging whose functions employees need to learn.

Anyone who has ever attended such a course knows how theoretical it can be and how quickly what was learned is forgotten. If onboarding events and tool trainings were gamified, knowledge could be imparted in an entertaining way. Companies can, for example, set incentives that motivate employees to pursue further training by collecting points or reaching the next level.

Salesforce, an international provider of enterprise cloud computing solutions, is a prime example. A dedicated learning platform called “Trailhead” was developed as a playful introduction to its software: learners receive points and badges as rewards for solving tasks. The competitive element promotes learning and the later use of the tools in everyday work.

Gamification approaches are also conceivable for recruiting and innovation workshops. So, dear colleagues, be creative and let your play instinct run free!

New technologies, devices, and content formats are challenging search engine experts around the world at an ever-increasing rate—but now help is coming from an unexpected source. What little helper can we expect to see in future strategic development and day-to-day business? Find out more in our March edition of SEO News.

Fraggles are back

The fraggles are loose, and they’re slowly taking over the Google world of tomorrow. Just to clear up any confusion right from the start: we’re not talking about the radish-loving, 65-cm tall (source: Wikipedia), cave-dwelling humanoids from the 80s TV show of the same name.

Like their namesakes, the fraggles our search engine optimisers have been working with for some time now are small and dynamic. Here, though, the word refers to content fragments identified by Google that can be scattered, either isolated or in innumerable combinations, throughout the ever-growing landscape of platforms, devices, and technologies.

Cindy Krum, a mobile marketing expert based in Denver, Colorado, was the first to use the term fraggles in this context. She says fraggles is intended as a portmanteau of “fragment” and “handle”, and describes them as Google’s response to dramatic changes in user behaviour and to the technological framework of websites, Progressive Web Apps (PWAs), personalised web services, and data feeds.

What these digital assets have in common is that most of their content is assigned to a single URL. One driving force behind this trend is adaptation to the needs of the mobile age; another is the development of browser-based technologies like JavaScript and AJAX, which can generate individualised content dynamically.

Google, Bing, etc. adjusting their indexing

As a result, Krum says, the fixed allocation of content to URLs is being supplanted; search engines are increasingly indexing mere fragments of content from individual websites, feeds, or apps. Rather than indexing entire websites page by page, she explains, Google, Bing and friends now face the challenge of fishing the most relevant content fragments from a massive ocean of conventional HTML, dynamic JavaScript, and endless streams of XML. Krum believes that Google’s Mobile First Index, which has been online for over a year, is simply a huge dragnet for fraggles of all types.

Indeed, looking at how the major providers’ search results have developed over the past two years, the fragment theory makes sense. Both Google and Microsoft are continuously experimenting with new content presentation modes and formats, especially on mobile devices: everything from integrated map and rating displays in local search results, to comprehensive reports on people, places, and brands through the Knowledge Graph, to concrete answers to frequently asked questions through Featured Snippets.

Search engines: the universal assistants of the future

Moreover, search engines are adapting their results ever more precisely to users’ search intentions and usage contexts. This development is sure to continue into the dawning age of voice assistants. Having a computer-generated Google Assistant phone the hairdresser on your behalf is just the first of many coming high points as search engines differentiate themselves into ever-present, universal information and assistance systems.

Relevance and consumability are inextricably linked in systems like these. Whether on the phone, watching television, or driving, modern users have neither the desire nor the ability to look through a page of search hits for the answer they need—much less scroll through a website. The real advantage of the fraggle concept lies in the immediacy and flexibility of small fragments of information, delivered to countless combinations of usage situations and device preferences.

Fraggles highlight Google’s growing emphasis on the customer journey

Fraggles also fit seamlessly into Google’s new strategic alignment of search results to user journeys. To celebrate the 20-year anniversary of its search engine, Google announced its intention to stop viewing search activities as a series of individual queries, but rather to use context and history information to try and pinpoint the user’s exact intentions and position within the customer journey. This, combined with artificial intelligence, means that search results are now meant to be seen not as a results-based service, but as a conversation. Fragments can be incorporated into this scenario as well, whether as product information, concrete purchase offers, or special queries during the post-purchase phase.

What does this all mean for SEOs? First and foremost, it means they will need to continue developing their own approaches to voice and visual search queries. Markups for structured data and voice responses (Google Speakable) need to be part of their standard repertoire, as do keyword analyses organised by intentions along the customer journey.
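To illustrate the Speakable markup mentioned here, the following Python sketch assembles a schema.org SpeakableSpecification block as JSON-LD. The URL, page name, and CSS selectors are placeholder assumptions; in practice the output is embedded in the page inside a script tag of type application/ld+json.

```python
import json

def speakable_markup(page_url, css_selectors):
    """Build a JSON-LD block flagging parts of a page as suitable for
    text-to-speech playback (schema.org SpeakableSpecification)."""
    data = {
        "@context": "https://schema.org/",
        "@type": "WebPage",
        "name": "Example article",        # placeholder title
        "url": page_url,
        "speakable": {
            "@type": "SpeakableSpecification",
            "cssSelector": css_selectors,  # e.g. headline and summary blocks
        },
    }
    return json.dumps(data, indent=2)

# Hypothetical page with a headline and a short summary marked as speakable.
markup = speakable_markup("https://www.example.com/article",
                          ["#headline", "#summary"])
```

The point of the selector list is that only short, self-contained passages are flagged for read-aloud delivery, which matches the fragment logic described above.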

It doesn’t matter if some people are still arguing about whether the hype is over or not. The fact is that extended reality (augmented reality, virtual reality, 360° film) has become indispensable in many areas. It has already solved many problems in marketing alone. You just need to take a close look at what the differences and therefore advantages of the individual presentation forms are to see this.

There can only be one search engine! This statement does not sound very much like diversity and transparency, but rather monopoly, one-sidedness and dominance. But the reality is that Google has unrestricted control over the global search market. Whether this will remain the case depends largely on global competition. This issue of SEO News for the month of February is dedicated to the challengers and eternal second-placers in the global search market.

No choice but to be happy with Google

Data protection is not just a local issue, a fact which has recently come to the attention of Silicon Valley. In the last few years, European and national competition authorities have put massive pressure on Google and Facebook with sensational rulings on the collection and use of personal data.

According to recent figures, Google’s parent company Alphabet Inc. had to spend more money on fines last year than the company paid in taxes. The justification for bringing these successful proceedings is always the same: abuse of a dominant competitive position in the provision of services or products.

There are many causal factors behind the often quasi-monopolistic market shares of search or social media platforms in the digital world. But the search industry is more than familiar with examples of how former market leaders (Yahoo, AltaVista) can be overtaken by an unknown competitor (Google). So what about competition in organic search in 2019? Is there a contender for a post-Google world? And is it worthwhile for SEOs to take a broader view of the competition?

You don’t have to spend much time checking the numbers to demonstrate Google’s market power: its share of global search currently stands at around 90 percent. The company’s professed goal is to combine the industry’s largest trove of data with artificial intelligence to form an invisible and omnipresent information, solutions and convenience machine. The voice-controlled Google Assistant is an important cornerstone here. But a closer look at voice search shows that the subject is more complex than it may seem. Those grey talking boxes are just information middlemen in a larger game; the search engine inside is usually not a product of the brand on the case. Microsoft’s search engine Bing, for example, is the actual supplier behind the search results for Amazon’s bestselling voice assistant Alexa. Until the end of 2017, Bing was also behind Apple’s voice assistant Siri. More recently, the company from Cupertino has switched to Google search results, with the exception of image searches, which are still supplied by Microsoft. Particularly as local searches on smartphones or in the car are increasingly conducted by voice, Bing should not be written off or disregarded as a search system.

Expansion is creating a more diverse search market

The search market continues to expand as a result of technological evolution, so that even small search providers can show surprisingly good results in their own niches. “DuckDuckGo”, the search engine for anonymity and the protection of personal data, claims that the number of searches carried out on its platform has almost doubled since 2016. According to an analysis by the analytics service SimilarWeb, the US provider leaves even industry giants like Bing in its wake when it comes to bounce rates and user engagement. The study attributes this in part to the fact that DuckDuckGo users are more technologically aware and more sensitive to data protection issues.

The fallen giant Yahoo is not planning to make a search comeback

With just under four percent of the global market share, the former search pioneer Yahoo is still in the game. For several years, however, its search technology has been provided by Microsoft, and since the portal was sold to the US telecommunications company Verizon in 2016, a return to a search business of its own has no longer been on the agenda for the company founded in 1994, even though the service remains quite popular in Japan.

Similarly, even the small search engine “Ask” is still in business, holding onto a stable market share of around four percent, at least in the US. Ask started in 1996 with its own search technology; around 15 years ago, however, it morphed into a social question-and-answer portal that attracts a relatively stable core audience, though it never got past the beta version in Germany.

The real challengers for Google are in the Far East

You have to look all the way to Asia to find a potential challenger for Google: China has developed into a search engine market of its own with a similar economic potential to that of the West. However, under the conditions of national censorship, it operates according to its own rules, and Western corporations are systematically denied access. But companies like Baidu or Tencent are in no way inferior to Silicon Valley in terms of technology. The race between the USA and China in the fields of artificial intelligence and quantum computing will also be trendsetting for the global search market.

With the third of his famous three laws, Arthur C. Clarke came very close to predicting the future. He wrote that “any sufficiently advanced technology is indistinguishable from magic”.

In today’s world, in which technological innovation seems to be moving forward at warp speed, the revered science fiction writer seems to be closer to the truth than we might think…

Technology is in the air

In just ten or so short years, technology has gradually become invisible. The cumbersome computers that once took up space on our desks have morphed into tablets and smartphones. Bulky cathode-ray tubes have yielded floor space to flat-screen TVs that some people even use as works of art. The wires that once connected telephones to the network have disappeared, leaving behind WiFi waves and 4G. Technology is now omnipresent in our daily lives. But it has moved out of sight.

Screens themselves also seem to be disappearing, giving way to connected speakers and voice command technology. We are more surrounded by technology than ever, but let’s face it… we can no longer see it. The ultimate expression of this disappearance is Amazon Go. In reality, there is nothing technological about the customer experience at Amazon’s checkout-free supermarket. You go in, take what you need from the shelves, fill your basket and leave. All the shop’s electronic equipment – sensors, cameras, and of course computers – is hidden behind the scenes, out of customers’ view.

From their perspective the shopping experience is no more “digital” than buying a lemon on a Friday evening from your local grocer’s. All the technology that Amazon uses has actually become completely transparent.

But there is still a magical element about this shop, in which there are no human interactions at all. Helping yourself, leaving the shop, and seeing your bank account automatically debited with the correct amount is the most positive brand of magic that modern technology can offer!

Power and Data

The magical Amazon Go experience owes its appeal to a whole host of technological innovations that we now place under the umbrella of “Artificial Intelligence”. Amazon Go uses weight sensors that detect when a product leaves a shelf. Cameras follow buyers and identify their movements. And there are computers that can link up these sources of information and determine who has purchased what.

In a nutshell, the boom in information transfer and processing capabilities is what has made Amazon Go possible. While just a few years ago we were still struggling to analyse complex statistics, the increase in computers’ processing power and the deployment of high-speed networks (fibre, and soon 5G) mean that we can now process images and videos in real time. Computers have learnt how to handle eminently complex data on our behalf.
Image interpretation is commonplace and opens up a plethora of new possibilities. To mention only some of the most striking examples, consider the facial recognition experiments carried out in Shenzhen, China: detection of cheating during exams, identification of a criminal inside a crowded stadium, and so on, not to mention the attribution of a “social” score to inhabitants depending on their behaviour.

Here again, we are talking about transparent technology that has a very real impact on people’s lives.

The end of empathy

In fact we are seeing the impact of accelerating technology in our everyday experiences already. What criteria can be used to deny a Chinese citizen access to an international flight? And most of all, what can the citizen do to argue against these criteria?

Doesn’t basing administrative decisions on thousands of statistics mean casting shadows on real life?
In the United States, local authorities are using AI to decide how social benefits should be allocated. An increasing number of cases are being processed “digitally”, based solely on objective criteria. By developing fully automatic, and therefore “objective”, systems, the authorities are actually creating discord.

Most federal employees using these systems have found them to be a way of relinquishing the burden of responsibility: “Our system has denied you access to this loan”. For beneficiaries, the programmes are seen as the end of empathy and human understanding. A refusal that is backed up with the humanity and compassion of a real-life personal explanation is easier to accept than when the decision is generated by a heartless source of artificial intelligence, which allows no response or opposition to its arguments.

Lost in digitalisation

In addition to Arthur C. Clarke’s “magic”, there is a fear of a certain modern illiteracy, the human mind becoming unable to understand the ins and outs of a decision. Because obviously, the ability of computers to store and process information bears no relation to human intelligence.

When faced with a decision involving an algorithm that impacts our daily lives, we don’t know how to react. Simply because we can’t understand and discuss the various factors involved in the decision. A counsellor can present arguments, even though they may be clumsy, whereas AI remains cold and explains nothing.

In fact, that’s the problem. The very criteria that make AI such an efficient mediator are – by definition – too complex to be understood by the people who are affected by them. How can the Shenzhen resident who is refused access to an international flight understand the reasons for this refusal? And above all, is the person given the opportunity to know how their actions may impact their visa request before it all begins?

And that’s what digital illiteracy is: not understanding the impact that technology has on our daily lives, and feeling that we are losing all control. It’s a type of curse.

While progress in artificial intelligence has aroused fear about the destruction of humanity – known as the Skynet syndrome – we worry somewhat less about the stranglehold that algorithms have on our daily lives. No longer understanding the world around us, how we interact and make decisions, and above all what impact our actions may have, are increasing dangers for our society.

While Arthur C. Clarke did predict the rise of magic, he couldn’t have known that it could be of the black variety.

Translated into English by Ruth Simpson

This article is part of Serviceplan’s Twelve #5 issue.

In our regular series The inside story x 3, experts from the Plan.Net group explain a current topic from the digital world from different perspectives. What does it mean for your grandma or your agency colleague? And what does the customer – in other words, a company – get out of it?

As we continue to take advantage of online services, apps, and websites of all kinds, we are creating enormous quantities of data. This data is often stored in clouds and can be linked back to each and every user. At the same time, most Internet users are failing to take the protection of their data as seriously as they should, while companies are often failing to keep up with the rapid pace of developments and resolve dangerous security vulnerabilities fast enough. Even policymakers have only recently reached the point of being able to enforce existing data protection laws.

A case currently receiving wide media coverage, in which a 20-year-old man is accused of spying on politicians and celebrities, shows just how easily data stored in the cloud can be used to infiltrate entire personal networks.

“No Grandma, you haven’t broken the Internet!”

Let’s admit it: who among us hasn’t been just a click away from a potential hacker attack? Even younger users are often tricked by false landing pages or phishing emails that invite them to reveal all of their data and passwords, together with those of their contacts. And net-savvy grandparents who have opted to embrace progress unfortunately aren’t safe either, as their grandchildren may have been kind enough to save all of the passwords to sites bookmarked in their browser. “It’s all just a click away, you see?”

Users are most commonly tricked by deceptively convincing emails that invite them to open an attachment or link. Once opened, a website asks the user to log in to their bank account for the purposes of authentication. If fallen for, this provides hackers with all of the access data they need to empty pension accounts and cause a great deal of upset. And as if this wasn’t enough, criminals are now able to purchase security vulnerabilities on the Dark Net, meaning that they no longer even need a hacker’s technical know-how in order to ply their trade online. Attacks of this kind are completely random, placing even unsuspecting grandmas at risk. No one solution is enough to protect against a threat as complex as this.

A first step in the right direction is to use complex passwords and a password manager app – and, most importantly, to use a different password for each online service.
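What “complex” and “one password per service” can look like in practice is easy to sketch with Python’s standard-library secrets module. The length, character set, and service names below are illustrative choices, not recommendations from the article.

```python
import secrets
import string

# Letters, digits and punctuation give a reasonably large character pool.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    """Return a random password drawn from ALPHABET using a
    cryptographically secure random source (secrets, not random)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per service - never reuse them across accounts.
passwords = {service: generate_password() for service in ("mail", "bank", "shop")}
```

A password manager then remembers all of these for you, so the only password you actually memorise is the manager’s master password.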

“Safety first – even in the ‘safety’ of your workplace”

As an Internet user, you aren’t only responsible for yourself, but also for the security of your colleagues and of the employer whose data you work with. Backups of your own or your company smartphone are now often stored in a cloud automatically, which makes it almost impossible to prevent your colleagues’ data and contact information from being stored externally. If a hacker gains access to a personal account, such as Google Mail, Apple, or Facebook, this automatically gives them access to others’ business accounts and contact information, even if the company’s security standards have been followed.

One solution for this problem is provided by “sandboxes”, which manage the context of usage to help distinguish what is private from what is work-related. To begin with, however, businesses need to establish guidelines that make it clear which clouds are generally suitable to use and which should be avoided in a work-related context.

So what can my company do to properly counter this rapidly growing threat? Here too, the top priority is not to underestimate the human factor. Even the most sophisticated technical safeguards can be rendered ineffective by one employee’s careless actions. Employees who use the Internet in their work should follow the same safety measures that apply to private users. For management this means that, in addition to making use of the available technical solutions, the company must also train its employees regularly in the safe use of cloud services, smartphones, email accounts, and other tools. And if a company provides goods or services over the Internet, there is always a danger that an attacker will succeed in hacking into its applications.

“But why on earth would anybody want to hack my company?”

Many clients, and small and medium-sized businesses in particular, underestimate the threat that hackers pose to them, asking: “Why on earth would anybody want to hack us? We’re far too small and insignificant.” Today’s mass hacks, however, are not about hackers targeting specific victims. The Internet is home to search engines (Shodan is a well-known example) which, like Google, scan the entire Internet – but in order to catalogue the infrastructure it runs on, including the manufacturers of servers, routers, and so on, as well as the software versions installed on them.

If a hacker knows that version 1.4 of web server software “A” contains a vulnerability they can exploit, they’ll first run a search for that version online before launching an automated attack on all of the potential targets that the search engine suggests. This means that anybody with a presence on the Internet can be identified by hackers – albeit indirectly – as a potential target. The only way of protecting yourself is to know the threat, and to invest in security for your systems and training for your employees. In a live interview at CeBIT 2017, Edward Snowden answered a question about how the Internet could be made a safer place. His response: “Everybody who contributes something to the Internet, whether via text, videos, apps, shops, cloud services, or similar, has an obligation to make their contribution as secure as possible.”

Even when using third-party software (software libraries) and other service providers, without which modern e-commerce platforms could not exist, it’s important to know the security risks. Only then can you minimise the risk of an apparently secure system being weakened via the unsecured “tunnel” of a third-party provider. Regular updates are an absolute necessity here, and regular penetration testing of all system components is another precaution providers should take. Nowadays, specifying security requirements (for example, in the form of secure coding guidelines) is also necessary when engaging implementation service providers, hosting companies and so on. As no industry-wide software standard has been defined in this respect, it’s important to work with experts capable of defining a state-of-the-art standard. When service providers keep their cards close to their chests on this score, it is often the first indication that there is no long-term guarantee that security vulnerabilities will be resolved.
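To illustrate the idea behind such audits, here is a deliberately simplified Python sketch that flags installed components whose version appears in a known-vulnerability list. The component names, versions and advisories are invented for the example; real audits rely on dedicated tooling and vulnerability databases rather than a hand-written check:

```python
# Hypothetical inventory of installed third-party components
installed = {"webserver-a": "1.4", "imagelib": "2.0", "sessionlib": "3.1"}

# Hypothetical advisories: component -> set of known-vulnerable versions
advisories = {"webserver-a": {"1.3", "1.4"}, "sessionlib": {"2.9"}}

def vulnerable_components(installed, advisories):
    """Return the names of installed components running a version
    that appears in the advisory list."""
    return sorted(
        name for name, version in installed.items()
        if version in advisories.get(name, set())
    )

print(vulnerable_components(installed, advisories))  # only webserver-a 1.4 matches an advisory
```

The same logic – match your inventory against published advisories, then update – is what makes the “version 1.4 of web server software A” attack described above preventable.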

Nope, no SEO trends for 2019 at this point – definitely not. Someone else can take care of that. Instead, in the first SEO newsletter of the new year, we take a look at what makes life really exciting: contrasts and conflict. Having said that, anyone who thought we might get involved in the current gender debate can rest easy. We are sticking with search engines and, more precisely, the antagonism of reach versus conversion, as well as the eternal SEA versus SEO dichotomy.

SEO & CRO: A relationship with conflict potential

A long dwell time on the domain, a high number of page views per visit, engagement with rich media and, finally, a conversion – for example a purchase or a lead – all represent positive user signals, which search engines usually reward with increased visibility. The top priority of the search engine optimiser is therefore to generate these positive user signals with relevant content on a powerful technical platform. So far, so causal, not to mention obvious. But is the connection really that simple? While high loading speeds and suitable value-added content can be achieved with classic SEO work, generating and measuring positive user signals requires a more advanced method: conversion rate optimisation, or CRO for short.

In order to move forward in this area, CRO develops test scenarios with different layouts, designs, mechanics and content. Depending on the results of these user tests, successful scenarios can be implemented, offering the potential target group a user experience with added value. In an ideal world, this approach improves both visibility in search and the performance of the website itself.
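The statistical core of such test scenarios can be sketched in a few lines. The following Python example applies a standard two-proportion z-test to hypothetical conversion figures for two variants; real CRO tooling additionally handles sample sizing, test duration and multiple comparisons:

```python
from math import sqrt

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-score comparing the conversion rates of variants A and B."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Hypothetical test: variant B converts at 6% vs. 5% for A, 10,000 visitors each
z = ab_test_z(500, 10_000, 600, 10_000)
print(round(z, 2))  # |z| > 1.96 means significance at the 5% level
```

A scenario is only “successful” in the sense used above once a difference of this kind clears the significance threshold; smaller samples or smaller uplifts often do not.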

So far, however, the interaction between SEO and CRO measures has not been fully explored. In a recent study, Will Critchlow, founder of the prestigious British SEM agency Distilled, devoted himself to precisely this question: in which situations can SEO and CRO come into conflict and, in the worst case, even cancel out each other’s positive results? As in most cases, the answer is complicated. Using the example of a website that is not described in detail, Critchlow explains how the apparent success of tested and implemented conversion measures has to be put into perspective once the potential resulting loss of organic visibility is taken into account.

This effect is non-linear and can vary depending on the SEO or CRO measures in question. Only after many test cycles does it become clear that both sets of measures directly influence each other, that an overall positive result is only achieved if the interactions of each action are assessed, and that only changes with a positive impact on both CRO and SEO should ultimately be published.

CRO results are immediately visible in the test environment; the impact of SEO-driven changes, however, is only reflected in search after a certain amount of time has passed. Organic traffic and conversion development must therefore be compared even after the roll-out of design or content updates, and the results constantly and retrospectively reviewed, in order to determine the right balance for your own offering and the defined user group.

In an earlier post, search guru Rand Fishkin made clear that SEO and CRO need not have the same impact on every website, and that conversion and search engine experts need to work in harmony. In reality, however, this is often not the case: in Fishkin’s experience, conversion experts are indeed concerned about the impact of their work on search engine visibility, while SEO experts in many cases ignore the potential negative impact of their implementations on site usability – which is, of course, crucial to conversion. This is where we, as SEOs, should take a good look at our own work. After all, as ever in life, the greatest success is achieved through the mindfulness, communication and cooperation of everybody involved in the complex business of operating a commercial website. One-sided points of view and inflexible hierarchies stand in the way of comprehensive product understanding and therefore of commercial success. Agencies in particular are called upon to interlink their services in the areas of SEO, CRO and analytics in the interests of the customer, and to provide comprehensive experience and expertise in integrating these areas.

Stars in the ring: why do people click search ads?

The English language has a wonderful term that is almost untranslatable: one-trick pony. The search marketing expert learns what it means in context during a compulsory quarterly appointment – the presentation of the quarterly results of Google’s parent company Alphabet Inc. During this short and snappy meeting, the expert learns, alongside all sorts of insights into international accounting and tax tricks (the company’s effective tax rate currently stands at an affordable eight percent), that Alphabet posted total revenues of approximately USD 33 billion in the third quarter of 2018. No surprise for a conglomerate with around 90,000 employees and twelve attractive subsidiaries offering everything from energy and information networks to biotechnology and genetic engineering, consumer electronics, self-driving cars and eternal life – in short, everything the modern world needs to survive that little bit longer.

At second glance, however, it is clear that 88 percent of this total turnover – just under USD 29 billion – is accounted for by the advertising business of Google’s search engine alone. On the one hand, this is no secret, and it saves Alphabet from the break-up of powerful corporations by government anti-trust authorities that forever looms in the Anglo-Saxon world. On the other hand, this is exactly the definition of a one-trick pony: the circus horse that can amaze its audience with a single spectacular trick in the limelight of the ring. Once it can no longer perform its trick, the pony becomes worthless to the circus, which sends it off to the butcher.

Google’s trick, the auctioning of highly relevant advertisements on its web search site, is called Search Engine Advertising (SEA) and it has been working perfectly for over 20 years. Of course, the company has relentlessly refined and expanded its only truly profitable business model, successfully bringing it into the age of video and mobile, and will continue to perfect it with the help of artificial intelligence in the future. However, no click means no business. So, what is the intention of billions of users who, instead of clicking on supposedly ad-free, organic search results, are clicking on paid Google text ads with clearly commercial intent?

Not an insignificant question, since consumers of classic advertising are inherently sceptical of the commercial messaging upon which the overall success of seemingly mighty tech giants such as Alphabet ultimately depends. Although users around the world do eventually click, traffic flows and the trick works, astonishingly few studies have examined these intentions. People like to claim that Google users don’t distinguish between organic and paid search results and simply click on the top, paid positions – a claim that Google once again vehemently contradicted just a few days ago.

A recent study by market research firm Clutch sheds a little more light on this area of confusion. A poll of 506 people who had clicked on a Google text ad initially found that 77 percent of respondents were actually aware that they were following an advertising message. This finding supports Google’s data – and indeed, Mountain View’s ads are clearly labelled and more recognisable as advertising than, for instance, ad placements for Amazon’s AMS services on the marketplace giant’s search results and product pages. In addition, around 75 percent of respondents said that text ads simplified their search, provided they offered a direct answer to their search query.

So far, this behaviour is not very different from organic search behaviour. However, if you bring brand awareness into focus, it becomes clear that the true magic of text ads lies in the combination of brand and search query. More than a quarter of respondents said they clicked on the ad because of the brand, demonstrating the decisive role that trust plays here. So, as with any other classic form of advertising, Google’s core business relies on taking the customer seriously. Pure marketing messages that ignore this fact are guaranteed zero success in the relentless environment of direct competition on search results pages.

As designers we are always on the lookout for something new and unique: something that stands out from the crowd and grabs our attention. Constantly on the hunt for the “face” or the look with the potential to take brands into new territory. Fairly often, even if subconsciously, we return to what we already know. The following three blasts from the past will see a renaissance in 2019:

1. Great-grandma’s cake stand at the international design festivals

Posters for the tenth Adobe 99U conference in New York did not stand out at first glance, but caught the attention nonetheless thanks to an interesting hint of retro: two-tone, two-dimensional colour gradients transposed onto the simplest geometric shapes, with a note of off-white as the background.

THE YOUNG ONES festival, taking place this spring, uses a similar low-key aesthetic. Even though the overall impression is more figurative, the look is still dominated by merging graphic colour gradients. In this case too, the viewer is left with a sense of intimacy.

If you start delving into design history, the origins of this “new visual idea” can be found somewhere between Art Deco and Functionalism – so around one hundred years ago. At the time, crockery with astonishingly forward-looking designs could be seen on many middle-class tables. Abstract and geometric spray patterns created using stencils adorned many manufacturers’ ceramics. Bright primary colours and incremental colour gradients characterised the appearance of emerging mass production. What once acted as a reflection of the leap into uncompromising modernity now, once again, appears very visually attractive because it comes across as slightly unfinished, rough and therefore artisanal and authentic.

2. Accidental Dadaism through responsive web design

It is no secret that, from an aesthetic perspective, responsive web design (RWD) can come across as a cost-effective, slightly lazy compromise between desktop and mobile variants. Nonetheless, technical use of this “forced marriage” can also create some attractive visual outcomes, as for example on the website of fannymyard design. This new style, which some designers are now producing artificially, tends to originate in sources of error and the straitjacket of RWD’s limited technical options. The forced wrapping of text and image elements at set breakpoints during programming creates accidental, collage-like mixes in which headlines may stick right to the edge of the image, as on Julie Cristobal’s website, or, as in a recent illustration by W. Stempler, overlap with an image, but only halfway.

What was until recently an absolute no-no in design terms is now the new design paradigm for various campaigns and corporate designs. Here too, there are parallels with a wild period of the last century: Dadaism. The two most important characteristics of this revolutionary artistic movement were nonsense and chance, as seen in pictures influenced by Dadaism.

3. Brutalist design – the power of the ugly

While web design is still getting used to this new self-determination, poster design has revelled in “bad taste” since the 1950s.

However, the message remains unchanged: fighting convention to take a stand against the arbitrary, the pleasant and the interchangeable. In the digital age, brutalism looks very different from what we are used to seeing. Design’s new extremism is rewarded with the most important currency in our age of short attention spans: the viewer’s undivided attention. Stylistically, the look is characterised by complete minimalism, plainness, flashy colours straight out of classic hex colour codes, non-contemporary use of fonts and a closeness to raw-code aesthetics, as seen at modeselektor, Vicky Boyd, rutgerklamer and Officeus.

Typographical harmony, large-scale images, micro-interactions, carefully crafted navigational approaches or clear hierarchies: all the rules that we were taught to promote good usability fly out the window. You can think what you like about brutalism, but even its critics agree on one thing: compared to the website designs that we are used to, this design trend offers exceptionally short loading times. And that means conversion!