Market Brew

September 3, 2014

Automating Online Marketing: Is it Possible?

Market Brew’s CTO takes you through a new process that aims to automate much of what digital marketers do today.

The Evolution of an Idea

We started building the search engine and “optimization engine” in late 2007, out of necessity: we needed a way to find out why Google was ranking some of our sites, and our clients’ sites, better than others. In 2008, we launched a limited ALPHA release of our technology and gathered enough user feedback and proof of concept to push forward. We began building a much larger and more complex version, and in 2011 we launched our first BETA. More than 60,000 users later, we had a fully tested, robust, enterprise-grade platform, with many bells and whistles showing all kinds of amazing data.

But something was missing. We had all of this data, yet our clients were more confused than ever. It was evident that just having the data was not enough – this Big Data had to be actionable. In 2013, we began discussing what we could do with all of it. What happened next was magical.

We knew that we had significant competitive advantages in the following areas:

  1. Exposing Ranking Distances: The search engine’s results could determine not just the rankings for each search term, but also the overall query score and the individual scores that comprise it. We call these the “deltas”, and they allow us to figure out the “density” of a search engine results page. For instance, if webpage “A” is ranking #3 for “law”, how much farther does it need to go to reach #2? How much farther to #1? These answers are crucial first steps in automating online marketing (a minimal sketch follows this list).
  2. Simulating Optimization Scenarios: We could simulate all possible ranking changes for every webpage / keyword combination. Once we had a clear picture of the deltas between each ranking, we could simulate webpage “A” moving to #2 and then to #1. Each webpage / keyword simulation could be run millions of times, producing an expected ROI.
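
To make the “deltas” idea concrete, here is a minimal Python sketch; the query scores and URLs are made up for illustration and are not Market Brew’s actual data model:

```python
# A minimal sketch of the "deltas" idea: given the simulated query scores on
# one results page, compute how far a page must climb to overtake each rank
# above it. All names and values are illustrative assumptions.

def ranking_deltas(scores, page):
    """Return {target_rank: score_gap} for every rank above `page`.

    `scores` is a list of (url, query_score) pairs for one results page.
    """
    ranked = sorted(scores, key=lambda pair: pair[1], reverse=True)
    current = next(i for i, (url, _) in enumerate(ranked) if url == page)
    page_score = ranked[current][1]
    return {rank + 1: ranked[rank][1] - page_score for rank in range(current)}

# Webpage "A" ranks #3 for "law"; how far to #2, and how far to #1?
serp = [("site-x.com", 110.0), ("site-y.com", 88.5), ("webpage-a.com/law", 84.0)]
print(ranking_deltas(serp, "webpage-a.com/law"))
# {1: 26.0, 2: 4.5} -> +4.5 query-score points to reach #2, +26.0 to reach #1
```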

Taking Shape

First, we began running “what if” simulations for a given set of keywords and webpages.

Within each simulation, we first determined how efficient each ranking change would be. To measure efficiency, we created a metric for each simulated ranking change that showed how much additional traffic a particular webpage could gain versus how much it would cost (i.e., the required increase in query score) to gain that traffic. We named this metric “Reach Potential”.
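
A hedged sketch of how such a metric could be computed, building on the deltas above; the click-through rates and the exact ratio are illustrative assumptions, not Market Brew’s actual formula:

```python
# A sketch of the "Reach Potential" metric described above: traffic gained by
# a simulated ranking change, divided by its cost (the required increase in
# query score). CTR values and the ratio itself are assumptions.

CTR = {1: 0.30, 2: 0.15, 3: 0.10}  # rough click-through rates by rank (assumed)

def reach_potential(monthly_searches, current_rank, target_rank, score_delta):
    """Traffic gained per unit of query-score cost for one ranking change."""
    traffic_gain = monthly_searches * (CTR[target_rank] - CTR[current_rank])
    return traffic_gain / score_delta if score_delta else float("inf")

# For a 10,000-search/month keyword, using the deltas from the sketch above:
print(reach_potential(10_000, 3, 2, 4.5))   # ~111 extra visits per point
print(reach_potential(10_000, 3, 1, 26.0))  # ~77 extra visits per point
# Here #3 -> #2 is the more efficient (higher-ROI) move.
```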

It was extremely exciting to see this data, even in its raw form. We were starting to see real answers to exactly how much effort was needed to move up in the rankings: moving from #3 to #2 might be a very high-ROI move, whereas moving from #3 to #1 might be a very low-ROI move.

Expected Reach Potential

Once we had calculated all of the possible outcomes for a given keyword and webpage, we combined these simulations into a statistical expected value. In the patent, we called it the “Expected Reach Potential”, following the expected-value nomenclature used by statisticians and other professionals, including poker players at a casino.

Given enough hands, what is your statistically expected result at the end? In the product, we call this the “Optimization Score”. On the “Top Optimization Simulations” screen, each row represents a potential ranking change, driven by a set of optimizations that can be applied to that webpage. Example optimizations, dictated by the specific situation, might be changing the content or the link structure around that webpage.
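
To illustrate the expected-value idea, here is a toy Monte Carlo sketch in Python; the outcome probabilities and Reach Potential values are invented for illustration and stand in for the real ranking simulation:

```python
# A toy Monte Carlo sketch of the expected-value idea: average the Reach
# Potential over many simulated outcomes, just as a poker player averages
# results over many hands. The outcome model below is an assumption.

import random

def expected_reach_potential(simulate_once, runs=100_000):
    """Average the Reach Potential across `runs` simulated outcomes."""
    return sum(simulate_once() for _ in range(runs)) / runs

def toy_simulation():
    # Assumed outcome model: 60% of runs reach #2 (RP ~111), 10% reach #1
    # (RP ~77), and 30% produce no ranking change (RP 0).
    roll = random.random()
    if roll < 0.60:
        return 111.0
    if roll < 0.70:
        return 77.0
    return 0.0

print(expected_reach_potential(toy_simulation))
# ~74.3 -> the webpage/keyword pair's "Optimization Score" in this toy model
```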

When we sorted on this Optimization Score, we could suddenly see every high-ROI keyword / webpage simulation that the Google Simulator had calculated. Effectively, we were seeing WHICH simulations would gain the most traffic with the least amount of work.
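
A minimal sketch of that sorting step, with illustrative field names standing in for the actual “Top Optimization Simulations” rows:

```python
# Each row is one simulated ranking change; sorting by Optimization Score
# surfaces the changes that gain the most traffic for the least work.
# Field names and values are illustrative assumptions.

simulations = [
    {"keyword": "law", "webpage": "/practice-areas", "optimization_score": 74.3},
    {"keyword": "attorney", "webpage": "/home", "optimization_score": 12.1},
    {"keyword": "law firm", "webpage": "/about-us", "optimization_score": 41.8},
]

for row in sorted(simulations, key=lambda r: r["optimization_score"], reverse=True):
    print(f'{row["optimization_score"]:6.1f}  {row["keyword"]:10}  {row["webpage"]}')
```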

But how would we know which types of optimizations to implement? As it happened, we had already solved that in the BETA. Through the webpage scorecards and link flow® distribution screens, we already had an arsenal of weapons to unleash on any specific keyword / webpage combination. We just needed to know where to start, and finally we had those answers.

To recap, at this point, we could now show users precisely:

  • Which keyword / webpage to start optimizing first.
  • How to optimize that particular keyword / webpage.

Initial Results

After extensive testing, we rolled it out to a number of big clients. Within two months, the first results were out of this world: in one instance, we produced an estimated 25x ROI for a Fortune 200 company. The companies that embraced this new approach saw huge increases in traffic with relatively little work. We had suspected this in theory; now we had confirmation.

Customized Simulations

Some companies had extensive lists of not just their top keywords, but also their “revenue-producing” keywords. Similarly, they had lists of their webpages and each webpage’s conversion rate. So we had to come up with a new way to harness the power of the engine, but with their data.

The solution was to let users override both the keyword traffic values and the webpage conversion values. Want to use your own traffic estimates? Override them. Certain webpages produce more value? Override them. Users could now customize the entire simulation, so that their top revenue-producing keywords and webpages were flagged inside those millions of simulations.
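
A hedged sketch of how such overrides could work, assuming a simple merge of user-supplied values over engine defaults; all names and numbers are illustrative:

```python
# Assumed override mechanism: user-supplied keyword traffic estimates and
# webpage conversion rates replace the engine's defaults inside the
# simulation. The dictionary-merge approach is an illustrative assumption.

DEFAULT_TRAFFIC = {"law": 10_000, "attorney": 8_000}       # searches/month
DEFAULT_CONVERSION = {"/practice-areas": 0.02, "/home": 0.01}

def apply_overrides(defaults, overrides):
    """Return a copy of `defaults` with user-supplied values taking priority."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

# A client flags "attorney" as a revenue-producing keyword and knows that
# /practice-areas actually converts at 5%:
traffic = apply_overrides(DEFAULT_TRAFFIC, {"attorney": 15_000})
conversion = apply_overrides(DEFAULT_CONVERSION, {"/practice-areas": 0.05})

print(traffic["attorney"], conversion["/practice-areas"])  # 15000 0.05
```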

Summary

We have filed a patent for this new way of using big data to automate online marketing, and our whitepaper on Automatic SEO dives into a bit more technical detail (with equations) for those of you who want more. We hope to share many more amazing stories about this new approach to SEO in the future; for our team, this is another step towards creating true transparency in search.