From The North Star To The Right Bid

Attribution models and the KPIs we optimise towards keep getting more complex. What is the right bid? How do I set the right budget for my campaigns? Where and when do I move budget? Ideally we want to spend money where our sales are coming from, but where our sales are coming from depends on the attribution model. With more and more attribution models becoming available, and customer journeys not getting any simpler, the seemingly simple questions we ask ourselves as marketers on a day-to-day basis are becoming less and less clear. And to be fair, these questions can be daunting at times. It feels like you are picking an entire ‘belief system’ when choosing an attribution model. Much like picking the right type of pasta for your sauce, it takes a lot of thinking, pondering, and internal philosophical discussion to identify which attribution model and KPIs are best for optimising your operation.

In this article we will show how to reverse engineer the right bidding system for your campaigns based on the state of your operation, and how this determines which attribution model you should use for your bids and budgets.

#1: Pick the right North Star

The first element to take into consideration is picking the right North Star. “North Star” is not only Kanye West’s daughter, but also the term used to define the overall goal of a marketing operation. So “picking the right North Star” really means defining our goal and ambition, and the metric we want to use to optimise our business. There are a few options.

The most common “North Star” is Cost per Acquisition (CPA or CAC). It is easy to implement because it is simply the cost sustained to acquire a new customer. It is, however, a very crude metric that ignores the quality of the acquisition. It is mostly used by businesses focused on lead generation (imagine generating leads for an insurance company or for credit card applications). Obviously, analysing your business in terms of the cost of each lead opens up a big challenge: how do you assess the quality of each lead? For instance, if we are measuring the cost of each credit card application, we don’t actually know how many of these applications will be successful.

A more sophisticated step is therefore Cost per Marketing Qualified Lead (CPMQL). Here the KPI is the cost of a lead that has been deemed good enough. Leads from people who apply for a credit card go through an acquisition process: the lead is acquired, passed to the sales team, who assess whether it is good enough, and that information is injected back into the CRM. It is a process of checking that the lead is a real human being with real contact details, and that they match what the marketing team wants to target. Returning to the credit card example, being a business located in the UK with a certain credit score might be the parameters that define whether a lead is qualified. This is a decent level of granularity, but it relies on the assumption that qualified leads will convert into paying customers at a certain rate, which is obviously not true in 100% of cases.

An even more granular KPI is Cost Per Paying Customer (CPPA), sometimes referred to as CPAC (Cost Per Acquired Customer). This is a really good level because we are analysing the actual value of a paying customer: imagine a credit card or an insurance policy that has actually been issued. The problem with this approach is that the number of paying customers will be significantly lower than the number of MQLs or leads, and there may be a time delay between the lead arriving, the lead being qualified, and the lead becoming a customer, not to mention the challenge of getting all of this information into one place. This makes it difficult to make the right decision when it comes to bidding correctly and allocating budget. We can start to appreciate that, as with pretty much everything in life, there are going to be tradeoffs. For example, if something is tasty, it either makes you fat or it’s illegal! Similarly, the deeper you go into a sophisticated metric that captures the complexity of user behaviour, the lower the volume of data you will have, which makes it harder to make the decisions that optimise your campaigns. In other words, you can pick a very rough metric that gives you plenty of data, or a more sophisticated metric that gives you limited data and is harder to use for optimisation. For a credit card company, CPMQL might be the right balance between volume (higher than CPPA) and quality (deeper than CPA).
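
To make the volume/quality tradeoff concrete, here is a minimal sketch with made-up funnel numbers (every count below is an illustrative assumption, not real data):

```python
# Hypothetical one-month funnel; all numbers are made up for illustration.
spend = 50_000          # total ad spend
leads = 2_000           # raw credit card applications (CPA level)
mqls = 600              # applications the sales team marked as qualified (CPMQL level)
paying_customers = 150  # cards actually issued (CPPA level)

cpa = spend / leads               # crude metric, lots of data points
cpmql = spend / mqls              # deeper metric, fewer data points
cppa = spend / paying_customers   # deepest metric, fewest data points

print(f"CPA:   {cpa:,.2f} over {leads:,} observations")
print(f"CPMQL: {cpmql:,.2f} over {mqls:,} observations")
print(f"CPPA:  {cppa:,.2f} over {paying_customers:,} observations")
# The deeper the metric, the fewer observations you have left to optimise on.
```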

Next is ROAS (Return On Advertising Spend), i.e. revenue divided by ad spend. This provides a highly accurate level of optimisation, and most e-commerce businesses optimise by ROAS. However, there are a couple of problems: first, revenue can be scattered across multiple elements of a marketing operation (e.g. multiple keywords in a campaign); and secondly, outliers can bias the judgement of a specific campaign. We might also find that cheaper items sell better online, making ROAS look better but potentially reducing our bottom line if those items have high postage costs and low margins that have not been taken into account. In non-e-commerce businesses, ROAS is not trivial to implement, as you need to take offline conversions into consideration. So despite ROAS being the most used KPI in e-commerce, it is definitely not immune from detracting factors.

We can go a step further and optimise by LTV (and particularly by LTV/CAC). Here we are optimising by a smarter version of ROAS: while ROAS only captures revenue against advertising spend, LTV captures the likelihood that a customer will come back and buy again in the future. Further, with LTV we are not simply looking at revenue but also at margins; we can see what margin we made on the first transaction. These considerations make LTV a more accurate level of optimisation. The cons are the same as for ROAS, plus one more assumption: the likelihood of customers coming back is based on an average, so it will not always be a perfectly clear picture, and it might take a long time to even get a rough outline.
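
To see why LTV/CAC is the “smarter ROAS”, here is a minimal sketch with made-up numbers; the margin and repeat-purchase rate are assumptions you would estimate from your own CRM data:

```python
# Illustrative numbers only.
spend = 10_000
new_customers = 200
first_order_revenue = 30_000
gross_margin = 0.40            # assumed margin on revenue
expected_repeat_orders = 1.5   # assumed average extra orders per customer

roas = first_order_revenue / spend   # revenue-only view of performance

cac = spend / new_customers
avg_order_value = first_order_revenue / new_customers
# LTV folds in margin and the (averaged) likelihood of customers coming back.
ltv = avg_order_value * gross_margin * (1 + expected_repeat_orders)

print(f"ROAS:    {roas:.2f}")
print(f"CAC:     {cac:.2f}")
print(f"LTV:     {ltv:.2f}")
print(f"LTV/CAC: {ltv / cac:.2f}")  # above 1, a customer returns more margin than they cost
```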

To summarise, when it comes to picking the North Star we can simplify and divide business goals into two types:

E-commerce businesses will want a return on investment, and the best way to achieve that is optimising for LTV/CAC or ROAS;
Lead generation businesses should opt for CPL or CPA, CPMQL, or, even more sophisticated, CPPA or CAC.
The perfect metric does not exist; it depends on several aspects, such as your business goals, the nature of your business, and the sophistication of your digital operations. It also depends on whether you are just starting out or are more established. Once we have picked the right North Star, we can discuss attribution models.

 

#2: Attribution Models

Attribution models are an important topic in marketing today, and the best analogy to explain why is going out for drinks. Imagine you go out and start off with a couple of pints, then a glass of Prosecco, then another Prosecco, then a cocktail, and finally you have the brilliant idea to close the night with a shot of Jagermeister. If you end up feeling a bit drunk… which drink was it that actually made you drunk?

If we use a last click attribution model, then the Jagermeister would be entirely responsible for your hangover… but this probably wouldn’t be a fair interpretation of reality. So the attribution model conversation is about how you attribute the value of a conversion to the right elements in the user journey that pushed the user to buy a product, sign up to a newsletter, or whatever the business goal is.

Multi-event attribution aims to distribute the credit for a specific conversion across all the advertising touchpoints that influenced that conversion. There are a few models. The position-based model belongs to the so-called heuristic models (the off-the-shelf models offered by Google), and it is the lesser of several evils in most cases: it assigns value according to position in the chain, regardless of actual impact on the completion of a sale. It is heuristic because we simply assign credit based on position in the chain, using a top-down rule. This is a very simplistic way of handling the complexity of reality.
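
As a minimal sketch (assuming the common 40/20/40 split, where the first and last touchpoints each take 40% of the credit and the middle touchpoints share the remaining 20%):

```python
def position_based_credit(path, first=0.4, last=0.4):
    """Split one conversion's credit across an ordered path of channels.
    First and last touchpoints get fixed shares; the middle splits the rest."""
    if len(path) == 1:
        return {path[0]: 1.0}
    if len(path) == 2:
        return {path[0]: 0.5, path[1]: 0.5}
    middle_share = (1 - first - last) / (len(path) - 2)
    credit = {}
    for i, channel in enumerate(path):
        share = first if i == 0 else last if i == len(path) - 1 else middle_share
        credit[channel] = credit.get(channel, 0) + share
    return credit

shares = position_based_credit(["Display", "Social", "Search", "Direct"])
print({channel: round(s, 2) for channel, s in shares.items()})
# {'Display': 0.4, 'Social': 0.1, 'Search': 0.1, 'Direct': 0.4}
```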

The step beyond this is algorithmic attribution: a complete analysis of the available data to determine the relative impact of a given touchpoint on conversions. Rather than “shortcutting” with a blanket position rule, algorithmic attribution builds a custom model, with weightings for each touchpoint based on the specific dynamics of every single user journey. There are several algorithmic models; one we find quite accurate and reliable is the model based on Markov chains.
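
To make this concrete, here is a minimal, self-contained sketch of the removal-effect idea behind Markov attribution (toy journey data and a first-order chain; not a production implementation): estimate transition probabilities from observed paths, then “remove” each channel and measure how much the overall conversion probability drops.

```python
from collections import defaultdict

# Toy journeys: (ordered touchpoints, converted?). In practice you would
# export these from your analytics tool.
paths = [
    (["Search", "Facebook"], True),
    (["Facebook"], False),
    (["Display", "Search"], True),
    (["Display"], False),
    (["Search"], True),
    (["Facebook", "Search"], False),
]

# First-order transition probabilities, with absorbing CONVERSION/NULL states.
counts = defaultdict(lambda: defaultdict(int))
for touches, converted in paths:
    states = ["START"] + touches + (["CONVERSION"] if converted else ["NULL"])
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
probs = {s: {t: c / sum(outs.values()) for t, c in outs.items()}
         for s, outs in counts.items()}

def p_convert(probs, removed=None, iters=500):
    """P(reaching CONVERSION from START); transitions into `removed` are
    treated as lost (sent to NULL), which is the 'removal effect' idea."""
    states = set(probs) | {t for outs in probs.values() for t in outs}
    p = dict.fromkeys(states, 0.0)
    p["CONVERSION"] = 1.0
    for _ in range(iters):  # fixed-point iteration copes with loops in the graph
        for s, outs in probs.items():
            if s in ("CONVERSION", "NULL", removed):
                continue
            p[s] = sum(w * (0.0 if t == removed else p[t]) for t, w in outs.items())
    return p["START"]

base = p_convert(probs)
effects = {c: base - p_convert(probs, removed=c) for c in ["Search", "Facebook", "Display"]}
total = sum(effects.values())
for channel, effect in effects.items():
    print(f"{channel}: removal effect {effect / base:.0%}, attributed share {effect / total:.0%}")
# In this toy dataset, Search ends up with the largest attributed share.
```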

As well as the position of the touchpoint, we can also think about the type of touchpoint, especially if we are advertising outside of search. Do we value ads that drive conversions from clicks equally with ads that drive the same value of conversions from impressions? Again, this depends on your business model and on how and where you reach your customers. Typically, for search campaigns we can focus on click attribution, while for display, video, or social campaigns we would look at view attribution, but take it with a pinch of salt. We can start from the heuristic assumption that a click is more likely to influence a customer’s behaviour than a view, but we could test this assumption with an “uplift” test, where one group of people sees our ads and another group does not.

To pick the right attribution model, brands need to think about how sophisticated their operation is. Last click, despite being widely used, is one of the worst solutions you can adopt. At the very least, everyone should consider the position-based model, and should go into Google Analytics at least once per quarter, open the Model Comparison Tool, and compare last click vs position-based vs linear vs first click: you will see some interesting differences. Position-based is preferable to last click, but it is still a very simplified view of life; more sophisticated operations should move beyond heuristics towards algorithmic models, and Markov is definitely one of the best options out there. Like our friend wondering which drink caused his hangover, the right attribution model will ensure each drink is given the appropriate credit, and thus help us decide whether next time we should start with Jagermeister shots and finish with beers, or whether it’s best to just skip the Prosecco.

#3: Pick the right Bidding Model

The first step was picking the right North Star (i.e. the goal we want to optimise for); then we analysed the best attribution model (i.e. the system we use to attribute that value across our marketing operations). Now we are ready to define the right bidding model.

When it comes to bidding models, the best analogy is how you drive a car. There are three possible approaches: a manual gearbox, an automatic gearbox, or a self-driving car (autopilot).

Bidding works the same way. A manual gearbox is a very resource-intensive way of driving: you are the one who has to change gear every single time, and the ride is comfortable for everyone only if you are a good driver. This is very similar to manual bidding: you bid by hand, keyword by keyword on search operations, or manually on the placements or ad assets on Facebook.

The second approach is automated bidding. Here you set up rules that define the bidding for you. You are still in charge of the bidding, because you define the rules your bidding system applies to each keyword, but you are not relying on Google.

The third approach is going entirely on autopilot and using Google’s automated solutions. So there are three ways you can bid: manual non-Google bidding, automated non-Google bidding, and Google bidding; which one is right also depends on the KPIs you are using. There are obviously pros and cons for each of these bidding models. No calorie-free chocolate can satisfy your tastebuds.

The upside of manual non-Google bidding is that it can be customised to account for disruptive events in the business (e.g. a change in seasonality, special discounts, weather conditions, etc.). The cons are that this model is not sustainable for a large account and is suitable only for small operations (it is very time intensive); moreover, it ignores audience-related information (Google knows more about what goes on inside an auction and has access to signals that are otherwise unavailable, e.g. user location and operating system).

A better, or at least different, approach is automated non-Google bidding. Here we still decide the bid, but through a set of rules. These rules add to the manual model and remain customisable in case of disruptive events. This model can also be used for larger accounts, as it is implemented programmatically through APIs or Google Ads scripts. However, we still lack the information on audiences.
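
As a minimal sketch of the rule-based idea (the thresholds, field names, and data below are illustrative assumptions, not a ready-made Google Ads integration), a nightly job might pull per-keyword performance and nudge bids towards a target CPA:

```python
# Hypothetical rule-based bid adjuster. In production, the performance rows
# would come from the Google Ads API or a Google Ads script report; here we
# hard-code a toy dataset, and the thresholds are illustrative assumptions.
TARGET_CPA = 40.0        # what we are willing to pay per conversion
MAX_STEP = 0.15          # rule: never move a bid more than 15% per run
MIN_CONVERSIONS = 10     # rule: don't act on keywords with too little data

keywords = [
    {"keyword": "credit card uk", "bid": 1.20, "cost": 800.0, "conversions": 25},
    {"keyword": "best credit card", "bid": 0.90, "cost": 600.0, "conversions": 8},
    {"keyword": "credit card apply", "bid": 1.50, "cost": 900.0, "conversions": 30},
]

for kw in keywords:
    if kw["conversions"] < MIN_CONVERSIONS:
        continue  # not enough data: leave the bid alone
    cpa = kw["cost"] / kw["conversions"]
    # Scale the bid by how far we are from the target CPA, capped per run.
    ratio = max(1 - MAX_STEP, min(1 + MAX_STEP, TARGET_CPA / cpa))
    new_bid = round(kw["bid"] * ratio, 2)
    print(f"{kw['keyword']}: CPA {cpa:.2f} -> bid {kw['bid']:.2f} => {new_bid:.2f}")
```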

The next model is Google Smart Bidding. It requires only a limited amount of data to be activated, and it knows the auction inside out, but it usually takes time to train, relies on historical series, and tends to ignore disruptive events. Imagine the current situation with COVID-19: it obviously creates problems for this type of bidding solution. Very often it is a full-on black box, and it is very difficult to predict whether it will bring positive or negative results.

When it comes to the bidding model, it is crucial to understand the size of your operations. If you are a large operation, there is no way you can bid manually; at the very least, you should use automated non-Google bidding. And if it is important to stay in the driver’s seat and you cannot sustain a black-box approach, then Google bidding is something to steer away from.

In a nutshell:

If we aren’t sure we are calculating the right bid for a keyword or spending the right amount on an ad, we don’t need to guess. We can follow this process to make sure that our small, day-to-day (often very granular) decisions are linked to our high-level, long-term business goals (i.e. making money).

First, make sure we set the right KPI. ROAS, CPA or whatever, we need to make sure our marketing North Star is navigating us towards what we actually care about.

Next, we need to find a usable attribution model. It won’t be perfect, but it won’t be worse than last click. This tells us which ads are helping us get to our business goals.

Finally, we can make sure that our bidding is helping us hit our KPIs based on the best possible attribution model.

Remember to review this over time. As you get more data, you learn more about your customers, can better predict their value, and can slowly start moving further down the funnel, building a more sophisticated growth engine. But perhaps most importantly, and never forget this: cheese never, ever, ever goes with fish.

Are your Facebook Ads adding value?

Here are 3 ways to measure the incremental impact of Facebook Ads in your Digital Marketing

When it comes to measuring impact from Facebook ads, making sense of data in a complex cross-channel and cross-device environment is not an easy task. Advertisers and media agencies alike have struggled to define the incremental impact of Facebook ads, but we’ve been able to assess the impact of Facebook ads on the business with a high degree of confidence.

Have you ever been faced with this classic conundrum? Facebook says you’ve made 50k in revenue from Facebook Ads, while Google Analytics says Facebook Ads have made 10k in revenue. Which one should we trust?

The reality is that we should consider both data points and bear in mind the differences.

Google Analytics is a powerful tool, but it has limitations when it comes to accurately assessing FB’s true value, meaning that this value is often underestimated… Why?

GA is cookie based! – This means that users are not automatically recognised when using different devices or browsers, so if a user deletes their cookies, GA won’t know that this is the same user!

Limited cross-device capabilities – Cross-device tracking only works for users who are simultaneously logged into their Google account in their current browser and have the ads personalisation feature turned on (the Google Signals feature), or who log in to the tracked website (the User-ID feature)

No visibility on ad impressions – Meaning that it is not possible to see post-view attribution.

On the other hand, FB tends to be too optimistic at times. Some of the differences from Google Analytics are:

The Facebook pixel doesn’t consider other channels – so it takes credit for every conversion where a Facebook ad was involved during the selected attribution window.

Post view conversions are considered – Facebook takes ad impressions into account for post-view attribution (users that see an ad on Facebook and then convert from other traffic sources, even without clicking on the ad);

Facebook has more cross-device measurement capability – Facebook’s measurement system is based on people as opposed to cookies, and the FB login is much more common than the Google login!

You cannot compare apples to oranges! Well, you can. But if you do… Don’t expect to see the same thing 🙂

By default, Facebook shows all conversions that take place within 28 days of clicking an ad and within 1 day of the initial ad impression (view). GA would show all conversions taking place within 90 days of clicking (by default), according to your chosen attribution window.

So how can we measure the incremental impact of Facebook ads effectively?

Here are 3 data driven ways you can estimate FB incrementality:

  1. Geo Treatment test
  2. Facebook Lift Analysis
  3. Markov cross channel analysis

 

1. Geo Treatment test

In medicine, the typical approach to testing a new drug is a Randomized Controlled Trial (RCT), where the new drug is given to some randomly selected patients (the treated group) and a placebo to the other patients (the control group).

If, after some time, we see there is a difference in the two groups and nothing else occurred to these patients, we can infer the causal effect of the new drug on a specific outcome.

This type of approach can be used to test the impact of Facebook as well. In our analogy with the new drug scenario, the patients are geographical locations (cities, DMAs, regions, countries, etc.) and the new drug is the introduction of Facebook advertising.

This means that in some locations we will show the Facebook ads and in the remaining ones we will not.

At this point, we can use a simple OLS regression to estimate the impact of introducing Facebook in our media mix:

y_i = α + δ·D_i + β·X_i + ε_i

Here y_i is the outcome of interest (ideally our conversions), D_i is a dummy variable that takes value 1 if the location was treated and 0 otherwise, and X_i refers to any variables you want to control for.

If the randomization was properly carried out, the δ coefficient should capture the so-called ATE (Average Treatment Effect), which represents the average effect of the introduction of Facebook Ads.
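
A minimal sketch of the estimation step (assuming a pandas/statsmodels setup; the data frame, column names, and numbers are all hypothetical stand-ins for your per-geo data):

```python
# Hypothetical per-city dataset: conversions after launch, a treatment dummy,
# and a pre-period baseline used as a control variable.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "conversions": [120, 95, 140, 80, 150, 100, 135, 90],
    "treated":     [1,   0,  1,   0,  1,   0,   1,   0],   # 1 = saw Facebook ads
    "pre_period":  [100, 90, 115, 85, 120, 95,  110, 88],  # baseline conversions
})

# conversions_i = alpha + delta * treated_i + beta * pre_period_i + error_i
model = smf.ols("conversions ~ treated + pre_period", data=df).fit()
print(model.summary().tables[1])
# The coefficient on `treated` estimates the ATE of introducing Facebook,
# valid only if the treated/control split was properly randomised.
```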

This is the “light” version of the theory; Google has some in-depth documentation on this, and an even more complex approach here, if you want to try it out!

A few things to watch out for if you want to implement this approach:

  • There is a city mismatch between FB data and GA data; pick your cities/towns wisely and make sure they are defined in the same way on both platforms!
  • This approach is not necessarily applicable for operations with only a few geographic units available for testing.
  • Ensure there aren’t any location-specific factors that would affect the significance of the test, like city-specific promotions, physical events, or offline promos.
  • Seasonality interference: try to run the test over a period of relatively flat seasonality so that your results are not skewed.
  • The Facebook algorithm performs best with bigger audiences, so be careful not to over-segment the cities and hinder the performance of campaigns.
  • In the pre-launch phase, double check that your randomization is balanced in terms of characteristics that could affect your experiment (see the sketch after this list). For example, if the cities you decided to treat happened to be the wealthiest ones, you could inadvertently draw incorrect conclusions because of this.
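
A minimal sketch of that pre-launch balance check (made-up pre-period numbers; scipy’s two-sample t-test is one reasonable way to compare the groups):

```python
# Compare a pre-period characteristic between treated and control cities.
# A large, significant difference suggests the randomisation is unbalanced.
from scipy import stats

treated_pre = [100, 115, 120, 110]   # e.g. pre-launch weekly conversions per city
control_pre = [90, 85, 95, 88]

t_stat, p_value = stats.ttest_ind(treated_pre, control_pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Groups differ before the test even starts: re-randomise or control for this.")
```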

2. Facebook Lift Analysis

Similar to the Geo test outlined above, the Facebook Conversion Lift analysis compares a test group with a control group. But instead of grouping geos manually, lift tests use randomised groups of people: people in the test group are exposed to ads, while people in the control group are not. The estimated uplift is then measured by comparing the test group and the control group after enough data has been collected.
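
The readout itself is simple arithmetic; here is a minimal sketch with made-up group sizes and conversion counts, using a two-proportion z-test as one reasonable significance check:

```python
from math import sqrt
from statistics import NormalDist

# Made-up lift study results.
test_users, test_conversions = 200_000, 2_400        # exposed to ads
control_users, control_conversions = 200_000, 2_000  # held out

cr_test = test_conversions / test_users
cr_control = control_conversions / control_users
lift = (cr_test - cr_control) / cr_control
incremental = (cr_test - cr_control) * test_users  # conversions the ads added

# Two-proportion z-test on the difference in conversion rates.
p_pool = (test_conversions + control_conversions) / (test_users + control_users)
se = sqrt(p_pool * (1 - p_pool) * (1 / test_users + 1 / control_users))
z = (cr_test - cr_control) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Lift: {lift:.1%}, incremental conversions: {incremental:.0f}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```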


A few things to watch out for if you want to implement this approach:

  • You can’t be certain that there are no external factors affecting the results as you don’t have full control over the randomised groups while the experiment is running.
  • Testing can sometimes result in smaller-sized test audiences with comparatively higher CPMs.

 

3. Markov Cross Channel Analysis

Another approach is to use a slightly different set of mathematical tools, i.e. Markov attribution, to assess the impact of Facebook in cross-channel operations.

There is a nice presentation by Gianluca here that explains how the Markov model works. The main idea is to use Markov chains to see what happens to your conversion rate if you remove certain parts of your conversion paths.

In this case we can perform this analysis at channel level in order to assess the impact of Facebook across all of our media mix.

Here you can find a sample script implementing the Markov attribution model; the only ingredients are a CSV file containing conversion paths (you can export one from Google Analytics at the Multi-Channel Funnels level) and the R software to run the script.

A few things to watch out for if you want to implement this approach:

Google Analytics conversion paths only account for people who clicked on your ads directly and don’t consider post view!

To conclude: there is no right or wrong way to measure the impact of Facebook Ads on your digital marketing, and this is definitely not an exact science. However, considering these 3 different approaches will help you better gauge the incremental value that Facebook brings to the whole operation!

Booster Box wins Best Small PPC Agency of the Year

Just last week, on June 20th, we were standing on a stage accepting the Best Small PPC Agency of the Year award at the European Search Awards in Budapest. 

The European Search Awards, now in its 8th year, is an international competition that celebrates the very best in SEO, PPC, Digital and Content Marketing in Europe. Hundreds of entries from the leading search and digital agencies from across Europe were vying for top places. 

A few weeks earlier, Gianluca, our founding father (who art in the office early), secured second place in PPC Hero’s Most Influential Experts of 2019, so we were still riding that high when we went to the European Search Awards. 2019 has proven, and is still proving, to be a successful year for us.

What made us win the Best Small PPC Agency award? Well, that’s an extremely long story…but to sum it up, we’re an unconventional company in an unconventional location: sunny Tuscany. Amidst hills, olive trees, and Spritz, we’re helping large international brands with their PPC. And we’re growing. Fast. In less than three years we’ve built a 24 person company and we keep growing. 

“What’s your secret sauce?” you ask.

Mayonnaise and relish mixed together and slathered liberally between two sesame seed buns. 

Oh…

You mean, that secret sauce. 

Well, we always shoot for the moon, because even if we miss we’ll still be among the stars. And now, we are one of the stars. Yes, we’re small, but we try hard to punch above our weight.  Rather than focusing on local business, we are focusing on international clients with complex problems. Our talented team is incredibly focused and hard-working. Every day is a fresh opportunity to learn something new about familiar platforms, or to work on a tool that will automate a tedious process so we can focus on the highlights and insights that make our account management and performance shine. 

Who says that you have to be in foggy London or overly zig-zaggy San Francisco to build cutting edge marketing technology when the shores of the Mediterranean are calling and the sun is shining?