
Selection Effect — What It Is and Why It’s Important

by Jaydeep P. Digital Marketing | SEO
Marketing analysts should take note of Selection Effect. Unchecked, Selection Effect bias can lead to false positives and the wrong investments.

Selection Effect is a pervasive threat to the validity of any marketing analysis, so analysts should be acutely aware of this phenomenon to ensure they don’t overstate marketing impact.

This article is a brief discussion of Selection Effect and how I try to combat this type of bias in my day-to-day work in marketing analytics.

This is by no means a definitive guide; for a deeper treatment, see the academic literature on selection bias.

Starting With An Introduction

Selection Effect is the bias introduced when a methodology, respondent sample or analysis favors a specific subset of a target population, meaning the results do not reflect the target population as a whole.

Let’s dive into a few quick examples.

Example 1. You run an analysis of an SEM campaign. Your analysis measures the return on investment (ROI) of your paid search ads by tracking link click-throughs to purchase. However, the analysis does not account for those “link clickers” who would have purchased your product anyway. Not accounting for Selection Effect in this example means that your analysis gives undue credit to your SEM ads and the ROI is overstated.
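To make the overstatement concrete, here is a minimal sketch of the arithmetic. All numbers (spend, revenue per purchase, and the share of buyers who would have purchased anyway) are hypothetical, chosen only to illustrate the gap between naive and incremental ROI:

```python
# Hypothetical figures, for illustration only.
ad_spend = 10_000.0              # total SEM spend
revenue_per_purchase = 50.0
purchases_from_clickers = 400    # purchases attributed to ad click-throughs

# Naive ROI credits every click-through purchase to the ads.
naive_roi = (purchases_from_clickers * revenue_per_purchase - ad_spend) / ad_spend

# Suppose a holdout test shows 60% of those buyers would have
# purchased anyway (baseline conversions, not incremental ones).
baseline_rate = 0.60
incremental_purchases = purchases_from_clickers * (1 - baseline_rate)
incremental_roi = (incremental_purchases * revenue_per_purchase - ad_spend) / ad_spend

print(f"Naive ROI: {naive_roi:.0%}")              # 100%
print(f"Incremental ROI: {incremental_roi:.0%}")  # -20%
```

With these made-up numbers, the same campaign flips from a 100% ROI to a negative one once the “would have purchased anyway” buyers are stripped out.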

Example 2. You test overall brand awareness of your health food products and decide to collect data via in-person interviews at gyms and health stores. In this example, the data is biased because your methodology targets people who frequent health-related venues and are therefore likely predisposed to health food products. This will likely overstate the overall brand awareness of your health food products.
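A quick simulation shows how large this sampling distortion can be. All rates here are hypothetical assumptions (share of the population who frequent health venues, and awareness within each group), picked only to illustrate the mechanism:

```python
import random

random.seed(0)

# Hypothetical rates, for illustration only.
GYM_SHARE = 0.20      # fraction of the population frequenting health venues
AWARE_GYM = 0.45      # brand awareness among that group
AWARE_OTHER = 0.10    # brand awareness among everyone else

def survey(n, gyms_only):
    """Simulated awareness survey. gyms_only=True mimics interviewing
    exclusively at gyms and health stores (the biased methodology)."""
    aware = 0
    for _ in range(n):
        gym_goer = True if gyms_only else random.random() < GYM_SHARE
        p = AWARE_GYM if gym_goer else AWARE_OTHER
        aware += random.random() < p
    return aware / n

biased = survey(20_000, gyms_only=True)
representative = survey(20_000, gyms_only=False)
print(f"Biased (gyms only): {biased:.1%}")        # around 45%
print(f"Representative:     {representative:.1%}")  # around 17%
```

Under these assumptions the gym-only sample roughly triples the measured awareness, even though nothing about the questionnaire itself is wrong; the bias lives entirely in who gets asked.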

With a very small leap, Example 1 shows how easy it would be for any attribution algorithm to give undue credit to SEM ads when Selection Effect is ignored.

And Example 2 highlights the dangers of experiments that aren’t carefully interrogated for possible biases.

Using these two examples, it’s easy to imagine how ignoring Selection Effects can quickly result in error-prone results that lead to a dark spiral of poor investment recommendations and buckets of wasted marketing investment. No one wants that, of course.

Ways to Minimize Selection Effect

Selection Effect is an always-present challenge in marketing analytics. This is partly due to the nature of the work and partly due to organizational biases that favor cherry-picking analysis techniques, fast-tracked experimentation and positive results.

Here is a mix of ways I try to minimize Selection Effect in my own practices:

  • Randomized control trials (RCTs) — RCTs are my gold standard for experimentation and measuring incremental marketing activities. Good experimentation is at the heart of minimizing Selection Effect and RCTs among a target population are one of the best ways of getting representative results. RCTs aren’t always possible in marketing due to the complex nature of some media strategies and the inability to control impressions. That said, I always start with RCTs as a best-practice.
  • Validating learnings across multiple experiments — As long as experiments are well-designed, validating learnings across multiple experiments is an excellent way to build confidence in a specific piece of evidence and minimize unexpected Selection Effects.
  • Document measurement design, goals and analysis type before starting — Defining the measurement design and analysis technique ahead of time helps minimize any Selection Effect as a result of the analysis type or segmentation. Selection Effect can creep in at different stages in the analysis process so it’s important to be diligent throughout.
  • Standardized templates, documented audience definitions and formal reporting processes — In addition to defining measurement design ahead of time, standardized templates and reporting processes also help minimize biases throughout the analysis. This works by ensuring that there are consistent methods, formats and audience definitions, limiting the analyst’s ability to introduce Selection Effect bias when segmenting the audience or displaying results that highlight a certain subset of the target population.
  • Randomized variability of the media mix — Randomized variability is the practice of introducing significant variability in the number of impressions delivered via media channels in a given timeframe. This is specific to instances where RCTs aren’t an option due to a complex media mix, so marketing impact needs to be modeled. Implementing random, high-variability media delivery is an experimental lever used to manipulate the independent variable (e.g. impressions) and assess any impact on the dependent variable (e.g. purchases). This unnatural randomization is one way of reducing Selection Effect by inserting an element of randomized control (from the RCT playbook) into the campaign even when the overall experiment isn’t controllable.
  • Peer reviews — Peer reviews are another way of checking the validity of some evidence. We can often get caught up in our own pieces of work that it takes an outside opinion to notice any unchecked Selection Effects. Peer reviews of measurement plans and of analysis findings help limit any unintentional Selection Effects.
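The RCT idea at the top of the list can be sketched in a few lines: randomly assign the target population to a test group (exposed to ads) and a control holdout, then compare conversion rates. The base rate and lift below are hypothetical, and real ad effects are not observed directly as they are in this toy simulation:

```python
import random

random.seed(7)

# Hypothetical rates, for illustration only.
BASE_RATE = 0.040   # conversion rate without ads
AD_LIFT = 0.010     # true incremental effect of the ads

# Random assignment is what protects against Selection Effect:
# both groups are drawn from the same target population.
test, control = [], []
for user in range(100_000):
    (test if random.random() < 0.5 else control).append(user)

def conversions(group, rate):
    """Simulate whether each member of the group converts."""
    return sum(random.random() < rate for _ in group)

test_rate = conversions(test, BASE_RATE + AD_LIFT) / len(test)
control_rate = conversions(control, BASE_RATE) / len(control)

# The measured lift recovers the (hypothetical) true effect,
# roughly one percentage point here.
print(f"Measured lift: {test_rate - control_rate:.3%}")
```

Because assignment is random rather than self-selected, the difference between the two groups can be credited to the ads, which is exactly the counterfactual that Example 1’s click-through analysis was missing.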

Bias is Like Death And Taxes

At the end of the day, bias is ever-present and Selection Effect is no different. Anything and everything created by humans is biased in one way or another. The best we can do is be aware of the different biases and implement measures that limit them as much as possible.

Selection Effect is particularly relevant for those of us in marketing analytics. And, as a result, should be high up on our list of biases to track and minimize. In my mind, the best way to limit the possibility of Selection Effect at all stages in the analysis workflow is via a combination of RCTs, standardized processes and validated learnings.

Originally Posted on Medium.com

Created on Mar 3rd 2020.
