The Statistics of Persuasion

The Art of Persuasion is the title of more than one book in the self-help genre, books that have spawned blogs, podcasts, speaking gigs and more. But the science of persuasion is actually of more interest, because it produces useful rules that can be studied and deployed.

Marketers and politicians have long been enthusiastic users of the fruits of behavioral research, some of which I noted in last week’s review of Daniel Kahneman’s book Thinking, Fast and Slow. Illustrating the anchoring effect, he described how grocery stores found that sales of canned soup per customer doubled when customers saw a sign advising of a 12-can limit (compared to when the sign specified no limit). The number 12, though it had no real bearing on consumers’ needs, served to anchor their behavior. Political leaders take advantage of the same effect when they make statements that are demonstrably false. Even an obviously false statement can work its way into your thinking at a subconscious level, no matter how strongly you would reject it at an analytical level. Joseph Goebbels is often credited with the observation, “If you tell a lie big enough and keep repeating it, people will eventually come to believe it.”

But what about the statistics of persuasion? Several techniques are available.

  • Predictive models
  • Uplift models
  • Multi-arm bandits

Predictive Models

Using methods that date back to the pre-internet days of direct (paper) marketing, predictive models can help identify who, in a large list, is most likely to respond to an offer. Predictive models start from existing data in which you have a set of predictor variables for each person and also know whether that person responded to an offer. With that information, you can train a statistical or machine learning model to classify new data, where it is unknown whether a person will respond. You can then rank people by their propensity (probability) to respond and send the offer only to those at the top of the list.
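Here is a minimal sketch of that workflow in Python, using logistic regression as the classifier. The column names and toy data are hypothetical stand-ins for a real campaign history:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Past campaign: predictors plus a known 0/1 response (toy values).
    history = pd.DataFrame({
        "age":             [34, 51, 28, 45, 39, 62, 23, 48],
        "prior_purchases": [2, 7, 0, 4, 1, 9, 0, 5],
        "responded":       [0, 1, 0, 1, 0, 1, 0, 1],
    })
    # New people for whom the response is unknown.
    prospects = pd.DataFrame({
        "age":             [30, 55, 41],
        "prior_purchases": [1, 8, 3],
    })

    predictors = ["age", "prior_purchases"]
    model = LogisticRegression().fit(history[predictors], history["responded"])

    # Rank prospects by predicted probability of responding.
    prospects["propensity"] = model.predict_proba(prospects[predictors])[:, 1]
    ranked = prospects.sort_values("propensity", ascending=False)
    print(ranked)  # send the offer only to the names at the top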

Uplift Models

Uplift models take predictive models a step further – they help you decide not just who should get an offer, but also what offer they should get. It’s a three-step process:

First, formulate two competing offers, A and B, that you want to test. Take a sample of your customer list and split it randomly in two. Send offer A to one half of the list, send offer B to the other half, and record whether each person responds.

Next, fit a predictive model in which offer version is one of the predictors; you’ll get a predicted probability of response for each person.

Finally, for a new customer, you can run the model twice – once with A as the offer and once with B – and see which does better (a sketch follows the list below). The improvement you get with one offer versus the other is the uplift. On a customer-by-customer basis, the model’s results can tell you:

  1. Which offer, A or B, gives the highest probability of response, and
  2. Whether that probability is high enough to justify sending the offer
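A minimal sketch of the scoring step, continuing in the same hypothetical vein: the offer version enters the model as a 0/1 predictor, and each new customer is scored twice, once as if sent A and once as if sent B:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Results of the randomized test mailing (toy values; offer 0 = A, 1 = B).
    test = pd.DataFrame({
        "age":       [34, 51, 28, 45, 39, 62, 23, 48],
        "offer":     [0, 1, 0, 1, 1, 0, 1, 0],
        "responded": [0, 1, 0, 1, 1, 1, 0, 0],
    })
    model = LogisticRegression().fit(test[["age", "offer"]], test["responded"])

    new_customers = pd.DataFrame({"age": [30, 55, 41]})

    # Score each customer twice: once with offer A, once with offer B.
    p_a = model.predict_proba(new_customers.assign(offer=0)[["age", "offer"]])[:, 1]
    p_b = model.predict_proba(new_customers.assign(offer=1)[["age", "offer"]])[:, 1]

    new_customers["uplift"] = p_b - p_a   # B's gain (or loss) over A, per customer
    new_customers["best_offer"] = ["B" if u > 0 else "A" for u in new_customers["uplift"]]
    print(new_customers)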

Multi-arm Bandits

An A-B test in statistics, like the one used in the uplift model, is fairly simple – an experiment with two treatments applied to two groups. You wait until the experiment is complete; then, if a meaningful difference arises between A and B, you make a decision about whether to go with the better treatment. Often, if you are contemplating a move away from an established approach, you will be cautious, because change can be costly. The traditional statistical tool here is a hypothesis test, which determines how likely it is that the improvement you saw could arise if there were really no difference between the treatments.
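As an illustration, here is that traditional test applied to made-up counts, using a chi-square test from scipy; a small p-value says the observed gap would be surprising if the two treatments were really equivalent:

    from scipy.stats import chi2_contingency

    # Rows are treatments A and B; columns are responded / did not respond.
    table = [[30, 970],   # A: 30 of 1,000 responded (3.0%)
             [45, 955]]   # B: 45 of 1,000 responded (4.5%)
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"p-value: {p_value:.3f}")  # chance of a gap this large under "no difference"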

Hypothesis tests can seem obscure, though, and with good reason. They don’t directly answer the question you are interested in: which approach should I choose? Plus, A-B tests are time-consuming, asking one question at a time and waiting for enough responses to accrue before moving on to a different question.

This is where multi-arm bandits come in. The name comes from the arms on slot machines, which players pull to execute a gamble. Bandit algorithms are used primarily in web experiments, where the response is some internet action (clicking, filling out a form, checking a box, buying something). They directly address the question of optimizing the experimenter’s decisions by framing the issue as one of continuous choices to explore or exploit. Simplifying:

  1. Offer two choices, A or B, randomly to a web user (A and B could differ by color, wording, images, etc.)
  2. Track responses
  3. Establish a criterion to decide when to abandon the poorly-performing option and go with the better one (e.g. at least 50 responses achieved, with one doing at least 10% better than the other)
  4. After a decision is made, repeat with new options.

This sets up an explore (keep gathering data on the two options) versus exploit (switch fully to the better performer) dichotomy. There is no single right answer on what the decision rule should be, but there are several algorithms whose properties have been studied, so it need not be an arbitrary choice.
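One of the simplest such algorithms is epsilon-greedy, which reserves a fixed fraction of traffic for exploring and spends the rest exploiting the current leader. A minimal simulation sketch, with invented click-through rates standing in for real web users:

    import random

    true_rates = {"A": 0.04, "B": 0.06}   # unknown in real life; used here to simulate users
    shows = {"A": 0, "B": 0}
    clicks = {"A": 0, "B": 0}
    epsilon = 0.10                         # fraction of traffic reserved for exploring

    def observed_rate(option):
        return clicks[option] / shows[option] if shows[option] else 0.0

    for _ in range(10_000):
        if random.random() < epsilon:              # explore: pick an arm at random
            option = random.choice(["A", "B"])
        else:                                      # exploit: pick the current best arm
            option = max(["A", "B"], key=observed_rate)
        shows[option] += 1
        if random.random() < true_rates[option]:   # simulated user response
            clicks[option] += 1

    for opt in ("A", "B"):
        print(opt, "shown", shows[opt], "times; observed rate", round(observed_rate(opt), 3))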

When implemented on the web, bandit algorithms can be expanded to include multiple options, not just two (hence the term multi-arm bandit), and can be part of a skein of ongoing automated tweaks and adjustments.

Microtargeting

Both uplift models and multi-arm bandits are part of the science of microtargeting – optimizing the message that gets shown to small, highly targeted groups, or even to individuals. Microtargeting has seen its greatest development in the political world. Ken Strasma, director of targeting for the presidential campaign of Barack Obama, was a pioneer in this area and teaches the Persuasion Analytics course here at Statistics.com. More on microtargeting in this article.