A classic machine learning task is to predict something’s class, usually binary – pictures as dogs or cats, insurance claims as fraud or not, etc. Often the goal is not a final classification, but an estimate of the probability of belonging to a class (propensity), so the cases can be ranked. A good example of a classification problem where the goal is ranking, rather than actual classification, is Google’s algorithm for ranking online ads according to their propensity to attract clicks (the click-through rate, or CTR). Since Google gets paid by the click, it wants to show ads that have the highest probability of generating clicks. The advertiser’s bid price is also a key factor in the equation; a higher bid price can offset a lower CTR.
Google’s program to sell ads that show when a user types in a search phrase is called AdWords. The general ad display algorithm works as follows (simplified):
1. Advertisers supply Google with candidate ads and metadata (ad copy, keywords, and the price they are willing to pay per click)
2. User types in a search query
3. A set of candidate ads is chosen based on comparing ad keywords to the user’s query
4. A predictive model is developed to estimate the probability that an ad will be clicked, based on predictor variables (features) that include:
- The terms used in the ad
- The terms used in the search query
- The ad metadata
- User browsing behavior
5. The candidate ads are ranked according to expected revenue – the fee per click (i.e. the bid set by the advertiser) times the estimated probability of click.
6. Ads’ positioning and frequency of display are determined by their rankings
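The core of steps 5 and 6 – ranking candidates by bid times estimated click probability – can be sketched as follows. The field names and example figures here are purely illustrative, not Google's actual data model.

```python
# Hypothetical sketch of the ranking step: expected revenue per
# impression = bid price (fee per click) * predicted click probability.
# Field names ("bid", "predicted_ctr") are illustrative assumptions.

def rank_ads(candidate_ads):
    """Sort candidate ads by expected revenue, highest first."""
    return sorted(
        candidate_ads,
        key=lambda ad: ad["bid"] * ad["predicted_ctr"],
        reverse=True,
    )

candidates = [
    {"id": "ad_a", "bid": 0.50, "predicted_ctr": 0.04},  # expected 0.020
    {"id": "ad_b", "bid": 1.20, "predicted_ctr": 0.01},  # expected 0.012
    {"id": "ad_c", "bid": 0.30, "predicted_ctr": 0.08},  # expected 0.024
]

ranked = rank_ads(candidates)
print([ad["id"] for ad in ranked])  # ['ad_c', 'ad_a', 'ad_b']
```

Note how ad_c, despite the lowest bid, ranks first: its high predicted CTR more than offsets the lower fee per click.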
Note that steps 3-6 must happen almost instantaneously, so speed is of the essence in the modeling process. The potential number of features used in the modeling is huge – each word or term in the language of the ad could be a predictor (either binary – presence or absence – or a count – frequency). Of course, most values will be zeros. Google’s 2013 research paper on its ad ranking algorithm (search for “McMahan, Ad Click Prediction: A View from the Trenches”) outlines computational tricks used to condense the calculations involved with such a huge, but sparse, matrix.
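One widely used trick for taming a vocabulary-sized feature space – a hedged illustration, not necessarily Google’s exact method – is the “hashing trick”: each term is hashed into one of a fixed number of buckets, and only the nonzero counts are ever stored.

```python
# Sparse term features via the hashing trick (illustrative sketch).
# Each term maps to one of N_BUCKETS slots; zeros are never stored,
# so memory scales with the terms actually present, not the vocabulary.
import hashlib

N_BUCKETS = 2 ** 20  # roughly a million feature slots

def sparse_features(terms):
    """Return a {bucket_index: count} dict for a list of terms."""
    feats = {}
    for term in terms:
        h = int(hashlib.md5(term.encode()).hexdigest(), 16) % N_BUCKETS
        feats[h] = feats.get(h, 0) + 1
    return feats

ad_terms = "cheap flights to paris cheap hotels".split()
feats = sparse_features(ad_terms)
print(len(feats))  # 5 nonzero entries out of ~1M possible slots
```

The trade-off is occasional hash collisions (two terms sharing a bucket), which in practice cost little predictive accuracy relative to the memory saved.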
One interesting point: in this age of sophisticated and often opaque machine learning algorithms and artificial intelligence, Google chose as the foundation of its model logistic regression, the classic statistical method that has been the workhorse of predictive modeling for decades, ever since the early days of automated credit scoring. It is faster to train and score than more complex machine learning algorithms, and it performs well on the ad ranking task.
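A minimal sketch of such a click model, assuming sparse `{feature_index: value}` examples and plain stochastic gradient descent – the production system described in the 2013 paper uses far more elaborate machinery (e.g. FTRL-Proximal updates), so this shows only the basic idea:

```python
# Minimal logistic regression for click prediction, trained with SGD
# on sparse examples. Illustrative only; feature indices are arbitrary.
import math
from collections import defaultdict

def predict(weights, x):
    """P(click) = sigmoid(w . x) for a sparse example x."""
    z = sum(weights[i] * v for i, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(weights, x, y, lr=0.1):
    """One gradient step on the log loss for a single (x, y) example."""
    err = predict(weights, x) - y  # gradient of log loss w.r.t. the score
    for i, v in x.items():
        weights[i] -= lr * err * v

weights = defaultdict(float)
# Toy data: feature 1 appears only in clicked ads, feature 2 only in
# unclicked ones; feature 0 appears in both.
data = [({0: 1, 1: 1}, 1), ({0: 1, 2: 1}, 0)] * 200
for x, y in data:
    sgd_step(weights, x, y)

print(round(predict(weights, {0: 1, 1: 1}), 2))  # high P(click)
print(round(predict(weights, {0: 1, 2: 1}), 2))  # low P(click)
```

Because both prediction and the update touch only the nonzero features of each example, cost per ad scales with the handful of terms present rather than the full feature space – one reason logistic regression suits a task where speed is paramount.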