Content Optimization with Multi-Armed Bandits & Python *Recorded Webinar*
Taught by Mr. Kris Wright

Aim of Course:

Whenever you have multiple items to choose from and are not sure which will result in the highest level of engagement or action, you have to make a choice. A/B testing can help, but there is a quicker, less wasteful way: the multi-armed bandit. Bandits are a form of reinforcement learning, a branch of machine learning that typically flies under the radar (unless you are trying to teach a robot to juggle while navigating a maze, or teach a computer program to play, and win, Pac-Man).

Reinforcement learning deals with trial and error, searching for the best action to take, and is classified as a type of online learning, in contrast with offline (batch) learning. In online learning you start with no knowledge and learn as you go, making sequential decisions that converge toward the optimal one. This makes bandits a natural fit for recommendation engines when you know nothing about your users (say, the first day your app is up and running, a situation known as a cold start). Bandits balance exploring what you don't know with exploiting what you do know, a trade-off commonly referred to as the exploration-exploitation dilemma.

Typical applications of multi-armed bandits include subject line testing for emails, button colors, page design/layout, and headline optimization. Anything you can test in the A/B fashion you can do with bandits, and bandits will quickly converge to your best option, saving you time and money and sparing your users irrelevant content.

You will learn different strategies for balancing exploration and exploitation in order to find the best action to take when you initially know nothing about the payoffs of the different actions. You will learn how to implement these algorithms, tune them, and incorporate them into various apps. In short, this webinar, made available by District Data Labs, will give you the tools to make optimal decisions in the face of uncertainty.
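To make the exploration-exploitation trade-off concrete, here is a minimal epsilon-greedy sketch in Python. This is not code from the webinar; the arm payoffs, epsilon value, and random seed are illustrative assumptions:

```python
import random

def epsilon_greedy(values, epsilon=0.1):
    """With probability epsilon, explore a random arm;
    otherwise exploit the arm with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda a: values[a])

def update(counts, values, arm, reward):
    """Incrementally update the running mean reward for the chosen arm."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

# Illustrative setup: three "headlines" with hidden click-through rates.
true_ctr = [0.05, 0.10, 0.20]
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]

random.seed(42)
for _ in range(5000):
    arm = epsilon_greedy(values, epsilon=0.1)
    click = 1.0 if random.random() < true_ctr[arm] else 0.0
    update(counts, values, arm, click)

# After enough trials the bandit concentrates its pulls on the best arm.
```

Swapping out the decision rule is the only change needed to try the other strategies the webinar covers (epsilon-decreasing, upper confidence bound, and so on).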

Course Program:
  • Visualization: Overview of visualization in Python.
  • Multi-Armed Bandits: Bandits are a way to maximize reward given uncertain payoffs.
  • Bandit Algorithms: we will cover greedy, epsilon-greedy, epsilon-decreasing, exponential, upper confidence bound (UCB), and Bayesian.
  • Data Types: static, restless, and volatile data will be covered.
      • Static rewards exist forever and their expected payoff never changes.
      • Restless rewards exist forever, but their expected payoff changes over time.
      • Volatile rewards exist for a certain period of time, then become unavailable.
  • Simulation: Simulate bandit systems and visualize the results.
  • Application 1: Command line application that uses bandits.
  • Application 2: Website that uses bandits. 
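As a flavor of the simulation segment, the sketch below simulates a static-reward (Bernoulli) bandit under the UCB1 rule; the click-through rates, horizon, and seed are made-up illustrations, and plotting is left out:

```python
import math
import random

def ucb1(counts, values, t):
    """UCB1: try each arm once, then pick the arm maximizing its
    estimated value plus an exploration bonus that shrinks as the
    arm accumulates pulls."""
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    return max(range(len(values)),
               key=lambda a: values[a] + math.sqrt(2.0 * math.log(t) / counts[a]))

def simulate(true_ctr, horizon, seed=0):
    """Run one bandit simulation; return total reward and per-arm pull counts."""
    rng = random.Random(seed)
    counts = [0] * len(true_ctr)
    values = [0.0] * len(true_ctr)
    total = 0.0
    for t in range(1, horizon + 1):
        arm = ucb1(counts, values, t)
        reward = 1.0 if rng.random() < true_ctr[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return total, counts

total, counts = simulate([0.05, 0.10, 0.20], horizon=5000)
# The pull counts should be heavily skewed toward the best arm.
```

Letting the entries of `true_ctr` drift over time would turn this into a restless-bandit simulation, and removing arms mid-run would make it volatile.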

Level:
Intermediate
Prerequisite:

Students should be familiar with a high-level programming language such as C++, Java, or Python.

Organization of the Course:

Recorded webinar: content is delivered via video. You do not have to be online at specific times; there is no homework and no instructor interaction.

Time Requirement:  

Approximately 3 hours

Options for Credit and Recognition:

None

Course Text:
No required text. We have this suggested resource: when you launch Anaconda for Python, the Launcher program lists many sample IPython notebooks on the right side. These are great tutorials for data analysis and visualization in Python.
Software:

You should have installed the Python 2.7 version of Anaconda by Continuum Analytics. Useful links are below:

  1. Installing Python: https://wiki.python.org/moin/BeginnersGuide/Download
  2. Install virtualenv and virtualenvwrapper: http://docs.python-guide.org/en/latest/dev/virtualenvs/
  3. Get a Github account: https://github.com/
  4. Python Hello World: http://www.learnpython.org/en/Hello,_World!
  5. Using the terminal: http://cli.learncodethehardway.org/book/
  6. Python programming: http://learnpythonthehardway.org/
  7. Anaconda: https://www.continuum.io/downloads
Instructor(s):

Mr. Kris Wright

Dates:

To be scheduled.

Course Fee: $35.00

Do you meet course prerequisites? What about book & software? (Click here to learn more)

We have flexible policies to transfer to another course, or withdraw if necessary (modest fee applies)

Group rates: Click here to get information on group rates. 

First time student or academic? Click here for an introductory offer on select courses. Academic affiliation?  You may be eligible for a discount at checkout.

AVAILABLE ON DEMAND:  Unlike standard Statistics.com courses, this recorded webinar is available on-demand and is not tied to a date.

Register Now

Add a $50 service fee if you require a prior invoice, need to submit a purchase order or voucher, pay by wire transfer or EFT, or need a prior payment refunded and reprocessed. Please use this printed registration form for these and other special orders.

Courses may fill up at any time and registrations are processed in the order in which they are received. Your registration will be confirmed for the first available course date, unless you specify otherwise.

The Institute for Statistics Education is certified to operate by the State Council of Higher Education in Virginia (SCHEV).
