Week #47 – Feature engineering
In predictive modeling, a key step is to turn available data (which may come from varied sources and be messy) into an orderly matrix of rows (records to be predicted) and columns (predictor variables or features).
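As a very rough illustration of that step (everything here, the field names, the comma-stripping, and the crude missing-value fill, is a made-up sketch rather than a prescribed recipe), a few lines of Python can turn loose records into a feature matrix and a target vector:

```python
# Turning messy records into an orderly feature matrix (illustrative data only)
raw = [
    {"income": "52,000", "region": "north", "defaulted": "no"},
    {"income": "31,500", "region": "south", "defaulted": "yes"},
    {"income": None,     "region": "north", "defaulted": "no"},
]

def to_row(rec, income_default=0.0):
    # Clean the numeric field and fall back to a default when it is missing
    income = float(rec["income"].replace(",", "")) if rec["income"] else income_default
    region_north = 1 if rec["region"] == "north" else 0   # indicator (one-hot style) feature
    return [income, region_north]

X = [to_row(r) for r in raw]                              # rows = records to be predicted
y = [1 if r["defaulted"] == "yes" else 0 for r in raw]    # column = target variable
print(X, y)
```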
A full Bayesian classifier is a supervised learning technique that assigns a class to a record by finding other records with identical attribute values and taking the most prevalent class among them.
In computer science, MapReduce is a procedure for splitting up data and computation so they can be processed in parallel on multiple computers.
There was an interesting article a couple of weeks ago in the New York Times magazine section on the role that Big Data can play in treating patients -- discovering things that clinical trials are too slow, too expensive, and too blunt to find. The…
Likert scales are categorical ordinal scales used in social sciences to measure attitude. A typical example is a set of response options ranging from "strongly agree" to "strongly disagree."
A node is an entity in a network. In a social network, it would be a person. In a digital network, it would be a computer or device.
Latent variable models postulate that the relationships among the statistical properties of observable variables arise from one or more underlying, unobserved (latent) variables.
K-nearest-neighbor (K-NN) is a machine learning predictive algorithm that relies on calculation of distances between pairs of records.
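A minimal sketch of the idea in Python (the records, labels, and the choice of Euclidean distance are illustrative assumptions, not part of the original entry):

```python
import math
from collections import Counter

def knn_classify(new_record, records, labels, k=3):
    """Classify new_record by majority vote among its k nearest records."""
    # Euclidean distance from the new record to every stored record
    distances = [(math.dist(new_record, rec), lab) for rec, lab in zip(records, labels)]
    distances.sort(key=lambda pair: pair[0])            # nearest first
    nearest_labels = [lab for _, lab in distances[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

# Illustrative data: two predictors per record, two class labels
records = [(1.0, 2.0), (1.5, 1.8), (5.0, 8.0), (6.0, 9.0)]
labels = ["A", "A", "B", "B"]
print(knn_classify((1.2, 1.9), records, labels, k=3))   # -> "A"
```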
The kappa statistic measures the extent to which different raters or examiners agree when looking at the same data and assigning categories, over and above the agreement expected by chance.
Censoring in time-to-event (survival) data occurs when some event causes subjects to cease producing data, for reasons beyond the control of the investigator or external to the issue being studied.
Survival analysis is a set of methods used to model and analyze survival data, also called time-to-event data.
The probability distribution for X consists of the possible values of X and their associated probabilities. With two separate discrete random variables, X and Y, the joint probability distribution is the function f(x,y) giving the probability that X takes the value x and Y takes the value y.
With a sample of size N, the jackknife involves calculating N values of the estimator, with each value calculated on the basis of the entire sample less one observation.
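One way to sketch this in Python, using the sample mean as the estimator and made-up data (for the mean, the jackknife standard error reproduces the familiar s/sqrt(n)):

```python
import math
import statistics

def jackknife_se(data, estimator=statistics.mean):
    """Leave-one-out jackknife standard error of an estimator."""
    n = len(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]   # N leave-one-out estimates
    loo_mean = sum(loo) / n
    return math.sqrt((n - 1) / n * sum((v - loo_mean) ** 2 for v in loo))

data = [4.2, 5.1, 6.3, 5.8, 4.9]
print(jackknife_se(data))
```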
In the interim monitoring of clinical trials, multiple looks are taken at the accruing patient results - say, response to a medication.
The classic illustration of the power of brand is perfume - expensive perfumes may cost just a few dollars to produce but can be sold for more than $500 due to the cachet afforded by the brand. David Malan's Computer Science course at Harvard, CSCI…
The big news from the SAS world this summer was the release, on May 28, of the SAS University Edition, which brings the effective price for a single user edition of SAS down from around $10,000 to $0. It does most of the things that…
Nobody expects Twitter feed sentiment analysis to give you unbiased results the way a well-designed survey will. A Pew Research study found that Twitter political opinion was, at times, much more liberal than that revealed by public opinion polls, while it was more conservative at…
Boston, August 3, 2014: Bill Ruh, GE Software Center, says that the Internet of Things, 30 billion machines talking to one another, will dwarf the impact of the consumer internet. Speaking at the Joint Statistical Meetings today, Ruh predicted that the marriage of the IoT…
A NoSQL database is distinguished mainly by what it is not -
A similarity matrix shows how similar records are to each other.
Predictive modeling is the process of using a statistical or machine learning model to predict the value of a target variable (e.g. default or no-default) on the basis of a series of predictor variables (e.g. income, house value, outstanding debt, etc.).
A hold-out sample is a random sample from a data set that is withheld and not used in the model fitting process. After the model...
Heteroscedasticity generally means unequal variation of data, e.g. unequal variance. More specifically,
Goodness-of-fit measures the difference between an observed frequency distribution and a theoretical probability distribution which
The geometric mean of n values is determined by multiplying all n values together, then taking the nth root of the product. It is useful in taking averages of ratios.
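For instance, in Python (the growth ratios are illustrative):

```python
import math

def geometric_mean(values):
    """Multiply the n values together, then take the nth root of the product."""
    return math.prod(values) ** (1 / len(values))

# Example: average growth ratio over three periods
ratios = [1.10, 0.95, 1.20]
print(geometric_mean(ratios))   # about 1.078
```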
Hierarchical linear modeling is an approach to analysis of hierarchical (nested) data - i.e. data represented by categories, sub-categories, ..., individual units (e.g. school -> classroom -> student).
In medical statistics, the hazard function describes how risk changes over time: it gives the instantaneous rate at which the event of interest occurs at time t, given survival up to that time.
The Fleming procedure (or O'Brien-Fleming multiple testing procedure) is a simple multiple testing procedure for comparing two treatments when the response to treatment is dichotomous. This procedure...
In a directed network, connections between nodes are directional. For example...
An adjacency matrix describes the relationships in a network. Nodes are listed in the top...
The exponential distribution is a model for the length of intervals between two consecutive random events in time or
Error is the deviation of an estimated quantity from its true value, or, more precisely,
Step-wise regression is one of several computer-based iterative variable-selection procedures.
Regularization refers to a wide variety of techniques that impose structure on statistical models, constraining or penalizing model complexity to avoid overfitting when data are large, complex, or sparse.
SQL stands for Structured Query Language, a high-level language for querying relational databases and extracting information.
A Markov chain is a probability system that governs transition among states or through successive events.
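A toy two-state chain makes the idea concrete; the states and transition probabilities below are purely illustrative:

```python
import random

# A minimal two-state chain; the next state depends only on the current state
transition = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def simulate(start, n_steps, seed=0):
    """Simulate a path through the chain's states."""
    random.seed(seed)
    path, state = [start], start
    for _ in range(n_steps):
        probs = transition[state]
        state = random.choices(list(probs), weights=list(probs.values()))[0]
        path.append(state)
    return path

print(simulate("sunny", 10))
```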
MapReduce is a programming framework to distribute the computing load of very large data and problems to multiple computers.
As data processing requirements grew beyond the capacities of even large computers, distributed computing systems were developed to spread the load to multiple computers.
The curse of dimensionality is the affliction caused by adding variables to multivariate data models: as dimensions multiply, the data become sparse relative to the space they occupy and distances between records become less informative.
A data product is a product or service whose value is derived from using algorithmic methods on data, and which in turn produces data to be used in the same product, or tangential data products.
Ever wonder why, in World War II, ships in convoys were safer than ships traveling on their own? Most people assume it was due to the protection afforded by military escort vessels, of which there was a limited supply (insufficient to protect ships traveling on…
Statistical models normally specify how one set of variables, called dependent variables, functionally depends on another set of variables, called independent variables.
Statistical distance is a measure calculated between two records that are typically part of a larger dataset, where rows are records and columns are variables. To calculate...
In predictive modeling, the goal is to make predictions about outcomes on a case-by-case basis: an insurance claim will be fraudulent or not, a tax return will be correct or in error, a subscriber...
In the machine learning community, a decision tree is a branching set of rules used to classify a record, or predict a continuous value for a record. For example
In predictive modeling, feature selection, also called variable selection, is the process (usually automated) of sorting through variables to retain variables that are likely...
In predictive modeling, bagging is an ensemble method that uses bootstrap replicates of the original training data to fit predictive models.
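A hedged sketch of the mechanics in plain Python, using a simple least-squares line as the base model and made-up data (real applications typically bag trees or other flexible learners):

```python
import random

def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x, used here as a simple base model."""
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

def bagged_predict(xs, ys, x_new, n_models=200, seed=0):
    """Fit the base model to bootstrap replicates and average their predictions."""
    random.seed(seed)
    n, preds = len(xs), []
    for _ in range(n_models):
        idx = [random.randrange(n) for _ in range(n)]      # bootstrap replicate (with replacement)
        bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
        if len(set(bx)) < 2:                               # skip degenerate replicates
            continue
        a, b = fit_line(bx, by)
        preds.append(a + b * x_new)
    return sum(preds) / len(preds)                         # ensemble average

xs = [1, 2, 3, 4, 5, 6]
ys = [1.2, 1.9, 3.2, 3.9, 5.1, 6.2]
print(bagged_predict(xs, ys, x_new=3.5))
```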
In predictive modeling, boosting is an iterative ensemble method that starts out by applying a classification algorithm and generating classifications.
In predictive modeling, ensemble methods refer to the practice of taking multiple models and averaging their predictions.
When talking to several people, do you address them as "you guys"? "Y'all"? Just "you"? And is the carbonated soft drink "soda" or "pop?" Maps based on survey responses to questions like this were published in the Harvard Dialect Survey in 2003. Josh Katz took…
What's the probability that the NSA examined the metadata for your phone number in 2013? According to John Inglis, Deputy Director at the NSA, it's about 0.00001, or 1 in 100,000. A surprisingly small number, given what we've all been reading in the media about…
The expected value of a random variable is, in a simple sense, its long-run average: the probability-weighted mean of its possible values, which for equally likely outcomes reduces to the arithmetic mean.
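In standard notation, for a discrete random variable:

```latex
E[X] = \sum_i x_i \, P(X = x_i),
\qquad \text{e.g. one roll of a fair die: } E[X] = \tfrac{1}{6}(1 + 2 + \dots + 6) = 3.5
```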
Exact tests are hypothesis tests that are guaranteed to produce Type-I error at or below the nominal alpha level of the test when conducted on samples drawn from a null model.
In statistical models, error or residual is the deviation of the estimated quantity from its true value: the greater the deviation, the greater the error.
Endogenous variables in causal modeling are the variables with causal links (arrows) leading to them from other variables in the model.
In a study or experiment with two groups (usually control and treatment), the investigator typically has in mind the magnitude of the difference between the two groups that he or she wants to be able to detect in a hypothesis test.
In the interim monitoring of clinical trials, multiple looks are taken at the accruing patient results - say, response to a medication.
In a test of significance (also called a hypothesis test), Type I error is the error of rejecting the null hypothesis when it is true -- of saying an effect or event is statistically significant when it is not.
The devastation wrought by Super-Typhoon Haiyan in the Philippines is the biggest test yet for the nascent technology of "artificial intelligence disaster response," a phrase used by Patrick Meier, a pioneer in the field. When disaster strikes, a flood of social media posts and tweets…
There are Red States and Blue States. The three blue states of the Pacific coast constitute the Left Coast. For Colin Woodard, Yankeedom comprises both New England and the Great Lakes. If you're into accessories, there's the Bible Belt, the Rust Belt, and the Stroke…
A time series x(t), t = 1, 2, ..., is considered to be stationary if its statistical properties do not depend on time t.
Data partitioning in data mining is the division of the whole data available into two or three non-overlapping sets: the training set (used to fit the model), the validation set (used to compare models), and the test set (used to assess performance on new data).
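A minimal Python sketch of a random 60/20/20 split (the proportions and the function name are illustrative choices, not a fixed rule):

```python
import random

def partition(records, seed=0):
    """Randomly split records into 60% training, 20% validation, 20% test sets."""
    random.seed(seed)
    shuffled = records[:]
    random.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_valid = int(0.6 * n), int(0.2 * n)
    return (shuffled[:n_train],                       # training: fit the model
            shuffled[n_train:n_train + n_valid],      # validation: compare models
            shuffled[n_train + n_valid:])             # test: assess performance on new data

train, valid, test = partition(list(range(100)))
print(len(train), len(valid), len(test))   # 60 20 20
```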
Data mining is concerned with finding latent patterns in large databases.
An observation's z-score tells you the number of standard deviations it lies away from the population mean (and in which direction).
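In symbols, with population mean μ and standard deviation σ:

```latex
z = \frac{x - \mu}{\sigma},
\qquad \text{e.g. } x = 130,\; \mu = 100,\; \sigma = 15 \;\Rightarrow\; z = 2
```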
In multivariate analysis, cluster analysis refers to methods used to divide up objects into similar groups, or, more precisely, groups whose members are all close to one another on various dimensions being measured.
In psychology, a construct is a phenomenon or a variable in a model that is not directly observable or measurable - intelligence is a classic example.
Collaborative filtering algorithms are used to predict whether a given individual might like, or purchase, an item.
The "righteous vengeance gun attack" is just one of 10 types of terrorism identified by Chenoweth and Lowham via statistical clustering techniques. Another cluster is "bombings of a public population where a liberation group takes responsibility." You can read about the 10 clusters, and the…
Crowdsourcing, using the power of the crowd to solve problems, has been used for many functions and tasks, including predictive modeling (like the 2009 Netflix Contest). Typically, problems are broadcast to an unknown group of statistical modelers on the Internet, and solutions are sought. Every…
Longitudinal data records multiple observations over time for a set of individuals or units. A typical...
Cross-sectional data refer to observations of many different individuals (subjects, objects) at a given time, each observation belonging to a different individual. A simple...
Tokenization is an initial step in natural language processing. It involves breaking down a text into a series of basic units, typically words. For example...
A natural language is what most people outside the field of computer science think of as just a language (Spanish, English, etc.). The term...
White Hat Bias is bias leading to distortion in, or selective presentation of, data that is considered by investigators or reviewers to be acceptable because it is in the service of righteous goals.
An edge is a link between two people or entities in a network that can be
Stratified sampling is a method of random sampling in which the population is first divided into homogeneous subgroups (strata), and a random sample is then drawn from each stratum.
When probabilities are quoted without specifying the sample space, ambiguity can result if the sample space is not self-evident.
A discrete distribution is one in which the data can only take on certain values, for example integers. A continuous distribution is one in which data can take on any value within a specified range (which may be infinite).
The central limit theorem states that the sampling distribution of the mean approaches Normality as the sample size increases, regardless of the probability distribution of the population from which the sample is drawn.
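A quick simulation sketch illustrates this: the exponential population below is strongly skewed, yet the distribution of sample means comes out roughly Normal (the sample size and number of replications are arbitrary choices):

```python
import random
import statistics

# Draw many samples from a skewed (exponential) population and look at their means
random.seed(0)
sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(50))   # one sample of size 50
    for _ in range(2000)
]
# The population is far from Normal, but the sample means are roughly bell-shaped,
# centred near 1 (the population mean) with spread near 1/sqrt(50).
print(statistics.fmean(sample_means), statistics.stdev(sample_means))
```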
Classification and regression trees (CART) are a set of techniques for classification and prediction.
CHAID stands for Chi-squared Automatic Interaction Detector. It is a method for building classification trees and regression trees from a training sample comprising already-classified objects.
In a census survey, all units from the population of interest are analyzed. A related concept is the sample survey, in which only a subset of the population is taken.
I did not start off in the field of statistics; my first real job was as a diplomat. And from my undergraduate days I recall a professor who taught a cultural history of Russia. He was one of the world's top experts. Possessed of a…
Discriminant analysis is a method of distinguishing between classes of objects. The objects are typically represented as rows in a matrix.
Also called the training sample, training set, calibration sample. The context is predictive modeling (also called supervised data mining) - where you have data with multiple predictor variables and a single known outcome or target variable.
Mutual attraction is a dominant force in the universe. Gravity binds the moon to the earth, the earth to the sun, the sun to the galaxy, and one galaxy to another. And yet the universe is expanding; the result is a larger universe comprised of…
A general statistical term meaning a systematic (not random) deviation of an estimate from the true value.
One of several computer-based iterative procedures for selecting variables to use in a model. The process begins...
Outcomes of an experiment or repeated events are statistically significant if they differ from what chance variation alone could plausibly produce.
In multiple comparison procedures, family-wise type I error is the probability that, even if all samples come from the same population, you will wrongly conclude
A cohort study is a longitudinal study that identifies a group of subjects sharing some attributes (a "cohort") then
The coefficient of variation is the standard deviation of a data set, divided by the mean of the same data set.
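In symbols, with sample standard deviation s and sample mean x̄:

```latex
CV = \frac{s}{\bar{x}},
\qquad \text{e.g. } s = 5,\; \bar{x} = 20 \;\Rightarrow\; CV = 0.25 \text{ (25\%)}
```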
In regression analysis, the coefficient of determination is a measure of goodness-of-fit (i.e. how well or tightly the data fit the estimated model). The coefficient is
An estimator is a measure or metric intended to be calculated from a sample drawn from a larger population...
In regression analysis, collinearity of two variables means that strong correlation exists between them, making it difficult or impossible to estimate their individual regression coefficients reliably.
A cohort study is a longitudinal study that identifies a population or large group (a "cohort") then draws a sample from the population at various points in time and records data for the sample.
The centroid is a measure of center in multi-dimensional space: the point whose coordinates are the means of the individual variables.
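For example, in Python with made-up 2-D points, the centroid is just the per-coordinate mean:

```python
# Centroid of three points in 2-D: average each coordinate separately
points = [(1.0, 2.0), (3.0, 6.0), (5.0, 4.0)]
centroid = tuple(sum(coord) / len(points) for coord in zip(*points))
print(centroid)   # (3.0, 4.0)
```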
Bootstrapping is sampling with replacement from observed data to estimate the variability in a statistic of interest. See also permutation tests, a related form of resampling. A common application
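As a rough sketch in Python (illustrative data; the median is used here as the statistic of interest):

```python
import random
import statistics

def bootstrap_se(data, estimator=statistics.median, n_boot=2000, seed=0):
    """Estimate the variability (standard error) of an estimator by resampling
    the observed data with replacement."""
    random.seed(seed)
    n = len(data)
    boot_stats = [
        estimator([random.choice(data) for _ in range(n)])   # one bootstrap sample
        for _ in range(n_boot)
    ]
    return statistics.stdev(boot_stats)

data = [12, 15, 9, 22, 17, 14, 20, 11, 16, 19]
print(bootstrap_se(data))   # spread of the bootstrap medians
```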
A Binomial distribution is used to describe an experiment, event, or process for which the probability of success is the same for each trial and each trial has only two possible outcomes.
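Its probability mass function, with a quick worked example:

```latex
P(X = k) = \binom{n}{k} p^{k} (1 - p)^{\,n-k},
\qquad \text{e.g. } n = 10,\; p = 0.5:\quad
P(X = 3) = \binom{10}{3}(0.5)^{10} = \frac{120}{1024} \approx 0.117
```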
A combination of treatment comparisons (e.g. send a sales solicitation, or send nothing) and predictive modeling to determine which cases or subjects respond (e.g. purchase or not) to which treatments.