A "discovery" is a hypothesis test that yields a statistically significant result. The false discovery rate is the proportion of discoveries that are, in reality, not significant (a Type-I error). The true false discovery rate is not known, since the true state of nature is not known (if it were, there would be no need for statistical inference).
The signal is the component of the observed data that carries useful information.
Non-parametric regression methods are aimed at describing a relationship between the dependent and independent variables without assuming a particular functional form (such as a straight line) for that relationship.
A nominal scale is really a list of categories into which objects can be classified.
The noise is the component of the observed data (e.g. of a time series) that is random and carries no useful information.
In a network analysis context, "edge" refers to a link or connection between two entities in a network.
The 2006 Netflix Contest has come to convey the idea of crowdsourced predictive modeling, in which a dataset and a prediction challenge are made publicly available. Individuals and teams then compete to develop the best performing model.
The linear model is ubiquitous in classical statistics, yet real-life data rarely follow a purely linear pattern.
Association rules analysis, also called "market basket analysis," is a data mining method applied to transaction data to identify items that tend to occur together in transactions.
This week's word is actually a letter. R is a statistical computing and programming language and environment, an open-source implementation of the S language developed at Bell Labs; the commercial S-PLUS program is another offshoot of S.
With the advent of Big Data and data mining, statistical methods like regression and CART have been repurposed as tools in predictive modeling.
The Netflix prize was a famous early application of crowdsourcing to predictive modeling.
An A-B test is a classic statistical design in which individuals or subjects are randomly split into two groups, each group is exposed to a different intervention or treatment (one often serving as a control), and outcomes are then compared.
In time series forecasting, a moving average is a smoothing method in which the forecast for time t is the average value for the w periods ending with time t-1.
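A minimal sketch of this forecast (the window width w and the series values are illustrative, not from the original entry):

```python
def moving_average_forecast(series, w):
    """Forecast for the next period: the mean of the last w observations."""
    if len(series) < w:
        raise ValueError("need at least w observations")
    return sum(series[-w:]) / w

# The forecast for time t uses the w values ending at time t-1.
sales = [12, 15, 14, 16, 18, 17]
print(moving_average_forecast(sales, w=3))  # (16 + 18 + 17) / 3 = 17.0
```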
In regression models, an interaction term captures the joint effect of two variables that is not captured in the modeling of the two terms individually.
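In the two-predictor case, the interaction is typically modeled by including the product of the two variables as a term of its own (generic textbook notation, not from the original entry):

```latex
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2 + \varepsilon
```

Here \beta_3 captures how the effect of x_1 on y changes with the level of x_2 (and vice versa).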
A naive forecast or prediction is one that is extremely simple and does not rely on a statistical model (or can be expressed as a very basic form of a model) - for example, using the most recent value as the forecast for the next period.
RMSE is root mean squared error. In predicting a numerical outcome with a statistical model, predicted values rarely match actual outcomes exactly; RMSE measures the typical size of the prediction error as the square root of the mean of the squared differences between predicted and actual values.
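A minimal sketch of the calculation (the values are illustrative):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error: sqrt of the mean squared prediction error."""
    sq_errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

print(rmse([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))  # ~0.913
```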
A label is a category into which a record falls, usually in the context of predictive modeling. Label, class and category are different names for discrete values of a target (outcome) variable.
A strip transect is a long, narrow strip within a geographically-defined study area, typically positioned at random, along which observations or counts are made.
Spark is a second generation computing environment that typically sits on top of a Hadoop system, supporting workflows that leverage a distributed file system.
Bandits refers to a class of algorithms in which users or subjects make repeated choices among, or decisions in reaction to, multiple alternatives.
In discrete response models, overdispersion occurs when there is more variation in the data than is allowed by the assumptions that the model makes (for example, a Poisson model requires the variance to equal the mean).
In a classification model, the confusion matrix shows the counts of correct and erroneous classifications. In a binary classification problem, the matrix consists of 4 cells.
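A minimal sketch for the binary case, assuming outcomes coded 0/1 (the data is illustrative):

```python
from collections import Counter

def confusion_matrix(actual, predicted):
    """Counts of (actual, predicted) pairs for a binary classifier."""
    counts = Counter(zip(actual, predicted))
    return {"TP": counts[(1, 1)], "FN": counts[(1, 0)],
            "FP": counts[(0, 1)], "TN": counts[(0, 0)]}

print(confusion_matrix([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
# {'TP': 2, 'FN': 1, 'FP': 1, 'TN': 1}
```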
In a classic statistical experiment, treatment(s) and placebo are applied to randomly assigned subjects, and, at the end of the experiment, outcomes are compared.
Classification and regression trees, applied to data with known values for an outcome variable, derive models with rules like "If taxable income < $80,000, if no Schedule C income, if standard deduction taken, then no-audit."
The predictors in a predictive model are referred to by different terms in different disciplines. Traditional statisticians think in terms of variables.
In logistic regression, we seek to estimate the relationship between predictor variables Xi and a binary response variable. Specifically, we want to estimate the probability p that the response variable takes the value 1 rather than 0.
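In standard notation, the model links p to the predictors through the log-odds (logit):

```latex
\log\frac{p}{1-p} = \beta_0 + \beta_1 X_1 + \cdots + \beta_k X_k,
\qquad
p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 X_1 + \cdots + \beta_k X_k)}}
```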
Bayesian statistics typically incorporates new information (e.g. from a diagnostic test, or a recently drawn sample) to answer a question of the form "What is the probability that..."
Consider two (or more) samples subjected to different treatments. A permutation test assesses whether the observed difference among the samples could plausibly have arisen by chance: the treatment labels are repeatedly shuffled at random, the test statistic is recomputed for each shuffle, and the observed statistic is compared to this permutation distribution.
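A minimal two-sample sketch, assuming the test statistic is the difference in group means (the sample values are illustrative):

```python
import random

def permutation_test(a, b, n_iter=10_000, seed=1):
    """Two-sided p-value for a difference in means under random relabeling."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        new_a, new_b = pooled[:len(a)], pooled[len(a):]
        stat = abs(sum(new_a) / len(new_a) - sum(new_b) / len(new_b))
        if stat >= observed:
            hits += 1
    return hits / n_iter

print(permutation_test([12, 14, 15, 16], [9, 10, 11, 13]))
```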
One avid reader took issue with a recent definition of "quasi-experiment," which I had defined previously in this series; a fuller treatment follows.
In social science research, particularly in the qualitative literature on program evaluation, the term "quasi-experiment" refers to studies that do not involve the application of treatments via random assignment of subjects.
In survey research, curb-stoning refers to the deliberate fabrication of survey interview data by the interviewer.
Bag-of-words is a simplified natural language processing concept in which a text is represented simply by the words it contains (and their counts), disregarding grammar and word order.
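A minimal sketch using word counts (the sample text is illustrative):

```python
from collections import Counter

def bag_of_words(text):
    """Represent a text by its word counts, ignoring grammar and order."""
    return Counter(text.lower().split())

print(bag_of_words("the dog chased the cat"))
# Counter({'the': 2, 'dog': 1, 'chased': 1, 'cat': 1})
```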
In language processing, stemming is the process of taking multiple forms of the same word and reducing them to the same basic core form (e.g. "runs" and "running" both reduce to the stem "run").
Structured data is data that is in a form that can be used to develop statistical or machine learning models (typically a matrix where rows are records and columns are variables or features).
In predictive modeling, a key step is to turn available data (which may come from varied sources and be messy) into an orderly matrix of rows (records to be predicted) and columns (predictor variables or features).
A full Bayesian classifier is a supervised learning technique that assigns a class to a record by finding other records whose attribute values exactly match its own, and assigning the most prevalent class among them.
In computer science, MapReduce is a procedure that prepares data for parallel processing on multiple computers: a "map" step distributes pieces of the data (and the operation to apply to them) across machines, and a "reduce" step gathers and aggregates the results.
Likert scales are categorical ordinal scales used in social sciences to measure attitude. A typical example is a set of response options ranging from "strongly agree" to "strongly disagree."
A node is an entity in a network. In a social network, it would be a person. In a digital network, it would be a computer or device.
Latent variable models postulate unobserved (latent) variables that account for the statistical relationships among observable variables.
K-nearest-neighbor (K-NN) is a machine learning predictive algorithm that relies on calculation of distances between pairs of records.
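A minimal sketch of K-NN classification by majority vote among the k closest records, assuming Euclidean distance (the data and k are illustrative):

```python
import math
from collections import Counter

def knn_classify(train, new_record, k=3):
    """train: list of (features, label). Predict by vote of the k nearest."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    nearest = sorted(train, key=lambda rec: dist(rec[0], new_record))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((1.0, 2.0), "A"), ((1.5, 1.8), "A"),
         ((5.0, 8.0), "B"), ((6.0, 9.0), "B")]
print(knn_classify(train, (1.2, 1.9)))  # "A"
```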
The kappa statistic measures the extent to which different raters or examiners agree when looking at the same data and assigning categories, over and above the agreement that would be expected by chance.
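The usual formula, in standard notation: with p_o the observed proportion of agreement and p_e the agreement expected by chance,

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

Kappa = 1 indicates perfect agreement; kappa = 0 indicates agreement no better than chance.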
Censoring in time-to-event data occurs when some event causes subjects to cease producing data for reasons beyond the control of the investigator, or for reasons external to the issue being studied.
Survival analysis is a set of methods used to model and analyze survival data, also called time-to-event data.
The probability distribution for X is the possible values of X and their associated probabilities. With two separate discrete random variables, X and Y, the joint probability distribution is the function f(x,y) that gives the probability of each pair of values (x, y) occurring together.
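In standard notation:

```latex
f(x, y) = P(X = x,\ Y = y), \qquad \sum_x \sum_y f(x, y) = 1
```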
With a sample of size N, the jackknife involves calculating N values of the estimator, with each value calculated on the basis of the entire sample less one observation.
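A minimal sketch for an arbitrary estimator (the mean is used here purely as an illustrative example):

```python
def jackknife_values(sample, estimator):
    """One leave-one-out estimate per observation: N values for size N."""
    return [estimator(sample[:i] + sample[i + 1:]) for i in range(len(sample))]

mean = lambda xs: sum(xs) / len(xs)
print(jackknife_values([2.0, 4.0, 6.0, 8.0], mean))
# [6.0, 5.333..., 4.666..., 4.0]
```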
A NoSQL database is distinguished mainly by what it is not - it is not a traditional relational database in which data are stored in fixed tables of rows and columns and queried with SQL. Instead, data may be held in more flexible structures such as key-value pairs or documents.
A similarity matrix shows how similar records are to each other: both rows and columns represent records, and each cell holds a similarity measure for the corresponding pair.
Predictive modeling is the process of using a statistical or machine learning model to predict the value of a target variable (e.g. default or no-default) on the basis of a series of predictor variables (e.g. income, house value, outstanding debt, etc.).
A hold-out sample is a random sample from a data set that is withheld and not used in the model fitting process. After the model is fit, it is applied to the hold-out sample to gauge how well it performs on data it has not seen.
Heteroscedasticity generally means unequal variation of data, e.g. unequal variance. More specifically, in a regression context it refers to error variance that changes with the level of a predictor or of the fitted values, rather than remaining constant.
Goodness-of-fit measures the difference between an observed frequency distribution and a theoretical probability distribution which the data are hypothesized to follow.
The geometric mean of n values is determined by multiplying all n values together, then taking the nth root of the product. It is useful in taking averages of ratios.
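A minimal sketch (the growth ratios are illustrative):

```python
import math

def geometric_mean(values):
    """nth root of the product of n values."""
    return math.prod(values) ** (1 / len(values))

# Average growth ratio over two years: a 10% gain, then a 10% loss.
print(geometric_mean([1.10, 0.90]))  # ~0.99499
```

This illustrates why it suits ratios: the arithmetic mean of the two ratios (1.0) would wrongly suggest no net change, while the geometric mean correctly reflects a small net loss.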
Hierarchical linear modeling is an approach to analysis of hierarchical (nested) data - i.e. data represented by categories, sub-categories, ..., individual units (e.g. school -> classroom -> student).
In medical statistics, the hazard function is a relationship between a proportion and time: it gives the instantaneous rate at which the event of interest (e.g. death) occurs at time t, among subjects who have survived to time t.
In a directed network, connections between nodes are directional. For example, in a Twitter network, user A may follow user B without B following A back.
An adjacency matrix describes the relationships in a network. Nodes are listed in the top row (the columns) and down the side (the rows); the cell for row i and column j is 1 (or a weight) if there is an edge from node i to node j, and 0 otherwise.
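A minimal sketch for a three-node directed network, assuming an unweighted graph (the edges are illustrative):

```python
# Nodes: 0, 1, 2. adj[i][j] = 1 if there is an edge from node i to node j.
adj = [
    [0, 1, 1],  # node 0 links to nodes 1 and 2
    [0, 0, 1],  # node 1 links to node 2
    [0, 0, 0],  # node 2 has no outgoing links
]
# Out-degree of each node: the count of 1s in its row.
print([sum(row) for row in adj])  # [2, 1, 0]
```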
The exponential distribution is a model for the length of intervals between two consecutive random events in time or space, when events occur independently at a constant average rate.
Error is the deviation of an estimated quantity from its true value, or, more precisely, the difference between the estimate and the (typically unknown) true value of the quantity being estimated.
Step-wise regression is one of several computer-based iterative variable-selection procedures, in which variables are added to (or dropped from) the regression model one at a time according to their contribution to the model's fit.
Regularization refers to a wide variety of techniques used to bring structure to statistical models in the face of data size, complexity and sparseness, typically by penalizing model complexity (for example, by shrinking regression coefficients toward zero).
SQL stands for structured query language, a high-level language for querying relational databases and extracting information.
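A minimal sketch using Python's built-in sqlite3 module (the table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 40.0), ("bob", 25.0), ("alice", 35.0)])

# The SQL query: total order amount per customer.
for row in conn.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer"):
    print(row)  # ('alice', 75.0) and ('bob', 25.0)
```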
A Markov chain is a probability system that governs transitions among states or through successive events, in which the probability of the next state depends only on the current state, not on earlier history.
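A minimal sketch of a two-state chain, with illustrative transition probabilities:

```python
import random

# Transition probabilities: P(next state | current state).
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def simulate(start, steps, seed=0):
    """Walk the chain: each move depends only on the current state."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choices(list(transitions[state]),
                            weights=transitions[state].values())[0]
        path.append(state)
    return path

print(simulate("sunny", 5))
```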
MapReduce is a programming framework to distribute the computing load of very large data and problems to multiple computers.
As data processing requirements grew beyond the capacities of even large computers, distributed computing systems were developed to spread the load to multiple computers.
The curse of dimensionality is the affliction caused by adding variables to multivariate data models: as variables (dimensions) are added, the data become increasingly sparse relative to the space they occupy, and ever more records are needed to support reliable estimates.
A data product is a product or service whose value is derived from using algorithmic methods on data, and which in turn produces data to be used in the same product, or tangential data products.
Statistical models normally specify how one set of variables, called dependent variables, functionally depends on another set of variables, called independent variables.
Statistical distance is a measure calculated between two records that are typically part of a larger dataset, where rows are records and columns are variables. To calculate it, the difference between the two records on each variable is scaled by that variable's variability (and, in measures such as Mahalanobis distance, correlations among variables are also taken into account).
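A minimal sketch that scales each variable's difference by its standard deviation across the dataset - a simplified cousin of the Mahalanobis distance, which additionally adjusts for correlations (the data is illustrative):

```python
import math

def statistical_distance(rec_a, rec_b, data):
    """Euclidean distance after scaling each variable by its std deviation."""
    n = len(data)
    total = 0.0
    for j in range(len(rec_a)):
        col = [row[j] for row in data]
        mean = sum(col) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / n)
        total += ((rec_a[j] - rec_b[j]) / sd) ** 2
    return math.sqrt(total)

data = [(70, 150), (65, 120), (72, 180), (68, 140)]  # e.g. height, weight
print(statistical_distance(data[0], data[1], data))
```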
In predictive modeling, the goal is to make predictions about outcomes on a case-by-case basis: an insurance claim will be fraudulent or not, a tax return will be correct or in error, a subscriber will renew or cancel, and so on.
In the machine learning community, a decision tree is a branching set of rules used to classify a record, or predict a continuous value for a record. For example: "If income > $50,000 and age < 30, then classify as likely buyer."
In predictive modeling, feature selection, also called variable selection, is the process (usually automated) of sorting through variables to retain those that are likely to be useful predictors and discard the rest.
In predictive modeling, bagging is an ensemble method that uses bootstrap replicates of the original training data to fit predictive models, whose predictions are then averaged (or, for classification, decided by majority vote).
In predictive modeling, boosting is an iterative ensemble method that starts out by applying a classification algorithm and generating classifications; records that are misclassified receive extra weight in the next round, and the final prediction combines the results of all rounds.
In predictive modeling, ensemble methods refer to the practice of taking multiple models and averaging their predictions (or, in classification, letting them vote).
The expected value of a random variable is its probability-weighted average; in a simple sense, it is the arithmetic mean you would obtain over many repeated observations of the variable.
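For a discrete random variable, in standard notation:

```latex
E[X] = \sum_x x \, P(X = x)
```

For a fair six-sided die, for example, E[X] = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5.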
Exact tests are hypothesis tests that are guaranteed to produce Type-I error at or below the nominal alpha level of the test when conducted on samples drawn from a null model.
In statistical models, the error is the deviation of an observed or estimated quantity from its true value; the residual is its observable counterpart, the deviation of an observed value from the value the fitted model estimates. In both cases, the greater the deviation, the greater the error.
Endogenous variables in causal modeling are the variables with causal links (arrows) leading to them from other variables in the model.
In a study or experiment with two groups (usually control and treatment), the investigator typically has in mind the magnitude of the difference between the two groups that he or she wants to be able to detect in a hypothesis test.
In a test of significance (also called a hypothesis test), Type I error is the error of rejecting the null hypothesis when it is true -- of saying an effect or event is statistically significant when it is not.
A time series x(t), t = 1, 2, ..., is considered to be stationary if its statistical properties do not depend on time t.
Data partitioning in data mining is the division of the whole data available into two or three non-overlapping sets: the training set (used to fit the model), the validation set (used to compare models), and the test set (used to assess performance on new data).
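A minimal sketch, assuming a 60/20/20 split of shuffled records (the proportions are illustrative):

```python
import random

def partition(records, seed=42):
    """Shuffle, then split into non-overlapping train/validation/test sets."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train, n_valid = int(0.6 * n), int(0.2 * n)
    return (shuffled[:n_train],                   # fit the model
            shuffled[n_train:n_train + n_valid],  # compare models
            shuffled[n_train + n_valid:])         # assess final performance

train, valid, test = partition(list(range(100)))
print(len(train), len(valid), len(test))  # 60 20 20
```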
Data mining is concerned with finding latent patterns in large databases.
In multivariate analysis, cluster analysis refers to methods used to divide up objects into similar groups, or, more precisely, groups whose members are all close to one another on various dimensions being measured.
In psychology, a construct is a phenomenon or a variable in a model that is not directly observable or measurable - intelligence is a classic example.
Collaborative filtering algorithms are used to predict whether a given individual might like, or purchase, an item, on the basis of the likes and purchases of other users with similar profiles.
Longitudinal data records multiple observations over time for a set of individuals or units. A typical example is a panel survey that follows the same respondents over several years.
Cross-sectional data refer to observations of many different individuals (subjects, objects) at a given time, each observation belonging to a different individual. A simple example is a survey of many households conducted on a single date.
Tokenization is an initial step in natural language processing. It involves breaking down a text into a series of basic units, typically words. For example, the sentence "the dog chased the cat" yields the tokens "the," "dog," "chased," "the," "cat."
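A minimal whitespace-and-punctuation sketch (real tokenizers handle many more cases):

```python
import re

def tokenize(text):
    """Split a text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

print(tokenize("The dog chased the cat."))
# ['the', 'dog', 'chased', 'the', 'cat']
```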
A natural language is what most people outside the field of computer science think of as just a language (Spanish, English, etc.). The term distinguishes human languages from formal computer languages such as SQL or R.
White Hat Bias is bias leading to distortion in, or selective presentation of, data that is considered by investigators or reviewers to be acceptable because it is in the service of righteous goals.