Dependent and Independent Variables:
Statistical models normally specify how one set of variables, called dependent variables, functionally depends on another set of variables, called independent variables. The functional relationship does not necessarily reflect a causal relationship; that is, the independent variables do not necessarily describe a cause.
The terms "dependent" and "independent" here have no direct relation to the statistical dependence of variables or events. The term "(in)dependent" reflects only the functional relationship between variables within a model. Several models based on the same set of variables may differ in how the variables are subdivided into dependent and independent ones.
For example, a simple linear regression model states a linear relationship between body weight and body height, with weight considered the dependent variable:

weight = a + b · height + ε

where a and b are parameters of the model and ε is a random term reflecting statistical uncertainty.
At the same time, another reasonable model may consider body height as the dependent variable and weight as the independent variable:

height = c + d · weight + δ

where c and d are parameters of the second model and δ is a random term.
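To see that the two models are genuinely different, one can fit both on the same data. The sketch below uses a small hypothetical sample of heights and weights (the numbers are illustrative, not from the article) and ordinary least-squares fits in both directions; note that the slope of the second model is not simply the reciprocal of the slope of the first.

```python
import numpy as np

# Hypothetical sample: heights (cm) and weights (kg) for six people.
height = np.array([160.0, 165.0, 170.0, 175.0, 180.0, 185.0])
weight = np.array([55.0, 62.0, 66.0, 70.0, 78.0, 83.0])

# Model 1: weight as the dependent variable -> weight ~ a + b * height
b, a = np.polyfit(height, weight, 1)

# Model 2: height as the dependent variable -> height ~ c + d * weight
d, c = np.polyfit(weight, height, 1)

print(f"weight ~ {a:.2f} + {b:.2f} * height")
print(f"height ~ {c:.2f} + {d:.2f} * weight")

# Swapping the dependent and independent variables gives a different
# fitted line: d differs from 1/b whenever the fit is not perfect.
print(abs(d - 1.0 / b))
```

Each fit minimizes errors in its own dependent variable, which is why the two regression lines do not coincide unless the data lie exactly on a line.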
In other words, the models explain the value of the dependent variable by the values of the independent variables. Therefore, independent variables are often called predictor variables or explanatory variables.
In general, statistical models state a functional relationship between dependent variables and independent variables in the following form:

Yi = fi(X1, ..., Xm),   i = 1, ..., n,

where

Y1, ..., Yn are dependent variables;
X1, ..., Xm are independent variables;
f1, ..., fn are functions of the independent variables, usually including random terms simulating statistical uncertainty.
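As a concrete sketch of this general form, the snippet below simulates data from one such model with a single independent variable, where the function includes a random term, and then refits the model to recover its parameters. All names and numeric values here are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: Y = f(X) = 2 + 0.5 * X + eps,
# where eps is the random term simulating statistical uncertainty.
true_intercept, true_slope = 2.0, 0.5
x = rng.uniform(150.0, 190.0, size=200)      # independent variable
eps = rng.normal(0.0, 3.0, size=200)         # random term
y = true_intercept + true_slope * x + eps    # dependent variable

# Fitting the model to the simulated data recovers parameters
# close to (but not exactly equal to) the true values.
slope, intercept = np.polyfit(x, y, 1)
print(f"estimated: Y ~ {intercept:.2f} + {slope:.2f} * X")
```

Because of the random term, repeated simulations yield slightly different estimates, which is exactly the statistical uncertainty the model is built to express.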