**Variance**

**Variance** is a characteristic (or parameter) of a set of data, or a
series of answers. From a statistical perspective, variance is one measure of
how cases are distributed within a range, in other words, of how much difference there is among responses.
If every case has the same value on some measure, then the variance is 0. Otherwise there will be some level of variance. Many statistical procedures use variance or related parameters (frequently standard deviation) to assess similarities or differences between or among classifications (groups).

Variance is the sum of the squared differences between each score and the mean average of all scores, divided by the number of scores.

**V = [ ∑ ( X_{i} - MA )^{2} ] / N**

Where:

**V** is the variance,

**X**_{i} is each of the scores,

**MA** is the mean average, and

**N** is the number of scores.

There are other formulas that can be used to make the calculation in fewer steps, and there are specialized formulas for calculating an estimate of the population parameter when using samples drawn from a population. But since this is not a statistics course, we can leave all that for another day. For those interested in more detail, take a look at Wikipedia's article.
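The formula above can be checked directly in a few lines of Python. The scores below are made up for illustration; the result is compared against the standard library's `statistics.pvariance`, which uses the same population formula (dividing by N).

```python
import math
import statistics

# Hypothetical example scores (not from the original text).
scores = [4, 8, 6, 5, 7]

N = len(scores)
MA = sum(scores) / N                          # mean average
V = sum((x - MA) ** 2 for x in scores) / N    # population variance

# The standard library agrees: pvariance also divides by N.
assert V == statistics.pvariance(scores)

SD = math.sqrt(V)  # standard deviation is the square root of variance
print(V)   # → 2.0
```

Note that `statistics.variance` (without the "p") divides by N - 1 instead; that is the sample estimate of the population parameter mentioned above.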

**Standard deviation is the square root of variance. **

**Explained and Unexplained variance**: Variance is frequently
partitioned into that which can be attributed to a specific condition (explained) and
that which is assigned to other unmeasured conditions (unexplained).

This division is frequently referred to as "explained variance" and "unexplained variance." The higher the explained variance relative to the total variance, the stronger the effect of an identified variable. The proportions of variance explained, as compared with either the total variance or the unexplained portion, are valuable tools in many statistics, including regression, ANOVA, and t-tests. Now, we need to look at the "unexplained variance."
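The partition into explained and unexplained variance can be sketched with a simple least-squares line fit. The data and variable names below are illustrative, not from the text; the key point is that the total sum of squared deviations splits exactly into a part explained by the fitted line and a residual (unexplained) part, and their ratio is the familiar R².

```python
# Toy data: x is the identified variable, y is the measured response.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Ordinary least-squares slope and intercept for y ≈ a + b*x.
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx
fitted = [a + b * x for x in xs]

# Partition the total sum of squares around the mean of y.
total = sum((y - my) ** 2 for y in ys)                       # total variation
explained = sum((f - my) ** 2 for f in fitted)               # due to the line
unexplained = sum((y - f) ** 2 for y, f in zip(ys, fitted))  # residual

# For least squares these add up: total = explained + unexplained.
r_squared = explained / total
print(round(r_squared, 4))  # → 0.6
```

Here 60% of the variance in `y` is explained by `x`; the remaining 40% is the unexplained variance discussed next.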

The unexplained variance can be further partitioned into two parts. Some part of the unexplained variance is due to random, everyday, normal differences in a population or sample. There is nothing we can do about it, and that's OK, because with aggregation of data these conditions are assumed to even out. This assumption is necessary in much research. It is captured by the phrase: "all other unmeasured variations are equal in a well-designed project."

Then there is the second "unexplained" variance, which comes from some condition that has not been identified but that is systematic. This variance, since it is consistent with some specific condition, introduces a bias.

When we examine cause and effect, such a bias can lead to a false conclusion precisely because it has not been identified.