11.1 The normal distribution

Any statistical method for determining a mean (and confidence limit) from a set of observations is based on a probability density function. This function describes the distribution of observations for a hypothetical, infinite set of observations called a population. The Gaussian probability density function (normal distribution) has the familiar bell-shaped form shown in Figure 11.1a. The meaning of the probability density function f(z) is that the proportion of observations within an interval of incremental width dz centered on z is f(z)dz.



Figure 11.1: a) The Gaussian probability density function (normal distribution, Equation 11.1). The proportion of observations within an interval dz centered on z is f(z)dz. b) Histogram of 1000 measurements of bed thickness in a sedimentary formation. Also shown is the smooth curve of a normal distribution with a mean of 10 and a standard deviation of 3. c) Histogram of the means from 100 repeated sets of 1000 measurements from the same sedimentary formation. The distribution of the means is much tighter. d) Histogram of the variances (s²) from the same set of experiments as in c). The distribution of variances is not bell shaped; it is χ².


The Gaussian probability density function is given by:

\[
f(z) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right),
\]
(11.1)

where

\[
z = \frac{x - \mu}{\sigma}.
\]

Here x is the variable measured, μ is the true mean, and σ is the standard deviation. The parameter μ determines the value of x about which the distribution is centered, while σ determines the width of the distribution about the true mean. By performing the required integrals (computing the area under the curve f(z)), it can be shown that 68% of the readings in a normal distribution are within σ of μ, while 95% are within 1.96σ of μ.
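These percentages can be checked numerically. The following sketch (assuming a Python environment with scipy available) obtains them by differencing the cumulative distribution function of the standard normal:

# A minimal check of the quoted coverage figures, integrating the
# standard normal density via its cumulative distribution function.
from scipy.stats import norm

p_1sigma = norm.cdf(1.0) - norm.cdf(-1.0)      # ~0.683
p_196sigma = norm.cdf(1.96) - norm.cdf(-1.96)  # ~0.950

print(f"within 1 sigma of the mean:    {p_1sigma:.3f}")
print(f"within 1.96 sigma of the mean: {p_196sigma:.3f}")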

The usual situation is that one has made a finite number of measurements of a variable x. In the literature of statistics, this set of measurements is referred to as a sample. Suppose we made 1000 measurements of some parameter, say bed thickness (in cm), in a particular sedimentary formation. We plot these in histogram form in Figure 11.1b.

By using the methods of Gaussian statistics, one is supposing that the observed sample has been drawn from a population of observations that is normally distributed. The true mean and standard deviation of the population are, of course, unknown. But the following methods allow estimation of these quantities from the observed sample. A normal distribution can be characterized by two parameters: the mean (μ) and the variance (σ²). How to estimate the parameters of the underlying distribution is the art of statistics. We all know that the arithmetic mean, x̄, of a batch of data drawn from a normal distribution is calculated by:

\[
\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i,
\]
where N is the number of measurements and xi is an individual measurement.

The mean estimated from the data shown in Figure 11.1b is 10.09. If we had measured an infinite number of bed thicknesses, we would have obtained the smooth bell curve shown in Figure 11.1b and calculated a mean of 10.

The “spread” in the data is characterized by the variance, σ². The variance of a normal distribution can be estimated by the statistic s²:

\[
s^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2.
\]
(11.2)

In order to get the units of the spread about the mean right (cm, not cm²), we have to take the square root of s². The statistic s gives an estimate of the standard deviation σ and defines the bounds around the mean that include 68% of the values. The 95% confidence bounds are given by 1.96s (this is what a “2-σ error” is), and should include 95% of the observations. The bell curve shown in Figure 11.1b has a σ (standard deviation) of 3, while s is 2.97.
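As an illustration, the experiment of Figure 11.1b can be simulated. The sketch below (assuming Python with numpy; the random seed is an arbitrary choice) draws 1000 values from a normal distribution with μ = 10 and σ = 3 and computes x̄ and s as defined above:

import numpy as np

rng = np.random.default_rng(0)                   # seeded for reproducibility
x = rng.normal(loc=10.0, scale=3.0, size=1000)   # simulated bed thicknesses (cm)

xbar = x.mean()       # arithmetic mean, estimates mu
s = x.std(ddof=1)     # ddof=1 gives the N-1 denominator of Equation 11.2

# fraction of observations within 1.96*s of the mean (~95% expected)
frac = np.mean(np.abs(x - xbar) < 1.96 * s)
print(f"mean = {xbar:.2f} cm, s = {s:.2f} cm, within 1.96 s: {frac:.1%}")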

If you repeat the bed-measuring experiment a few times, you will never get exactly the same measurements from one trial to the next. The means and standard deviations measured for each trial are therefore “sample” means and standard deviations. If you plotted up all those sample means, you would get another normal distribution whose mean should be pretty close to the true mean, but with a much narrower standard deviation. In Figure 11.1c we plot a histogram of means from 100 such trials of 1000 measurements each, drawn from the same distribution with μ = 10 and σ = 3. In general, we expect the standard deviation of the means (or standard error of the mean, sm) to be related to s by

\[
s_m = \frac{s}{\sqrt{N}}.
\]
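The relation can be verified with a quick simulation. This sketch uses the same assumed parameters as Figure 11.1c (100 trials of N = 1000 measurements, μ = 10, σ = 3) and compares the scatter of the sample means with s/√N:

import numpy as np

rng = np.random.default_rng(0)
N, n_trials = 1000, 100
# one row per trial; the mean of each row is one "sample mean"
means = rng.normal(10.0, 3.0, size=(n_trials, N)).mean(axis=1)

print(f"std of the {n_trials} sample means: {means.std(ddof=1):.3f}")
print(f"predicted sigma/sqrt(N):      {3.0 / np.sqrt(N):.3f}")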

What if we were to plot up a histogram of the estimated variances, as in Figure 11.1d? Are these also normally distributed? The answer is no, because variance is a squared parameter relative to the original units. In fact, the distribution of variance estimates from normal distributions is expected to be chi-squared (χ²). The width of the χ² distribution is also governed by how many measurements were made. The so-called number of degrees of freedom (ν) is given by the number of measurements made minus the number of parameters estimated from the data, so ν for our case is N − 1. Therefore we expect the variance estimates to follow a χ² distribution with N − 1 degrees of freedom, denoted χ²ν.
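This, too, can be checked by simulation. In the sketch below (assumed parameters as before), the scaled variances (N − 1)s²/σ² from many trials are compared against the theoretical mean (ν) and variance (2ν) of a χ² distribution with ν degrees of freedom:

import numpy as np

rng = np.random.default_rng(0)
N, n_trials, sigma = 1000, 5000, 3.0
s2 = rng.normal(10.0, sigma, size=(n_trials, N)).var(axis=1, ddof=1)

scaled = (N - 1) * s2 / sigma**2   # should follow chi-squared with nu = N-1
nu = N - 1
print(f"simulated mean {scaled.mean():.1f} vs chi-squared mean {nu}")
print(f"simulated var  {scaled.var():.1f} vs chi-squared var  {2 * nu}")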

The estimated standard error of the mean, sm, provides a confidence limit for the calculated mean. Of all the possible samples that can be drawn from a particular normal distribution, 95% have means, x̄, that lie within 2sm of μ. (Only 5% of possible samples have means that lie farther than 2sm from μ.) Thus the 95% confidence limit on the calculated mean, x̄, is 2sm, and we are 95% certain that the true mean of the population from which the sample was drawn lies within 2sm of x̄. The estimated standard error of the mean, sm, decreases as 1/√N. Larger samples provide more precise estimates of the true mean; this is reflected in the smaller confidence limit with increasing N.

We often wish to consider ratios of variances derived from normal distributions (for example, to decide whether the data are more scattered in one data set than in another). In order to do this, we must know what ratio would be expected from data sets drawn from the same distribution. Ratios of such variances follow a so-called F distribution with ν1 and ν2 degrees of freedom for the two data sets, denoted F[ν1, ν2]. Thus, if the ratio F, given by:

\[
F = \frac{s_1^2}{s_2^2},
\]
is greater than the 5% critical value of F[ν12] (check the F distribution tables in your favorite statistics book or online), the hypothesis that the two variances are the same can be rejected at the 95% level of confidence.
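Rather than a table lookup, the critical value can be obtained from scipy. The following sketch runs the test on made-up data (the sample sizes and standard deviations are arbitrary choices for illustration):

import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
x1 = rng.normal(10.0, 3.0, size=50)   # hypothetical data set 1
x2 = rng.normal(10.0, 2.0, size=60)   # hypothetical data set 2

F = x1.var(ddof=1) / x2.var(ddof=1)   # ratio of the two sample variances
nu1, nu2 = len(x1) - 1, len(x2) - 1
F_crit = f.ppf(0.95, nu1, nu2)        # 5% critical value of F[nu1, nu2]

if F > F_crit:
    print(f"F = {F:.2f} > {F_crit:.2f}: reject equal variances (95% confidence)")
else:
    print(f"F = {F:.2f} <= {F_crit:.2f}: cannot reject equal variances")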

A test related to the F test is Student’s t-test. This test compares differences between normally distributed data sets and provides a means for judging their significance. Given two sets of measurements of bed thickness, for example in two different sections, the t-test addresses the likelihood that the difference between the two means is significant at a given level of probability. If the estimated means and standard deviations of the two sets of N1 and N2 measurements are x̄1, σ1 and x̄2, σ2, respectively, the t statistic can be calculated by:

\[
t = \frac{\bar{x}_1 - \bar{x}_2}{\sigma(\bar{x}_1 - \bar{x}_2)},
\]
where
\[
\sigma(\bar{x}_1 - \bar{x}_2) = \sqrt{\frac{(N_1 - 1)\sigma_1^2 + (N_2 - 1)\sigma_2^2}{\nu}\left(\frac{1}{N_1} + \frac{1}{N_2}\right)}.
\]
Here ν = N1 + N2 − 2. If the calculated value of t is below the critical value, then the null hypothesis that the two sets of data are drawn from the same distribution cannot be rejected at the given level of confidence. The critical value can be looked up in t-tables in your favorite statistics book or online.
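As a sketch (with illustrative, made-up data), the t statistic and ν can be computed directly from the equations above and cross-checked against scipy’s pooled-variance t-test:

import numpy as np
from scipy.stats import t as t_dist, ttest_ind

rng = np.random.default_rng(0)
x1 = rng.normal(10.0, 3.0, size=40)   # bed thicknesses, section 1 (made up)
x2 = rng.normal(11.5, 3.0, size=50)   # bed thicknesses, section 2 (made up)

N1, N2 = len(x1), len(x2)
nu = N1 + N2 - 2
pooled = ((N1 - 1) * x1.var(ddof=1) + (N2 - 1) * x2.var(ddof=1)) / nu
se = np.sqrt(pooled * (1.0 / N1 + 1.0 / N2))   # sigma(x1bar - x2bar) above
t_calc = (x1.mean() - x2.mean()) / se

t_crit = t_dist.ppf(0.975, nu)        # two-tailed critical value, 95% level
print(f"|t| = {abs(t_calc):.2f}, critical value = {t_crit:.2f}, nu = {nu}")
print(ttest_ind(x1, x2, equal_var=True))  # scipy's equal-variance t-test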