Plug-in estimates
As we have discussed, experimental observations are produced by an underlying generative probability distribution. If we have a full understanding of the generative distribution, we have learned how the data were generated, and thereby have an understanding of the physical, chemical, or biological phenomena we are studying. Statistical inference involves deducing properties of these (unknown) generative distributions.
In this lecture, we will start with nonparametric inference, which is statistical inference where no model is assumed; conclusions are drawn from the data alone. The approach we will take is heavily inspired by Allen Downey’s wonderful book, `Think Stats <http://greenteapress.com/thinkstats2/index.html>`_, and by Larry Wasserman’s book `All of Statistics <https://link.springer.com/book/10.1007/978-0-387-21736-9>`_.
The plug-in principle
Let’s first think about how to get an estimate for a parameter value, given the data. While what we are about to do is general, for now it is useful to have in your mind a concrete example. Imagine we have a data set that is a set of repeated measurements, such as repeated measurements of the lengths of eggs laid by C. elegans worms of a given genotype.
We could have a generative model in mind, and we will take that approach in coming lessons. For now, though, we will assume only that there is some generative distribution. Let \(F(x)\) be the cumulative distribution function (CDF) for the distribution. Remember that the probability density function (PDF), \(f(x)\), is related to the CDF by

\begin{align}
f(x) = \frac{\mathrm{d}F}{\mathrm{d}x}.
\end{align}
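As a quick numerical check of this relationship, the short sketch below differentiates the CDF of a known distribution on a grid and compares the result to the PDF. The choice of a Normal distribution with made-up parameters is purely illustrative and not an assumption about any real data set.

.. code-block:: python

    import numpy as np
    import scipy.stats

    # Illustrative stand-in for a generative distribution: a Normal
    # distribution with (made-up) mean 50 and standard deviation 3
    dist = scipy.stats.norm(50, 3)

    # Evaluate the CDF on a fine grid and differentiate numerically
    x = np.linspace(40, 60, 1001)
    F = dist.cdf(x)
    f_numerical = np.gradient(F, x)

    # The numerical derivative of the CDF recovers the PDF
    print(np.allclose(f_numerical, dist.pdf(x), atol=1e-4))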
A statistical functional is a functional of the CDF, \(T(F)\). A parameter \(\theta\) of a probability distribution can be defined from a functional, \(\theta = T(F)\). For example, the mean, variance, and median are all statistical functionals.
Now, say we made a set of \(n\) measurements, \(\{x_1, x_2, \ldots, x_n\}\). You can think of this as a set of C. elegans egg lengths if you want to have an example in your mind. We define the empirical cumulative distribution function, \(\hat{F}(x)\), from our data as

\begin{align}
\hat{F}(x) = \frac{1}{n}\sum_{i=1}^n I(x_i \le x),
\end{align}

with

\begin{align}
I(x_i \le x) = \begin{cases}
1 & x_i \le x, \\
0 & \text{otherwise}.
\end{cases}
\end{align}
We have already seen this form of the ECDF when we were studying exploratory data analysis. We can then differentiate the ECDF to get the empirical density function, \(\hat{f}(x)\), as

\begin{align}
\hat{f}(x) = \frac{1}{n}\sum_{i=1}^n \delta(x - x_i),
\end{align}
where \(\delta(x)\) is the Dirac delta function.
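The ECDF is easy to compute directly from the measurements. Below is a minimal sketch of one way to evaluate \(\hat{F}(x)\) at arbitrary points; the function name ecdf and the egg-length numbers are made up for illustration, not taken from a real data set.

.. code-block:: python

    import numpy as np

    def ecdf(x, data):
        """Evaluate the ECDF built from `data` at the points `x`."""
        data = np.sort(data)
        # Fraction of measurements that are less than or equal to each x
        return np.searchsorted(data, x, side="right") / len(data)

    # Hypothetical egg length measurements (made-up numbers, in µm)
    lengths = np.array([52.1, 50.3, 55.8, 53.2, 51.7, 54.0])

    print(ecdf([50.0, 53.0, 56.0], lengths))  # [0.   0.5  1.  ]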
With the ECDF (and empirical density function), we have now defined an empirical distribution that is dependent only on the data. We now define a plug-in estimate of a parameter \(\theta\) as

\begin{align}
\hat{\theta} = T(\hat{F}).
\end{align}
In other words, to get a plug-in estimate of a parameter \(\theta\), we need only compute the functional using the empirical distribution. That is, we simply “plug in” the empirical CDF for the actual CDF.
The plug-in estimate for the median is easy to calculate,

\begin{align}
\hat{\text{median}} = \hat{F}^{-1}(1/2),
\end{align}

or the middle-ranked data point.
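For concreteness, here is a minimal sketch of the median plug-in estimate on a small made-up data set (the numbers are illustrative only). For an odd number of measurements it is exactly the middle-ranked value; np.median() averages the two middle values when \(n\) is even.

.. code-block:: python

    import numpy as np

    # Hypothetical egg length measurements (made-up numbers, in µm)
    lengths = np.array([52.1, 50.3, 55.8, 53.2, 51.7, 54.0, 52.8])

    # Plug-in estimate for the median: the middle-ranked data point
    print(np.median(lengths))                    # 52.8
    print(np.sort(lengths)[len(lengths) // 2])   # 52.8 (odd n)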
The plug-in estimates for the mean and variance seem at face value to be a bit more difficult to calculate, but the following general theorem will help. Consider a functional that has the form of an expectation value of a function \(r(x)\),

\begin{align}
T(F) = \int \mathrm{d}x\, r(x)\, f(x).
\end{align}

Its plug-in estimate is

\begin{align}
T(\hat{F}) = \int \mathrm{d}x\, r(x)\, \hat{f}(x) = \frac{1}{n}\sum_{i=1}^n \int \mathrm{d}x\, r(x)\, \delta(x - x_i) = \frac{1}{n}\sum_{i=1}^n r(x_i).
\end{align}
A functional of this form is called a linear statistical functional. The result above means that the plug-in estimate for a linear statistical functional is the arithmetic mean of \(r(x)\) evaluated at the observed data points. The plug-in estimate of the mean, which has \(r(x) = x\), is

\begin{align}
\hat{\mu} = \frac{1}{n}\sum_{i=1}^n x_i \equiv \bar{x},
\end{align}

where we have defined \(\bar{x}\) as the traditional sample mean (the arithmetic mean of the measured data), which we have just shown is the plug-in estimate. This plug-in estimate is implemented in the np.mean() function. The plug-in estimate for the variance is

\begin{align}
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2.
\end{align}

This plug-in estimate is implemented in the np.var() function.
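As a minimal usage sketch (on the same kind of made-up egg length numbers as above), note that np.var() with its default ddof=0 divides by \(n\), which is exactly the plug-in estimate rather than the unbiased sample variance.

.. code-block:: python

    import numpy as np

    # Hypothetical egg length measurements (made-up numbers, in µm)
    lengths = np.array([52.1, 50.3, 55.8, 53.2, 51.7, 54.0, 52.8])

    # Plug-in estimates of the mean and variance
    mu_hat = np.mean(lengths)
    sigma2_hat = np.var(lengths)  # default ddof=0 divides by n

    # Same results as computing the formulas directly
    print(np.isclose(mu_hat, np.sum(lengths) / len(lengths)))
    print(np.isclose(sigma2_hat, np.sum((lengths - mu_hat)**2) / len(lengths)))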
Note that we are denoting the mean and variance as \(\mu\) and \(\sigma^2\), but these are not in general the like-named parameters of a Normal distribution. Any distribution has a first moment (called a mean) and a second central moment (called a variance), unless they do not exist, as is the case, e.g., with a Cauchy distribution. In this context, \(\mu\) and \(\sigma^2\) denote the mean and variance of the unknown underlying univariate generative distribution.
We can compute plug-in estimates for more complicated parameters as well. For example, for a bivariate distribution, the correlation between the two variables \(x\) and \(y\) is defined as

\begin{align}
\rho = \frac{\left\langle (x - \mu_x)(y - \mu_y)\right\rangle}{\sigma_x \sigma_y},
\end{align}

where the expectation in the numerator is called the covariance between \(x\) and \(y\). The correlation is of large magnitude if \(x\) and \(y\) vary together and close to zero if they are nearly independent of each other. The plug-in estimate for the correlation is

\begin{align}
\hat{\rho} = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\left(\sum_{i=1}^n (x_i - \bar{x})^2\right)\left(\sum_{i=1}^n (y_i - \bar{y})^2\right)}}.
\end{align}
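As a minimal sketch, the plug-in estimate of the correlation can be computed directly from this formula, or with np.corrcoef(), which gives the same value because the factors of \(1/n\) cancel. The paired numbers below are made up for illustration.

.. code-block:: python

    import numpy as np

    # Hypothetical paired measurements, e.g., egg length and egg width
    # (made-up numbers, in µm)
    x = np.array([52.1, 50.3, 55.8, 53.2, 51.7, 54.0, 52.8])
    y = np.array([33.0, 31.9, 35.2, 33.8, 32.5, 34.1, 33.4])

    # Plug-in estimate of the correlation, computed from its definition
    x_dev = x - np.mean(x)
    y_dev = y - np.mean(y)
    rho_hat = np.sum(x_dev * y_dev) / np.sqrt(np.sum(x_dev**2) * np.sum(y_dev**2))

    print(np.isclose(rho_hat, np.corrcoef(x, y)[0, 1]))  # True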