Pairs bootstrap and correlation
[1]:
# Colab setup ------------------
import os, sys, subprocess
if "google.colab" in sys.modules:
    cmd = "pip install --upgrade bebi103 watermark"
    process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
    data_path = "https://s3.amazonaws.com/bebi103.caltech.edu/data/"
else:
    data_path = "../data/"
# ------------------------------
[2]:
import numpy as np
import pandas as pd

import numba

import holoviews as hv
We continue our analysis of the drone sperm quality data set. Let’s load it and remind ourselves of the content.
[3]:
df = pd.read_csv(os.path.join(data_path, "bee_sperm.csv"), comment='#')
df.head()
[3]:
| | Specimen | Treatment | Environment | TreatmentNCSS | Sample ID | Colony | Cage | Sample | Sperm Volume per 500 ul | Quantity | ViabilityRaw (%) | Quality | Age (d) | Infertil | AliveSperm | Quantity Millions | Alive Sperm Millions | Dead Sperm Millions |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 227 | Control | Cage | 1 | C2-1-1 | 2 | 1 | 1 | 2150000 | 2150000 | 96.7263814616756 | 96.726381 | 14 | 0 | 2079617 | 2.1500 | 2.079617 | 0.070383 |
| 1 | 228 | Control | Cage | 1 | C2-1-2 | 2 | 1 | 2 | 2287500 | 2287500 | 96.3498079760595 | 96.349808 | 14 | 0 | 2204001 | 2.2875 | 2.204001 | 0.083499 |
| 2 | 229 | Control | Cage | 1 | C2-1-3 | 2 | 1 | 3 | 87500 | 87500 | 98.75 | 98.750000 | 14 | 0 | 86406 | 0.0875 | 0.086406 | 0.001094 |
| 3 | 230 | Control | Cage | 1 | C2-1-4 | 2 | 1 | 4 | 1875000 | 1875000 | 93.2874208336941 | 93.287421 | 14 | 0 | 1749139 | 1.8750 | 1.749139 | 0.125861 |
| 4 | 231 | Control | Cage | 1 | C2-1-5 | 2 | 1 | 5 | 1587500 | 1587500 | 97.7925061050061 | 97.792506 | 14 | 0 | 1552456 | 1.5875 | 1.552456 | 0.035044 |
Correlation
We might wish to investigate how two measured quantities are correlated. For example, if the number of dead sperm and the number of alive sperm are closely correlated, this would mean that a given drone produces some quantity of sperm and some fraction tend to be dead. Let’s take a look at this.
[4]:
hv.extension("bokeh")

# Only use values greater than zero for log scale
inds = (df["Alive Sperm Millions"] > 0) & (df["Dead Sperm Millions"] > 0)

hv.Points(
    data=df.loc[inds, :],
    kdims=["Alive Sperm Millions", "Dead Sperm Millions"],
    vdims=["Treatment"],
).groupby(
    "Treatment"
).opts(
    logx=True,
    logy=True,
).overlay()
[4]:
(Plot: Dead Sperm Millions vs. Alive Sperm Millions on log-log axes, colored by Treatment.)
There seems to be some correlation (on a log scale), but it is difficult to tell by eye. We can quantify it with the bivariate correlation coefficient, also known as the Pearson correlation. The correlation is the covariance divided by the geometric mean of the individual variances,

$$\rho = \frac{\sigma_{XY}}{\sqrt{\sigma_X^2\,\sigma_Y^2}}.$$

The plug-in estimate of the correlation between the two variables (in this case alive and dead sperm counts) is

$$r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}.$$
The bivariate correlation coefficient is implemented in NumPy as np.corrcoef(), but we will code our own. We will JIT it with Numba for speed.
[5]:
@numba.njit
def bivariate_r(x, y):
    """Compute plug-in estimate for the bivariate correlation coefficient."""
    return (
        np.sum((x - np.mean(x)) * (y - np.mean(y)))
        / np.std(x)
        / np.std(y)
        / np.sqrt(len(x))
        / np.sqrt(len(y))
    )
We can use it to compute the bivariate correlation coefficient for the alive and dead sperm counts.
[6]:
bivariate_r(
    df.loc[inds, "Alive Sperm Millions"].values,
    df.loc[inds, "Dead Sperm Millions"].values,
)
[6]:
0.1785839325544448
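As a quick sanity check (on synthetic data, not part of the original analysis), the plug-in estimate above agrees with NumPy's built-in np.corrcoef(). Note that dividing by √n twice is the same as dividing by n, so the expression reduces to the plug-in covariance over the product of the plug-in standard deviations.

```python
import numpy as np

def bivariate_r_plain(x, y):
    """Pure-NumPy version of bivariate_r (no Numba JIT), for checking."""
    return (
        np.sum((x - np.mean(x)) * (y - np.mean(y)))
        / np.std(x)
        / np.std(y)
        / len(x)
    )

# Synthetic correlated data (illustrative only)
rng = np.random.default_rng(3252)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)

print(np.isclose(bivariate_r_plain(x, y), np.corrcoef(x, y)[0, 1]))  # True
```

The agreement holds because np.std() computes the biased (plug-in) standard deviation by default, matching the plug-in covariance in the numerator.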
Pairs bootstrap confidence intervals
How can we get a confidence interval on a correlation coefficient? We can again apply the bootstrap, but this time, the replicate is a pair of data, in this case a dead sperm count/alive sperm count pair. The process of drawing pairs of data points from an experiment and then computing bootstrap replicates from them is called pairs bootstrap. Let’s code it up for this example with the bivariate correlation.
Our strategy in coding up the pairs bootstrap is to draw bootstrap samples of the indices of measurement and use those indices to select the pairs.
[7]:
@numba.njit
def draw_bs_sample(data):
    """Draw a bootstrap sample from a 1D data set."""
    return np.random.choice(data, size=len(data))


@numba.njit
def draw_bs_pairs(x, y):
    """Draw a pairs bootstrap sample."""
    inds = np.arange(len(x))
    bs_inds = draw_bs_sample(inds)
    return x[bs_inds], y[bs_inds]
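To see why we resample indices rather than resampling each array separately, note that index resampling keeps each (alive, dead) measurement pair intact; resampling the arrays independently would scramble the pairings and destroy the correlation we are trying to estimate. A minimal illustration with made-up data (plain NumPy, no JIT):

```python
import numpy as np

def draw_bs_pairs_plain(x, y, rng):
    # Same index-resampling logic as draw_bs_pairs, without Numba
    bs_inds = rng.integers(0, len(x), size=len(x))
    return x[bs_inds], y[bs_inds]

rng = np.random.default_rng(3252)
x = np.arange(10.0)
y = 10.0 * x  # each y value is paired with its x value

bs_x, bs_y = draw_bs_pairs_plain(x, y, rng)
print(np.all(bs_y == 10.0 * bs_x))  # pairings preserved: True
```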
With our pairs sampling function in place, we can write a function to compute replicates.
[8]:
@numba.njit
def draw_bs_pairs_reps_bivariate(x, y, size=1):
    """Draw bootstrap pairs replicates."""
    out = np.empty(size)
    for i in range(size):
        out[i] = bivariate_r(*draw_bs_pairs(x, y))
    return out
Finally, we can put it all together to compute confidence intervals on the correlation. To start, we extract all of the relevant measurements as Numpy arrays to allow for faster resampling (and that’s what our Numba’d functions require).
[9]:
# Extract NumPy arrays (only use values greater than zero for logs)
inds = (df["Alive Sperm Millions"] > 0) & (df["Dead Sperm Millions"] > 0)
alive_ctrl = df.loc[
    inds & (df["Treatment"] == "Control"), "Alive Sperm Millions"
].values

alive_pest = df.loc[
    inds & (df["Treatment"] == "Pesticide"), "Alive Sperm Millions"
].values

dead_ctrl = df.loc[
    inds & (df["Treatment"] == "Control"), "Dead Sperm Millions"
].values

dead_pest = df.loc[
    inds & (df["Treatment"] == "Pesticide"), "Dead Sperm Millions"
].values
Now we can compute the bootstrap replicates using our draw_bs_pairs_reps_bivariate() function.
[10]:
# Get reps
bs_reps_ctrl = draw_bs_pairs_reps_bivariate(
    np.log(alive_ctrl), np.log(dead_ctrl), size=10000
)

bs_reps_pest = draw_bs_pairs_reps_bivariate(
    np.log(alive_pest), np.log(dead_pest), size=10000
)
And from the replicates, we can compute and print the 95% confidence interval.
[11]:
# Get the confidence intervals
conf_int_ctrl = np.percentile(bs_reps_ctrl, [2.5, 97.5])
conf_int_pest = np.percentile(bs_reps_pest, [2.5, 97.5])
print(
    """
Correlation in control conf int:   [{0:.2f}, {1:.2f}]
Correlation in pesticide conf int: [{2:.2f}, {3:.2f}]
""".format(
        *tuple(conf_int_ctrl), *tuple(conf_int_pest)
    )
)
Correlation in control conf int: [0.44, 0.70]
Correlation in pesticide conf int: [0.28, 0.66]
We see a clear correlation in both samples, with a wide, but positive, confidence interval. Note that we did this analysis on a log scale, since the data span several orders of magnitude.
Computing environment
[12]:
%load_ext watermark
%watermark -v -p numpy,pandas,numba,bokeh,holoviews,bebi103,jupyterlab
Python implementation: CPython
Python version : 3.8.12
IPython version : 7.27.0
numpy : 1.21.2
pandas : 1.3.3
numba : 0.53.1
bokeh : 2.3.3
holoviews : 1.14.6
bebi103 : 0.1.8
jupyterlab: 3.1.7