Model comparison in practice¶
[1]:
# Colab setup ------------------
import os, sys, subprocess
if "google.colab" in sys.modules:
cmd = "pip install --upgrade colorcet bebi103 arviz cmdstanpy watermark"
process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
import cmdstanpy; cmdstanpy.install_cmdstan()
data_path = "https://s3.amazonaws.com/bebi103.caltech.edu/data/"
else:
data_path = "../data/"
# ------------------------------
import numpy as np
import pandas as pd
import cmdstanpy
import arviz as az
import bebi103
import bokeh.io
import bokeh.plotting
bokeh.io.output_notebook()
When comparing models, posterior predictive checks are a must. As discussed in the previous lesson, we can use information criteria to assess the relative predictive effectiveness of models. In order to understand this lesson, you will need to have carefully read and understood the previous lesson on the theory behind model comparison.
The key quantity we compute for model comparison is the expected log pointwise predictive density, or elpd. There are a few key ideas and assumptions in using the elpd:
- The elpd is an approximation of the difference in the Kullback-Leibler divergence between the posterior predictive distribution and the true generative distribution.
- For our set of measurements \(y = (y_1, y_2, \ldots y_N)\), where each \(y_i\) may be multidimensional (as you would have, for example, for beak length/beak depth measurements of a single finch), we assume that the \(y_i\)'s are independently distributed, both in the model and in the true generative distribution.
With these assumptions, we can approximately compute the elpd using the Watanabe-Akaike information criterion (WAIC) or leave-one-out cross validation (LOO). See this paper by Vehtari, Gelman, and Gabry (arXiv version) for more details about the implementation. As described in that paper, LOO, when computed using Pareto-smoothed importance sampling, is the preferred method for computing an approximate elpd.
Importantly, the (approximate) elpd by itself is not terribly useful in assessing a model. The elpd of one prospective model needs to be compared to that of another. For this comparison, we can compute Akaike weights. This is the most straightforward calculation of the relative weights of the respective models, and perhaps the easiest to understand. However, it may not be the best way to assess the predictive capabilities of a model, especially in situations where the true generative model is not known (which is often the case for us as scientists). When we are thinking about generative models and are not sure which model best generates the observed data, it is useful to think about model averaging if our aim is to be predictive. The idea here is that we do not know which model generates the data, so instead we try to find the combination of the models we are considering that best describes the data. The respective weights of the models give their contributions to this combination. As a scientist, I tend to shy away from model averaging; I am rather seeking to understand how nature generates the observations I see, and nature is not trying to predict, nor to average models. However, taking a model averaging approach with an eye toward optimizing predictive performance leads to more robust estimates of model weights, as outlined in this paper by Yao and coworkers, which describes a technique known as stacking.
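To make the idea of model averaging concrete, if we are considering models \(M_1, \ldots, M_K\) with weights \(w_1, \ldots, w_K\), the model-averaged posterior predictive distribution is the mixture
\begin{align} f_\mathrm{avg}(\tilde{y}\mid y) = \sum_{k=1}^K w_k\,f(\tilde{y}\mid y, M_k), \qquad \sum_{k=1}^K w_k = 1. \end{align}
The idea of stacking is to choose the weights so that this mixture best predicts held-out data, rather than scoring each model in isolation.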
In this tutorial, we will demonstrate how to calculate model weights both by using Akaike weights (and variants thereof) and by stacking. Conveniently, this may be done approximately, directly from samples out of the posterior distributions. The ArviZ package provides much of the functionality we need.
An example model comparison¶
To show how we can do a model comparison, we will again consider the data set from Singer and coworkers, in which they performed RNA FISH to determine the copy numbers of RNA transcripts of specific genes in individual cells in their samples. The data set can be downloaded here.
We will work with the Rex1 gene. In previous lessons, we considered two models. First, a model in which transcript counts are generated from a single Negative Binomial distribution. Second, a mixture model in which the transcript counts are generated from two Negative Binomial distributions. Note that this is a purely academic exercise, since the first model will fail posterior predictive checks quite spectacularly. We would never really come to this point where we needed to do a model comparison, but we proceed to demonstrate how it is done in practice.
Computing the pointwise log likelihood¶
Recalling from lecture, the elpd is the logarithm of the posterior predictive distribution, \(f(\tilde{y}_i\mid y)\), averaged over the true generative distribution \(f_t(\tilde{y}_i)\) and summed over all of the measurements,
\begin{align} \text{elpd} = \sum_{i=1}^N\int\mathrm{d}\tilde{y}_i\,\,f_t(\tilde{y}_i)\,\ln f(\tilde{y}_i\mid y). \end{align}
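In practice, we cannot average over the unknown true distribution \(f_t\); instead, we estimate the elpd from our posterior samples. Following Vehtari, Gelman, and Gabry, the WAIC estimate built from posterior draws \(\theta^{(1)}, \ldots, \theta^{(S)}\) is
\begin{align} \widehat{\text{elpd}}_\text{waic} = \sum_{i=1}^N \ln\left(\frac{1}{S}\sum_{s=1}^S f(y_i\mid \theta^{(s)})\right) - \sum_{i=1}^N \text{Var}_s\left[\ln f(y_i\mid \theta^{(s)})\right], \end{align}
where the first term is the log pointwise predictive density computed from the samples and the second is the effective number of parameters, \(p_\text{waic}\).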
When generating our samples, we therefore also need to compute samples of the value of the log likelihood. To do this, for each set of parameters \(\theta\) that we sample out of the posterior, we compute the pointwise log likelihood of the data set, using the parameters \(\theta\). The Stan code below includes these log likelihood samples as well as posterior predictive checks for the single Negative Binomial model. Read the code carefully.
data {
  int<lower=0> N;
  int<lower=0> n[N];
}


parameters {
  real log10_alpha;
  real log10_b;
}


transformed parameters {
  real alpha = 10^log10_alpha;
  real b = 10^log10_b;
  real beta_ = 1.0 / b;
}


model {
  // Priors
  log10_alpha ~ normal(0, 1);
  log10_b ~ normal(2, 1);

  // Likelihood
  n ~ neg_binomial(alpha, beta_);
}


generated quantities {
  int n_ppc[N];
  real log_lik[N];

  // Draw posterior predictive data set
  for (i in 1:N) {
    n_ppc[i] = neg_binomial_rng(alpha, beta_);
  }

  // Compute pointwise log likelihood
  for (i in 1:N) {
    log_lik[i] = neg_binomial_lpmf(n[i] | alpha, beta_);
  }
}
In the array log_lik, I store the pointwise log likelihood. That is, for each measurement (in this case for each \(n_i\)), I compute the log likelihood for that data point using the parameters (in this case alpha and beta_) that I sampled out of the posterior. Conveniently, Stan's distributions all have a function ending in _lpdf that computes the log probability density function for the distribution (with _lpmf for discrete distributions, which computes the log probability mass function).
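As an aside, you could compute the same pointwise log likelihood in Python as a check on the Stan code. Here is a minimal sketch using scipy.stats, assuming that Stan's neg_binomial(alpha, beta) corresponds to SciPy's nbinom with r = alpha and p = beta / (1 + beta); the parameter values in the example call are made up purely for illustration.

import numpy as np
import scipy.stats

def log_like_neg_binom(n, alpha, b):
    """Pointwise log likelihood for a single Negative Binomial.
    Stan's neg_binomial(alpha, beta) with beta = 1/b maps to
    scipy.stats.nbinom with r = alpha and p = beta / (1 + beta)."""
    beta = 1.0 / b
    return scipy.stats.nbinom.logpmf(n, alpha, beta / (1 + beta))

# Log likelihood of each count for one (hypothetical) posterior draw
log_lik = log_like_neg_binom(np.array([10, 150, 300]), alpha=3.0, b=80.0)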
Let’s sample out of this generative model, keeping samples of posterior predictive data sets and pointwise log likelihoods.
[2]:
# Load data and make data dictionary
df = pd.read_csv(os.path.join(data_path, "singer_transcript_counts.csv"), comment="#")
data = {"N": len(df), "n": df["Rex1"].values.astype(int)}
# Compile the model
sm = cmdstanpy.CmdStanModel(stan_file="neg_binom.stan")
# Perform sampling
samples = sm.sample(data=data)
INFO:cmdstanpy:compiling stan program, exe file: /Users/bois/Dropbox/git/bebi103_course/2021/b/content/lessons/19/neg_binom
INFO:cmdstanpy:compiler options: stanc_options=None, cpp_options=None
INFO:cmdstanpy:compiled model file: /Users/bois/Dropbox/git/bebi103_course/2021/b/content/lessons/19/neg_binom
INFO:cmdstanpy:start chain 1
INFO:cmdstanpy:start chain 2
INFO:cmdstanpy:start chain 3
INFO:cmdstanpy:start chain 4
INFO:cmdstanpy:finish chain 2
INFO:cmdstanpy:finish chain 4
INFO:cmdstanpy:finish chain 1
INFO:cmdstanpy:finish chain 3
When we convert the samples to an ArviZ object, we need to specify the posterior predictive and log likelihood.
[3]:
# Convert to ArviZ object
samples = az.from_cmdstanpy(
    posterior=samples, posterior_predictive="n_ppc", log_likelihood="log_lik"
)
Now, we will check the diagnostics (as we always should) and make a corner plot.
[4]:
# Check diagnostics
bebi103.stan.check_all_diagnostics(samples)
# Make a corner plot
bokeh.io.show(bebi103.viz.corner(samples, parameters=['alpha', 'b']))
Effective sample size looks reasonable for all parameters.
Rhat looks reasonable for all parameters.
0 of 4000 (0.0%) iterations ended with a divergence.
0 of 4000 (0.0%) iterations saturated the maximum tree depth of 10.
E-BFMI indicated no pathological behavior.
Everything looks good. Actually, it doesn’t. The sampling looks good, but we should do posterior predictive checks.
[5]:
n_ppc = samples.posterior_predictive.n_ppc.stack(
    {"sample": ("chain", "draw")}
).transpose("sample", "n_ppc_dim_0")

bokeh.io.show(
    bebi103.viz.predictive_ecdf(
        n_ppc,
        name="n_ppc",
        data=df["Rex1"].values,
        x_axis_label="mRNA copy number",
    )
)
We have clearly failed the posterior predictive checks. We could stop here, but we will continue on to compute the WAIC and LOO for illustrative purposes. As we do that, let's take a quick look at the output so we can see how the log likelihood samples are organized. The log likelihood is stored in ArviZ InferenceData objects in the log_likelihood attribute.
[6]:
samples.log_likelihood
[6]:
<xarray.Dataset>
Dimensions:        (chain: 4, draw: 1000, log_lik_dim_0: 279)
Coordinates:
  * chain          (chain) int64 0 1 2 3
  * draw           (draw) int64 0 1 2 3 4 5 6 7 ... 993 994 995 996 997 998 999
  * log_lik_dim_0  (log_lik_dim_0) int64 0 1 2 3 4 5 ... 273 274 275 276 277 278
Data variables:
    log_lik        (chain, draw, log_lik_dim_0) float64 -5.728 -5.918 ... -6.886
Attributes:
    created_at:                  2021-01-25T07:40:05.290491
    arviz_version:               0.11.0
    inference_library:           cmdstanpy
    inference_library_version:   0.9.67
The log likelihood is three dimensional, one dimension each for chain and draw, and then the final dimension is for the pointwise log-likelihood estimates.
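If you want the pointwise log likelihood as a two-dimensional array with one row per posterior draw and one column per measurement (the "4000 by 279 log-likelihood matrix" ArviZ will report below), you can stack the chain and draw dimensions, just as we did for the posterior predictive samples. A minimal sketch:

log_lik = samples.log_likelihood.log_lik.stack(
    {"sample": ("chain", "draw")}
).transpose("sample", "log_lik_dim_0")

# Shape is (4000, 279): 4 chains × 1000 draws by 279 measurements
log_lik.shape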
Given that the log likelihood is stored in the ArviZ object, we can directly use the samples to compute the WAIC and LOO.
Computing the WAIC and LOO¶
We will start with the WAIC. We use the scale="deviance" kwarg to get the WAIC; by default, az.waic() will return the estimate for the elpd.
[7]:
az.waic(samples, scale="deviance")
[7]:
Computed from 4000 by 279 log-likelihood matrix
Estimate SE
deviance_waic 3281.45 17.28
p_waic 2.06 -
The output gives an estimate for the WAIC, along with the contribution of \(p_\mathrm{waic}\). It also gives an estimate of the standard error in the WAIC.
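If you skip the scale kwarg, az.waic() reports the elpd directly (on the log scale); the two presentations differ only by the factor of −2 discussed below. A quick sketch:

# Default (log) scale: az.waic() returns elpd_waic, related to the
# deviance-scale value above by WAIC = -2 * elpd_waic,
# so elpd_waic ≈ -3281.45 / 2 ≈ -1640.7 here.
az.waic(samples)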
Let’s now compute the LOO.
[8]:
single_loo = az.loo(samples, scale="deviance")
single_loo
[8]:
Computed from 4000 by 279 log-likelihood matrix
Estimate SE
deviance_loo 3281.46 17.28
p_loo 2.06 -
We see that the LOO and WAIC give almost identical results (as they should). Remember, though, that LOO has better performance across a wider variety of models.
Calculations with the mixture model¶
Now, let's do the same calculation for the mixture model. In this case, we do not have a built-in distribution with an _lpmf function we can use. Fortunately, the log_mix() function in Stan accomplishes exactly what we need, just as it did when we added it to target in the model specification.
data {
  int<lower=0> N;
  int<lower=0> n[N];
}


parameters {
  vector<lower=0>[2] alpha;
  vector<lower=0>[2] b;
  real<lower=0, upper=1> w;
}


transformed parameters {
  vector[2] beta_ = 1.0 ./ b;
}


model {
  // Priors
  alpha ~ lognormal(0.0, 2.3);
  b ~ lognormal(4.6, 2.3);
  w ~ beta(1.0, 1.0);

  // Likelihood
  for (n_val in n) {
    target += log_mix(
      w,
      neg_binomial_lpmf(n_val | alpha[1], beta_[1]),
      neg_binomial_lpmf(n_val | alpha[2], beta_[2])
    );
  }
}


generated quantities {
  int n_ppc[N];
  real log_lik[N];

  // Posterior predictive checks
  for (i in 1:N) {
    if (uniform_rng(0.0, 1.0) < w) {
      n_ppc[i] = neg_binomial_rng(alpha[1], beta_[1]);
    }
    else {
      n_ppc[i] = neg_binomial_rng(alpha[2], beta_[2]);
    }
  }

  // Pointwise log likelihood
  for (i in 1:N) {
    log_lik[i] = log_mix(
      w,
      neg_binomial_lpmf(n[i] | alpha[1], beta_[1]),
      neg_binomial_lpmf(n[i] | alpha[2], beta_[2])
    );
  }
}
Let’s compile the model and get some samples.
[9]:
sm_mix = cmdstanpy.CmdStanModel(stan_file="neg_binom_mix.stan")
samples_mix = sm_mix.sample(data=data, seed=3252)
samples_mix = az.from_cmdstanpy(
    posterior=samples_mix, posterior_predictive="n_ppc", log_likelihood="log_lik"
)
INFO:cmdstanpy:compiling stan program, exe file: /Users/bois/Dropbox/git/bebi103_course/2021/b/content/lessons/19/neg_binom_mix
INFO:cmdstanpy:compiler options: stanc_options=None, cpp_options=None
INFO:cmdstanpy:compiled model file: /Users/bois/Dropbox/git/bebi103_course/2021/b/content/lessons/19/neg_binom_mix
INFO:cmdstanpy:start chain 1
INFO:cmdstanpy:start chain 2
INFO:cmdstanpy:start chain 3
INFO:cmdstanpy:start chain 4
INFO:cmdstanpy:finish chain 2
INFO:cmdstanpy:finish chain 1
INFO:cmdstanpy:finish chain 3
INFO:cmdstanpy:finish chain 4
We’ll do our usual diagnostic checks and make a corner plot.
[10]:
# Check diagnostics
bebi103.stan.check_all_diagnostics(samples_mix)
# Make corner plot
bokeh.io.show(
    bebi103.viz.corner(
        samples_mix,
        parameters=["alpha[0]", "alpha[1]", "b[0]", "b[1]", "w"],
        xtick_label_orientation=np.pi / 4,
    )
)
ESS for parameter alpha[0] is 11.05622775669144.
tail-ESS for parameter alpha[0] is 31.34001923967729.
ESS for parameter alpha[1] is 14.051426926943527.
ESS for parameter b[0] is 7.247819981716271.
tail-ESS for parameter b[0] is 33.85742840462245.
ESS for parameter b[1] is 7.579851567980297.
tail-ESS for parameter b[1] is 42.50366279506183.
ESS for parameter w is 7.202699298500068.
tail-ESS for parameter w is 27.51572828743482.
ESS for parameter beta_[0] is 7.247804869144611.
tail-ESS for parameter beta_[0] is 33.857428404622524.
ESS for parameter beta_[1] is 7.5798492516112725.
tail-ESS for parameter beta_[1] is 42.50366279506098.
ESS or tail-ESS below 100 per chain indicates that expectation values
computed from samples are unlikely to be good approximations of the
true expectation values.
Rhat for parameter alpha[0] is 1.2799139759186011.
Rhat for parameter alpha[1] is 1.212337057904505.
Rhat for parameter b[0] is 1.528383527268928.
Rhat for parameter b[1] is 1.4904136768480312.
Rhat for parameter w is 1.5314071761650492.
Rhat for parameter beta_[0] is 1.5283841263522218.
Rhat for parameter beta_[1] is 1.4904137267917061.
Rank-normalized Rhat above 1.01 indicates that the chains very likely have not mixed.
0 of 4000 (0.0%) iterations ended with a divergence.
0 of 4000 (0.0%) iterations saturated the maximum tree depth of 10.
E-BFMI indicated no pathological behavior.
Oof! We failed the ESS and Rhat diagnostics. Recall that sampling from this model is tricky because of label switching. To protect against label switching, we will initialize the walkers near where chain 1 sampled.
[11]:
# Compute mean of parameters for chain 1
inits = {
    "alpha": [
        float(samples_mix.posterior.sel(chain=1, alpha_dim_0=0).alpha.mean()),
        float(samples_mix.posterior.sel(chain=1, alpha_dim_0=1).alpha.mean()),
    ],
    "b": [
        float(samples_mix.posterior.sel(chain=1, b_dim_0=0).b.mean()),
        float(samples_mix.posterior.sel(chain=1, b_dim_0=0).b.mean()),
    ],
    "w": float(samples_mix.posterior.sel(chain=1).w.mean()),
}
# Take a look
inits
[11]:
{'alpha': [3.2135633199999996, 5.2446836],
'b': [6.186374756, 6.186374756],
'w': 0.17275506890000003}
Now we’ll use this as our initial walker positions.
[12]:
# Sample
samples_mix = sm_mix.sample(data, inits=inits)
# Convert to ArviZ
samples_mix = az.from_cmdstanpy(
    posterior=samples_mix, posterior_predictive="n_ppc", log_likelihood="log_lik"
)
# Check diagnostics
bebi103.stan.check_all_diagnostics(samples_mix)
# Make corner plot
bokeh.io.show(
    bebi103.viz.corner(
        samples_mix,
        parameters=["alpha[0]", "alpha[1]", "b[0]", "b[1]", "w"],
        xtick_label_orientation=np.pi / 4,
    )
)
INFO:cmdstanpy:start chain 1
INFO:cmdstanpy:start chain 2
INFO:cmdstanpy:start chain 3
INFO:cmdstanpy:start chain 4
INFO:cmdstanpy:finish chain 1
INFO:cmdstanpy:finish chain 2
INFO:cmdstanpy:finish chain 3
INFO:cmdstanpy:finish chain 4
Effective sample size looks reasonable for all parameters.
Rhat looks reasonable for all parameters.
0 of 4000 (0.0%) iterations ended with a divergence.
0 of 4000 (0.0%) iterations saturated the maximum tree depth of 10.
E-BFMI indicated no pathological behavior.
Everything looks good! We can do a quick posterior predictive check.
[13]:
n_ppc = samples_mix.posterior_predictive.n_ppc.stack(
    {"sample": ("chain", "draw")}
).transpose("sample", "n_ppc_dim_0")

bokeh.io.show(
    bebi103.viz.predictive_ecdf(
        n_ppc,
        name="n_ppc",
        data=df["Rex1"].values,
        x_axis_label="mRNA copy number",
    )
)
Much nicer! The model has plenty of flexibility to capture the observed bimodal behavior. Let's proceed to compute the LOO and WAIC, starting with the WAIC.
[14]:
az.waic(samples_mix, scale="deviance")
[14]:
Computed from 4000 by 279 log-likelihood matrix
Estimate SE
deviance_waic 3191.74 21.79
p_waic 5.65 -
And the LOO.
[15]:
mix_loo = az.loo(samples_mix, scale="deviance")
mix_loo
[15]:
Computed from 4000 by 279 log-likelihood matrix
Estimate SE
deviance_loo 3191.75 21.79
p_loo 5.65 -
As expected, we get essentially the same value for the information criterion. Let's take a quick look at the difference in the LOO between the mixture model and the single Negative Binomial model.
[16]:
mix_loo.loo - single_loo.loo
[16]:
-89.70834371476167
Remember that for historical reasons,
\begin{align} \text{WAIC} \approx -2\,\text{elpd}, \\[1em] \text{LOO} \approx -2\,\text{elpd}. \\[1em] \end{align}
The bigger the elpd, the smaller the Kullback-Leibler divergence, and hence the better the model's predictive performance. Because of the factor of \(-2\), a bigger elpd means a smaller WAIC or LOO, so the smaller the WAIC or LOO, the closer the model is to the true generative model. The WAIC and LOO are smaller for the mixture model than for the single Negative Binomial model (the difference of about 90 on the deviance scale corresponds to an elpd difference of about 45 in favor of the mixture), so it is the better model.
Computing the weights¶
We can directly compute the Akaike weights from the values of the LOO, using
\begin{align} w_i = \frac{\exp\left[-(\text{LOO}_i-\text{LOO}_j)/2\right]}{1 + \exp\left[-(\text{LOO}_i-\text{LOO}_j)/2\right]}. \end{align}
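This is the two-model special case. More generally, for \(K\) models, the Akaike weights are
\begin{align} w_i = \frac{\exp\left[-(\text{LOO}_i - \text{LOO}_\text{min})/2\right]}{\sum_{j=1}^K \exp\left[-(\text{LOO}_j - \text{LOO}_\text{min})/2\right]}, \end{align}
where \(\text{LOO}_\text{min}\) is the smallest LOO among the models considered; subtracting it does not change the weights, but keeps the exponentials from overflowing.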
[17]:
d_loo = mix_loo.loo - single_loo.loo
w_single = np.exp(d_loo/2) / (1 + np.exp(d_loo/2))
w_mix = 1 - w_single
print(' Mixture model weight:', w_mix)
print('Single Neg. Binom. model weight:', w_single)
Mixture model weight: 1.0
Single Neg. Binom. model weight: 3.3119263617346847e-20
In agreement with our posterior predictive checks, the mixture model is far more predictive than the single Negative Binomial model.
As I mentioned above, ArviZ offers a more sophisticated means of computing the weights using stacking. The results tend to be less extreme (and therefore more conservative) than directly computing the Akaike weights. We can use the az.compare() function to do the calculation. We will do it using the LOO (WAIC is the default, so we use the ic kwarg). The first input is a dictionary containing the MCMC samples, where the keys of the dictionary are the names of the models.
[18]:
az.compare({"single": samples, "mix": samples_mix}, ic="loo", scale="deviance")
/Users/bois/opt/anaconda3/lib/python3.8/site-packages/arviz/stats/stats.py:146: UserWarning: The default method used to estimate the weights for each model,has changed from BB-pseudo-BMA to stacking
warnings.warn(
[18]:
|        | rank | loo         | p_loo    | d_loo     | weight   | se        | dse      | warning | loo_scale |
|--------|------|-------------|----------|-----------|----------|-----------|----------|---------|-----------|
| mix    | 0    | 3191.747473 | 5.653665 | 0.000000  | 0.986065 | 21.791815 | 0.00000  | False   | deviance  |
| single | 1    | 3281.455817 | 2.056165 | 89.708344 | 0.013935 | 17.284973 | 17.51406 | False   | deviance  |
The mixture model is still dominant. (Again, we are not going into the details of the stacking calculation, but you can read about it in this paper by Yao and coworkers.)
[19]:
bebi103.stan.clean_cmdstan()
Computing environment¶
[20]:
%load_ext watermark
%watermark -v -p numpy,pandas,cmdstanpy,arviz,bokeh,bebi103,jupyterlab
print("cmdstan :", bebi103.stan.cmdstan_version())
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
numpy : 1.19.2
pandas : 1.2.1
cmdstanpy : 0.9.67
arviz : 0.11.0
bokeh : 2.2.3
bebi103 : 0.1.2
jupyterlab: 2.2.6
cmdstan : 2.25.0