(c) 2018 Justin Bois. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.
This document was prepared at Caltech with financial support from the Donna and Benjamin M. Rosen Bioengineering Center.
This tutorial was generated from a Jupyter notebook. You can download the notebook here.
import numpy as np
import pandas as pd
import bebi103
import bokeh.io
import bokeh.layouts
bokeh.io.output_notebook()
Our process for building models has been to propose a likelihood based on how we think the data are generated, assign priors to the parameters, and then perform prior predictive checks to make sure the model can generate reasonable data sets.
After building the model, we can then sample out of the posterior (or summarize the posterior in other ways, such as finding the MAP) to get an understanding of the parameter values.
In this tutorial, we will look at how we can assess how well our model performs after we have the data. We will then discuss how to come up with other models if we are not satisfied with the performance of this model.
We will again use the data from Singer and coworkers, where they performed RNA FISH to determine the copy numbers of RNA transcripts of specific genes in individual cells in their samples. As a reminder, here are the data sets for the respective genes. (The data set can be downloaded here.)
# Load DataFrame
df = pd.read_csv('../data/singer_transcript_counts.csv', comment='#')

# Make a list of plots
plots = [bebi103.viz.ecdf(df[gene],
                          plot_height=125,
                          title=gene)
         for gene in df]
plots[-1].xaxis.axis_label = 'mRNA count'

# Show them as a grid
bokeh.io.show(bokeh.layouts.gridplot(plots, ncols=1))
In tutorial 7a we devised a model for the transcript counts of a given gene based on observations about bursty gene expression. We chose a Negative Binomial likelihood and then assigned appropriate priors.
\begin{align}
&\alpha \sim \text{LogNorm}(0, 2), \\[1em]
&b \sim \text{LogNorm}(2, 3), \\[1em]
&\beta = 1/b,\\[1em]
&n_i \sim \text{NegBinom}(\alpha, \beta) \;\forall i.
\end{align}
We used this model to perform parameter estimates for the burst size $b$ and burst frequency $\alpha$ for the Rest gene. How can we assess if the model is actually a reasonable model for the data? This question is quite similar to one we have asked earlier in the course: How can we tell if our model can generate all of the possible data sets we could imagine? To address that question, we performed prior predictive checks. For the question of how well the model might be able to describe the data, we use a similar procedure of posterior predictive checks.
We perform posterior predictive checks in much the same way we perform prior predictive checks. To draw a data set that would be generated by our model after we have seen the data, we draw a set of parameter values out of the posterior (as opposed to out of the prior, as for prior predictive checks) and use them in the likelihood to draw a data set. Fortunately, we already have the draws of parameter values out of the posterior. That is what sampling with MCMC provides!
Once we have our draws of data sets out of the posterior predictive distribution, we can make plots of these and compare them to the measured data set.
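To make this concrete, below is a minimal sketch of how posterior predictive data sets could be drawn directly in Python. It assumes we already have arrays alpha_samples and b_samples of posterior draws (hypothetical names, extracted from the MCMC samples), and uses the fact that Numpy parametrizes the negative binomial by the number of successes and the success probability, which for our parametrization is $\beta/(1+\beta) = 1/(1+b)$.
def draw_ppc_datasets(alpha_samples, b_samples, n_measurements, seed=None):
    """Sketch: draw one posterior predictive data set per posterior draw.

    For each posterior draw (alpha, b), generate n_measurements transcript
    counts from NegBinom(alpha, beta=1/b), using Numpy's (successes, p)
    parametrization with p = beta / (1 + beta) = 1 / (1 + b).
    """
    rg = np.random.default_rng(seed)
    ppc = np.empty((len(alpha_samples), n_measurements), dtype=int)
    for j, (alpha, b) in enumerate(zip(alpha_samples, b_samples)):
        ppc[j] = rg.negative_binomial(alpha, 1.0 / (1.0 + b), size=n_measurements)
    return ppc
In practice, though, it is more convenient to have Stan generate the posterior predictive draws for us as it samples, which is what we do next.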
We are now tasked with adding draws from the posterior predictive distribution to our Stan code. We do this in the generated quantities block in much the same way we did for prior predictive checks. Remember that the generated quantities block is computed for each sample out of the posterior. So, we use the posterior values of the parameters in the likelihood to get the posterior predictive sample.
model_code = """
data {
  int N;
  int n[N];
}

parameters {
  real<lower=0> alpha;
  real<lower=0> b;
}

transformed parameters {
  real beta_ = 1.0 / b;
}

model {
  // Priors
  alpha ~ lognormal(0.0, 2.0);
  b ~ lognormal(2.0, 3.0);

  // Likelihood
  n ~ neg_binomial(alpha, beta_);
}

generated quantities {
  int n_ppc[N];

  // Draw posterior predictive data set
  for (i in 1:N) {
    n_ppc[i] = neg_binomial_rng(alpha, beta_);
  }
}
"""
sm = bebi103.stan.StanModel(model_code=model_code)
Now, when we draw our samples using Stan, we also get our posterior predictive check samples.
data = dict(N=len(df),
            n=df['Rest'].values.astype(int))
samples = sm.sampling(data=data)
We should always do diagnostic checks whenever we make draws.
bebi103.stan.check_all_diagnostics(samples)
The checks all look good. In addition to the diagnostic checks, we should at the very least also make a plot of the samples, so let's make a corner plot.
bokeh.io.show(bebi103.viz.corner(samples, pars=['alpha', 'b']))
There are also no obvious issues with the corner plot, so the sampling seems to be in good order.
Now, we can analyze our posterior predictive draws. So that we understand how they are stored, let's convert the samples to a data frame and take a look.
df_mcmc = bebi103.stan.to_dataframe(samples)
df_mcmc.head()
Each entry in the posterior predictive data set is its own column in the data frame. We can extract them using the bebi103.stan.extract_array() function.
df_n_ppc = bebi103.stan.extract_array(samples, 'n_ppc')
df_n_ppc.head()
The column index_1 gives the index of the entry in the output array, and chain_idx gives the index of the MCMC sample within the chain from which it was acquired. To see what types of data sets we get, we can make a plot of all of the ECDFs as we did for prior predictive checks. It helps for comparison to then overlay the ECDF of the acquired data. We have a lot of posterior predictive data sets, so we will plot every 40th sample (thereby plotting 100 posterior predictive ECDFs).
# Plot measured data set
p = bebi103.viz.ecdf(df['Rest'].values,
                     x_axis_label='n',
                     color='orange',
                     level='overlay')

# Plot posterior predictive ECDFs
for i in df_n_ppc['chain_idx'].unique()[::40]:
    p = bebi103.viz.ecdf(df_n_ppc.loc[df_n_ppc['chain_idx']==i, 'n_ppc'],
                         alpha=0.1, p=p)

bokeh.io.show(p)
Based on this plot, the posterior predictive distribution seems to encompass what was actually measured. We conclude, at the very least, that the observed data are captured by the model.
A better plot (which also does not involve plotting so many glyphs and choking your browser) would be to plot the percentiles of the ECDFs from the posterior predictive distribution. This functionality is included in the bebi103.viz.predictive_ecdf() function.
bokeh.io.show(bebi103.viz.predictive_ecdf(samples, name='n_ppc', data=df['Rest']))
In this plot, the middle dark blue line is the median, and the shades of blue expand to encompass the middle 20th, 40th, 60th, and 80th percentiles of the posterior predictive ECDFs. This is hard to see in the above plot without zooming. A better use of space is to plot the difference of the ECDF from the median of the posterior predictive ECDFs. This is accomplished using the diff=True kwarg.
bokeh.io.show(bebi103.viz.predictive_ecdf(samples, name='n_ppc', data_line=False,
                                          data=df['Rest'], diff=True))
It is much clearer now. We have a few points outside of the 80th percentile envelope, which is expected. We can be satisfied, then, that the model is capturing the real data.
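For intuition about what goes into such a plot, here is a rough sketch (not the bebi103 implementation) of how percentile envelopes of posterior predictive ECDFs could be computed: evaluate the ECDF of each posterior predictive data set on a common grid of counts, and then take percentiles across the data sets at each grid point.
def ecdf_percentile_envelopes(ppc_datasets, ptiles=(10, 25, 75, 90)):
    """Sketch: ECDF percentile envelopes from posterior predictive data sets.

    ppc_datasets is a 2D array with one posterior predictive data set per row.
    Returns the grid of counts and the requested percentiles of the ECDF
    values across data sets at each point on the grid.
    """
    # Common grid of counts on which to evaluate every ECDF
    n_grid = np.arange(ppc_datasets.min(), ppc_datasets.max() + 1)

    # ECDF of each data set on the grid: fraction of draws <= n
    ecdfs = np.array([np.searchsorted(np.sort(data), n_grid, side='right') / len(data)
                      for data in ppc_datasets])

    # Percentiles across data sets at each grid point
    return n_grid, np.percentile(ecdfs, ptiles, axis=0)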
Let us now look at the Rex1 gene using the same analysis.
data = dict(N=len(df),
            n=df['Rex1'].values.astype(int))
# Draw samples
samples = sm.sampling(data=data)
# Check diagnostics
bebi103.stan.check_all_diagnostics(samples)
# Make a corner plot
bokeh.io.show(bebi103.viz.corner(samples, pars=['alpha', 'b']))
The diagnostics and the corner plot check out. Let's make the predictive ECDF plot.
bokeh.io.show(bebi103.viz.predictive_ecdf(samples, name='n_ppc', data_line=False,
                                          data=df['Rex1'], diff=True))
Yikes! The model fails pretty spectacularly to capture the measured data. We suspected this might be the case just by looking at the ECDFs of the measured data. Now that we know the model does not describe the data well, we need to come up with another model. This is a good time to think about principled model building.
The recipe we have followed, proposing a generative model, checking it with prior predictive checks, sampling out of the posterior, and then performing posterior predictive checks, is almost complete. You should also do some checks to make sure the model is identifiable, that the sampler can handle it, and that the data can inform the parameters. We will discuss techniques to do these things in tutorial 10.
If you want to read a more detailed treatment about principled modeling, check out this blog post from Michael Betancourt.
So, let's proceed to build upon the model. We will propose a mixture model where we have two different cell populations, each with a different burst size and burst frequency.
\begin{align}
&\alpha_i \sim \text{LogNorm}(0, 2) \text{ for } i \in [1, 2], \\[1em]
&b_i \sim \text{LogNorm}(2, 3) \text{ for } i \in [1, 2], \\[1em]
&\beta_i = 1/b_i,\\[1em]
&w \sim \text{Beta}(1, 1), \\[1em]
&n_i \sim w \, \text{NegBinom}(\alpha_1, \beta_1) + (1-w)\,\text{NegBinom}(\alpha_2, \beta_2).
\end{align}
Note that this model reduces to the original model when $\alpha_1 = \alpha_2$ and $\beta_1 = \beta_2$. In that case, $w$ can be anything; it is nonidentifiable. Nonetheless, a model with this added complexity does reduce to the simpler model for a well-defined special case.
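As a quick numerical sanity check of this reduction (a sketch with arbitrarily chosen parameter values; scipy.stats is not otherwise used in this tutorial), we can verify that the mixture PMF does not depend on $w$ when the two components are identical.
import scipy.stats as st

# Arbitrary parameter values for the check
alpha, b = 5.0, 20.0
p = 1.0 / (1.0 + b)   # success probability, beta / (1 + beta)
n = np.arange(300)

# PMF of a single negative binomial
pmf_single = st.nbinom.pmf(n, alpha, p)

# With identical components, the mixture PMF matches for any w
for w in (0.1, 0.5, 0.9):
    pmf_mix = w * st.nbinom.pmf(n, alpha, p) + (1 - w) * st.nbinom.pmf(n, alpha, p)
    print(w, np.allclose(pmf_mix, pmf_single))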
We'll code up the model in Stan, again putting in posterior predictive checks.
model_code_mix = """
data {
  int N;
  int n[N];
}

parameters {
  vector<lower=0>[2] alpha;
  vector<lower=0>[2] b;
  real<lower=0, upper=1> w;
}

transformed parameters {
  vector[2] beta_ = 1.0 ./ b;
}

model {
  // Priors
  alpha ~ lognormal(0.0, 2.0);
  b ~ lognormal(2.0, 3.0);
  w ~ beta(1.0, 1.0);

  // Likelihood
  for (i in 1:N) {
    target += log_mix(w,
                      neg_binomial_lpmf(n[i] | alpha[1], beta_[1]),
                      neg_binomial_lpmf(n[i] | alpha[2], beta_[2]));
  }
}

generated quantities {
  int n_ppc[N];

  // Posterior predictive checks
  for (i in 1:N) {
    if (uniform_rng(0.0, 1.0) < w) {
      n_ppc[i] = neg_binomial_rng(alpha[1], beta_[1]);
    }
    else {
      n_ppc[i] = neg_binomial_rng(alpha[2], beta_[2]);
    }
  }
}
"""
sm_mix = bebi103.stan.StanModel(model_code=model_code_mix)
Recall that there is some difficulty sampling out of this mixture model because of nonidentifiability related to label switching. Our strategy is to sample with a single chain, and then to sample again with all chains initialized at the mean parameter values from that first chain. We'll write a function to do this.
def sample_mix(data, **kwargs):
    """Sample out of the mixture model.

    First sample with a single chain, then sample again with all chains
    initialized at the mean parameter values from that first chain.
    """
    samples = sm_mix.sampling(data=data, chains=1, **kwargs)
    df_mcmc = bebi103.stan.to_dataframe(samples)

    params = ['alpha[1]', 'alpha[2]', 'b[1]', 'b[2]', 'w']
    param_means = df_mcmc.loc[df_mcmc['chain']==1, params].mean()

    def init_fun():
        """Initialization function to start sampling at the mean of one mode."""
        return {'alpha': [param_means['alpha[1]'], param_means['alpha[2]']],
                'b': [param_means['b[1]'], param_means['b[2]']],
                'w': param_means['w']}

    # Get the samples
    return sm_mix.sampling(data=data, init=init_fun, **kwargs)
Now we'll use the function to get our samples. We'll check diagnostics and make a corner plot.
samples_mix = sample_mix(data)
# Check diagnostics
bebi103.stan.check_all_diagnostics(samples_mix)
# Make a corner plot
bokeh.io.show(bebi103.viz.corner(samples_mix,
                                 pars=['alpha[1]', 'b[1]', 'alpha[2]', 'b[2]', 'w']))
No problems with the diagnostics, and the corner plot looks good. Now let's make a posterior predictive ECDF plot.
bokeh.io.show(bebi103.viz.predictive_ecdf(samples_mix, name='n_ppc', data_line=False,
                                          data=df['Rex1'], diff=True))
The data set is well covered by the posterior predictive distribution. Note that the spread in the posterior predictive distribution goes well beyond the observed data. This suggests that the model has more than enough flexibility to cover the observed data. There may be a simpler model that describes the generative process, but it is important that any model we use is firmly grounded in a theory of the generative process. A mixture of two negative binomials is such a model, so these results are good.
data['n'] = df['Rest'].values.astype(int)
samples_mix = sample_mix(data, seed=4092374)
# Check diagnostics
bebi103.stan.check_all_diagnostics(samples_mix)
We got a few divergences, and the samples for $w$ have not yet converged. So, let's run again with a longer warmup and a little thinning to get $w$ to converge, and with a larger adapt_delta to mitigate divergences.
samples_mix = sample_mix(data, warmup=2000, iter=4000, thin=2,
                         control=dict(adapt_delta=0.99), seed=4092374)
# Check diagnostics
bebi103.stan.check_all_diagnostics(samples_mix)
That looks better for the divergences, but the R-hat for $w$ still looks problematic. Let's take a look at the corner plot.
# Make a corner plot
bokeh.io.show(bebi103.viz.corner(samples_mix,
                                 pars=['alpha[1]', 'b[1]', 'alpha[2]', 'b[2]', 'w']))
In looking at this corner plot, you will need to zoom to see the shapes of the distributions.
We see $w$ tending strongly toward zero or one, and essentially uniform in between. Bimodality in a parameter, even if it is a real and correct feature of a model, will throw off the R-hat calculation. So, these samples look legitimate.
The distribution of the values of $w$ suggests that the two distributions in the mixture are difficult to identify. A $w$ close to one or zero suggests a single distribution dominates, and a uniform $w$ suggests that the two distributions have almost the same parameters, so that the weighting between them is immaterial. In fact, if we look at the plot of $\alpha_1$ and $\alpha_2$, we see that their values are very close to one another. The same is true for $b_1$ and $b_2$. This suggests that there is actually a single negative binomial distribution, and that a second negative binomial is difficult to identify.
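One way to see this directly (a quick sketch using the same functions as above) is to look at the ECDF of the marginal posterior samples of $w$, which should show the draws piling up near zero and one, with a roughly uniform spread in between.
# Sketch: ECDF of the marginal posterior samples of the mixture weight w
df_mcmc_mix = bebi103.stan.to_dataframe(samples_mix)
bokeh.io.show(bebi103.viz.ecdf(df_mcmc_mix['w'], x_axis_label='w'))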
This underscores the importance of a careful graphical analysis of MCMC samples. We also see the utility in constructing the more complex model (in this case a mixture model) so that it reduces to the simpler model in certain limits. The mixture weight $w$ either tended toward zero or one, and was uniform in between, suggesting either indifference to the weighting in the mixture model (with the parameter values from the two mixtures being similar, as we saw in the plots of the samples of $\alpha_1$ and $\alpha_2$ and $b_1$ and $b_2$) or strong preference for a single distribution of the mixture.
So, it is clear from our graphical analysis that the Rest gene's expression levels are best described by a single Negative Binomial distribution.
We have seen the utility of graphical posterior predictive checks in ensuring that the model can appropriately describe the acquired data set. Importantly, as we build models, we need to start simple and then build complexity, each time having the simpler model as a limiting or special case of the more complex model.
Sometimes, it is not so easy to graphically compare models and we need quantitative comparisons. We will discuss methods to do this in the next tutorial.
%load_ext watermark
%watermark -v -p numpy,pandas,pystan,bokeh,bebi103,jupyterlab