Regression with MCMC

[1]:
import pandas as pd

import cmdstanpy
import arviz as az

import bokeh_catplot
import bebi103

import holoviews as hv
hv.extension('bokeh')
bebi103.hv.set_defaults()

import bokeh.io
import bokeh.layouts
bokeh.io.output_notebook()

In a previous lesson, we plotted posterior distributions for models of mitotic spindle size as a function of the diameter of the droplets encapsulating the spindles, using data from Good, et al., Science, 342, 856–860, 2013. Here, we will perform a similar analysis using MCMC. We start by loading in the data set and making a quick plot.

[2]:
# Load in Data Frame
df = pd.read_csv('../data/good_invitro_droplet_data.csv', comment='#')

hv.Scatter(
    data=df,
    kdims='Droplet Diameter (um)',
    vdims='Spindle Length (um)',
)
[2]:
[Output: scatter plot of spindle length (µm) versus droplet diameter (µm).]
Updated generative model

We will estimate parameters for the conserved tubulin model. In this model, the spindle length \(l\) is a function of droplet diameter \(d\) according to

\begin{align} l(d;\gamma, \phi) = \frac{\gamma d}{\left(1+(\gamma d/\phi)^3\right)^{\frac{1}{3}}}. \end{align}
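The limits of this function give the parameters their meanings: \(\gamma\) is the slope of the initial linear regime, and \(\phi\) sets the plateau, the largest achievable spindle length.

\begin{align} l \approx \gamma d \;\text{ for }\; \gamma d \ll \phi, \qquad l \to \phi \;\text{ for }\; \gamma d \gg \phi. \end{align}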

We will model the variation about this curve as Normally distributed, assuming homoscedasticity (the same scale parameter \(\sigma\) for all data points), such that our likelihood is

\begin{align} l_i \mid d_i, \gamma, \phi, \sigma \sim \text{Norm}(l(d_i;\gamma, \phi), \sigma) \;\forall i. \end{align}

We will choose weakly informative priors (a departure from our approach the previous time we studied this data set, in which we used Jeffreys priors). We will build the prior for the three parameters \(\phi\), \(\gamma\), and \(\sigma\), taking each to be independent of the others.

Let’s start with \(\sigma\). It is possible that the spindle size is very carefully controlled. It is also possible that it could be highly variable. So, we choose a Half-Normal prior for \(\sigma\) with a large scale parameter of 10 µm; \(\sigma \sim \text{HalfNorm}(10)\). For \(\gamma\), we know that physically it must be between zero and one (as we have worked out in previous lessons), so we will assume a prior that is close to uniform on that domain, but with the probability density dropping to zero right at zero and one; \(\gamma \sim \text{Beta}(1.1, 1.1)\). For \(\phi\), the typical largest spindle size, we will take a broad distribution with a mean of about 20 µm; \(\phi \sim \text{LogNorm}(\ln 20, 0.75)\).
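To get a feel for these choices, we can check the middle 95% of each prior, e.g., with scipy.stats (not imported above; this is just a quick sanity-check sketch, not part of the lesson's pipeline).

import scipy.stats as st

# Middle 95% of each prior
print("phi  :", st.lognorm.ppf([0.025, 0.975], 0.75, scale=20))  # LogNorm(ln 20, 0.75)
print("gamma:", st.beta.ppf([0.025, 0.975], 1.1, 1.1))           # Beta(1.1, 1.1)
print("sigma:", st.halfnorm.ppf([0.025, 0.975], scale=10))       # HalfNorm(10)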

We thus have our complete generative model.

\begin{align} &\phi \sim \text{LogNorm}(\ln 20, 0.75),\\[1em] &\gamma \sim \text{Beta}(1.1, 1.1), \\[1em] &\sigma \sim \text{HalfNorm}(10),\\[1em] &\mu_i = \frac{\gamma d_i}{\left(1+(\gamma d_i/\phi)^3\right)^{\frac{1}{3}}}, \\[1em] &l_i \mid d_i, \gamma, \phi, \sigma \sim \text{Norm}(\mu_i, \sigma) \;\forall i. \end{align}
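Before sampling the posterior, we could also draw parameter sets out of the priors and generate data to see whether the model yields plausible spindle lengths (a prior predictive check). Here is a minimal sketch with numpy, using the droplet diameters from the data set loaded above; it is illustrative only and not part of the lesson's pipeline.

import numpy as np

rng = np.random.default_rng()

d = df["Droplet Diameter (um)"].values

# Draw one parameter set out of the prior...
phi = rng.lognormal(np.log(20), 0.75)
gamma = rng.beta(1.1, 1.1)
sigma = np.abs(rng.normal(0, 10))  # Half-Normal draw

# ...and generate a data set from the likelihood
mu = gamma * d / (1 + (gamma * d / phi)**3)**(1 / 3)
ell = rng.normal(mu, sigma)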

Using Stan to sample

The Stan code we will use is below.

functions {
  real ell_theor(real d, real phi, real gamma_) {
    real denom_ratio = (gamma_ * d / phi)^3;
    return gamma_ * d / (1 + denom_ratio)^(1.0 / 3.0);
  }
}


data {
  int N;
  real d[N];
  real ell[N];
}


parameters {
  // Bounds restrict each parameter to the support of its prior
  real<lower=0> phi;
  real<lower=0, upper=1> gamma_;
  real<lower=0> sigma;
}


model {
  phi ~ lognormal(log(20.0), 0.75);
  gamma_ ~ beta(1.1, 1.1);
  sigma ~ normal(0.0, 10.0);

  for (i in 1:N) {
    ell[i] ~ normal(ell_theor(d[i], phi, gamma_), sigma);
  }
}

I pause to point out a few syntactical elements.

  1. Note how I used gamma_ as the variable name for gamma. This is because gamma is used as a distribution name in Stan.

  2. I have used the functions block in this Stan program. The definition of the function is real ell_theor(real d, real phi, real gamma_) { }, where the code to be executed in the function is between the braces. This is how you declare a function in Stan. The first word, real, specifies the data type the function returns. Then comes the name of the function, followed by its arguments in parentheses, each preceded by its data type.

  3. Within the function, I had to declare denom_ratio to be real. Remember, Stan is statically typed, so every variable needs a declaration of its type.

  4. Also within the function, note that the raise-to-power operator is ^, not ** as in Python.

  5. The return statement is like Python; return followed by a space and then an expression for the value to be returned.

  6. There is no Half-Normal distribution in Stan. It is implemented by putting a lower bound of zero on the variable (in this case real<lower=0> sigma;) and then modeling sigma as Normally distributed with location parameter zero. The bounds on phi and gamma_ similarly restrict the sampler to the supports of the LogNormal and Beta priors.
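As a quick sanity check, we can code up the same function in Python and verify the limiting behavior we noted above (this snippet is illustrative and not part of the lesson's pipeline).

def ell_theor(d, phi, gamma):
    """Python version of the Stan function ell_theor()."""
    denom_ratio = (gamma * d / phi)**3
    return gamma * d / (1 + denom_ratio)**(1 / 3)

# Small d: l ≈ γd; large d: l plateaus at φ
print(ell_theor(0.1, 38.0, 0.9))   # ≈ 0.9 * 0.1 = 0.09
print(ell_theor(1e6, 38.0, 0.9))   # ≈ φ = 38.0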

All right! Let’s do some sampling!

[3]:
# Compile the Stan model
sm = cmdstanpy.CmdStanModel(stan_file='spindle.stan')

# Package the data for Stan; keys must match the names in the data block
data = dict(
    N=len(df),
    d=df["Droplet Diameter (um)"].values,
    ell=df["Spindle Length (um)"].values,
)

# Draw samples: four chains of 1000 sampling iterations each
samples = sm.sample(
    data=data,
    chains=4,
    sampling_iters=1000,
)

# Convert to an ArviZ InferenceData object for downstream analysis
samples = az.from_cmdstanpy(samples)
INFO:cmdstanpy:stan to c++ (/Users/bois/Dropbox/git/bebi103_course/2020/b/content/lessons/lesson_04/spindle.hpp)
INFO:cmdstanpy:compiling c++
INFO:cmdstanpy:compiled model file: /Users/bois/Dropbox/git/bebi103_course/2020/b/content/lessons/lesson_04/spindle
INFO:cmdstanpy:start chain 1
INFO:cmdstanpy:start chain 2
INFO:cmdstanpy:finish chain 1
INFO:cmdstanpy:start chain 3
INFO:cmdstanpy:finish chain 2
INFO:cmdstanpy:start chain 4
INFO:cmdstanpy:finish chain 3
INFO:cmdstanpy:finish chain 4
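Before working with the samples, it is good practice to check the sampler diagnostics. A minimal check (a sketch, not shown in the original run) uses ArviZ's summary, which reports the R-hat statistic and effective sample size for each parameter:

az.summary(samples, var_names=["phi", "gamma_", "sigma"])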

With samples in hand, let’s make a plot of the posterior samples. Remember that when we plotted the posterior by brute force, we had to analytically marginalize out \(\sigma\) (which we could do because we used a Jeffreys prior). With MCMC, marginalization is as simple as ignoring the samples of the parameters we are not interested in: here, we just plot our samples of \(\phi\) and \(\gamma\) and ignore the \(\sigma\) samples.

[4]:
df_mcmc = bebi103.stan.posterior_to_dataframe(samples)

hv.Points(
    data=df_mcmc,
    kdims=[('phi', 'ϕ'), ('gamma_', 'γ')],
).opts(
    alpha=0.1,
    size=2,
)
[4]:
[Output: scatter plot of posterior samples of ϕ and γ.]
We can marginalize further by plotting ECDFs of the samples of each parameter, which approximate the marginal CDFs.

[5]:
p1 = bokeh_catplot.ecdf(
    df_mcmc["phi"].values, x_axis_label="φ (µm)", plot_height=250, plot_width=300
)
p2 = bokeh_catplot.ecdf(
    df_mcmc["gamma_"].values, x_axis_label="γ", plot_height=250, plot_width=300
)

bokeh.io.show(bokeh.layouts.gridplot([p1, p2], ncols=2))
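The same samples also give numerical summaries directly. For example, we can report the median and a 95% credible interval for each parameter with numpy percentiles (a quick sketch, assuming df_mcmc has a column for each parameter as above):

import numpy as np

for param in ["phi", "gamma_", "sigma"]:
    low, med, high = np.percentile(df_mcmc[param].values, [2.5, 50, 97.5])
    print(f"{param}: {med:.3f}  [{low:.3f}, {high:.3f}]")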

We now have a complete picture of the posterior, computed directly from our samples.

Here, we will eschew plotting regression lines, since that is not a preferred method of visualization; we defer visualizing the model together with the data until we do posterior predictive checks.

[6]:
bebi103.stan.clean_cmdstan()

Computing environment

[7]:
%load_ext watermark
%watermark -v -p numpy,pandas,cmdstanpy,arviz,bokeh,bokeh_catplot,holoviews,bebi103,jupyterlab
CPython 3.7.6
IPython 7.11.1

numpy 1.18.1
pandas 0.24.2
cmdstanpy 0.8.0
arviz 0.6.1
bokeh 1.4.0
bokeh_catplot 0.1.7
holoviews 1.12.7
bebi103 0.0.50
jupyterlab 1.2.5