(c) 2018 Justin Bois. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.
This document was prepared at Caltech with financial support from the Donna and Benjamin M. Rosen Bioengineering Center.
This tutorial was generated from a Jupyter notebook. If you want to use the interactivity to explore probability distributions, you will need to download the notebook and run it on your machine. You can download the notebook here.
import numpy as np
import scipy.stats as st
import scipy.special

import bokeh.io
import bokeh.plotting
import bokeh.layouts
import bokeh.models
import bokeh.application
import bokeh.application.handlers

bokeh.io.output_notebook()
Because this tutorial consists mainly of a list of probability distributions and their stories and properties, it is useful to have a little index of them up front.
This tutorial also features interactive plotting, which requires a running Jupyter notebook. You should therefore download and run the notebook. When you launch your notebook, take note of the URL at the top of your browser. It will be something like localhost:8888. Before you work through the notebook, define your notebook_url in the code cell below.
notebook_url = 'localhost:8888'
In this class, we have talked about the three levels of building a model in the biological sciences.
Taken together, these models comprise a generative model, which describes how the data were generated. Specifying your model amounts to choosing probability distributions that describe the process of data generation. In some cases, you need to derive the likelihood (or even numerically compute it when it cannot be written in closed form). In most practical cases, though, your model is composed of standard probability distributions. These distributions all have stories associated with them. Importantly, if your data and model match the story of a distribution, you know that this is the distribution to choose for your likelihood.
Before we begin talking about distributions, let's remind ourselves what probability distributions are. We cut some corners in our definitions here, but these definitions are functional for most of our data analysis purposes.
A probability mass function (PMF), $f(x)$, describes the probability of a discrete variable obtaining value $x$. The variable $x$ takes on discrete values, so the normalization condition is
\begin{align} \sum_x f(x) = 1. \end{align}
A probability density function (PDF), which we shall call $f(x)$, is defined such that the probability that the value of a continuous variable $x$ is $a \le x \le b$ is
\begin{align} \int_a^b \mathrm{d}x\,f(x). \end{align}
A cumulative distribution function (CDF), denoted $F(x)$, is defined such that $F(x)$ is the probability that the value of a variable $X$ is less than or equal to $x$. For a discrete distribution,
\begin{align} F(k) = \sum_{k'=k_\mathrm{min}}^k f(k'), \end{align}
where $k_\mathrm{min}$ is the minimal value the variable can take, and for a continuous distribution,
\begin{align} F(x) = \int_{-\infty}^x \mathrm{d}x'\,f(x'). \end{align}
If a probability mass or density function depends on parameters, say $n$ and $\theta$, we write it as $f(x;n, \theta)$. There does not seem to be consensus on the best notation, and you may see this same quantity written as $P(x\mid n, \theta)$, for example.
Given that we know a probability distribution, we can take samples out of it. This means that we can randomly draw numbers such that the probability that we draw a certain number $x$ is proportional to the PMF or PDF, $f(x)$. Sampling out of a distribution is often easier than computing the distribution over a whole range of values, since much of that range may carry vanishingly small probability.
The numpy.random module and Stan are powerful tools for sampling out of distributions. For each distribution I describe, I show how to specify it using NumPy and Stan (and also with the scipy.stats module).
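As a quick illustration of this workflow, here is a minimal sketch (the choice of a Gaussian, covered later in this tutorial, and the parameter values are arbitrary): we draw samples with numpy.random and evaluate the PDF with scipy.stats.

# Draw many samples from a Gaussian with numpy.random (µ=1, σ=2 arbitrary)
samples = np.random.normal(1.0, 2.0, size=100000)

# Evaluate the PDF on a grid with scipy.stats
x = np.linspace(-7.0, 9.0, 200)
pdf = st.norm.pdf(x, 1.0, 2.0)

# The sample mean and standard deviation should be close to 1 and 2
print(samples.mean(), samples.std())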
In what follows, I will present some commonly used probability distributions and their associated stories. For each distribution, I give the following:

- The story of the distribution.
- An illustrative example.
- The parameters of the distribution.
- The support of the distribution.
- The probability mass or density function.
- Usage with numpy.random, scipy.stats, and Stan.
- Related distributions and notes, where appropriate.

I omit analytical expressions for the CDFs because CDFs are often expressed as special functions, such as regularized incomplete beta functions or error functions. They can also be easily looked up. We mainly use them for comparing ECDFs in plots, and we can use the scipy.stats module to compute them.
Along the way, we'll define other terms, such as Bernoulli trial and Poisson process. I will use the scipy.stats module (imported as st) to compute the PDF/PMF and CDF. The API for this module will be apparent as I use it.
Finally, there is often more than one way to define the parameters of a distribution. I will be using the definitions and variable names that are used in Stan.
I use Bokeh to construct interactive visualizations of the distributions. The bebi103.viz module has a function to create Bokeh apps to visualize the distributions, but I include the function here in this notebook to avoid dependencies. Please excuse the long code block.
def distribution_plot_app(x_min=None, x_max=None, scipy_dist=None,
transform=None, custom_pdf=None, custom_pmf=None, custom_cdf=None,
params=None, n=400, plot_height=200, plot_width=300, x_axis_label='x',
title=None):
"""
Build interactive Bokeh app displaying a univariate
probability distribution.
Parameters
----------
x_min : float
Minimum value that the random variable can take in plots.
x_max : float
Maximum value that the random variable can take in plots.
scipy_dist : scipy.stats distribution
Distribution to use in plotting.
transform : function or None (default)
A function of call signature `transform(*params)` that takes
a tuple or Numpy array of parameters and returns a tuple of
the same length with transformed parameters.
custom_pdf : function
Function with call signature f(x, *params) that computes the
PDF of a distribution.
    custom_pmf : function
        Function with call signature f(x, *params) that computes the
        PMF of a distribution.
custom_cdf : function
Function with call signature F(x, *params) that computes the
CDF of a distribution.
params : list of dicts
A list of parameter specifications. Each entry in the list gives
specifications for a parameter of the distribution stored as a
dictionary. Each dictionary must have the following keys.
name : str, name of the parameter
start : float, starting point of slider for parameter (the
smallest allowed value of the parameter)
end : float, ending point of slider for parameter (the
largest allowed value of the parameter)
value : float, the value of the parameter that the slider
takes initially. Must be between start and end.
step : float, the step size for the slider
n : int, default 400
Number of points to use in making plots of PDF and CDF for
continuous distributions. This should be large enough to give
smooth plots.
plot_height : int, default 200
Height of plots.
plot_width : int, default 300
Width of plots.
x_axis_label : str, default 'x'
Label for x-axis.
title : str, default None
Title to be displayed above the PDF or PMF plot.
Returns
-------
output : Bokeh app
An app to visualize the PDF/PMF and CDF. It can be displayed
with bokeh.io.show(). If it is displayed in a notebook, the
notebook_url kwarg should be specified.
"""
if None in [x_min, x_max]:
raise RuntimeError('`x_min` and `x_max` must be specified.')
if scipy_dist is None:
fun_c = custom_cdf
if (custom_pdf is None and custom_pmf is None) or custom_cdf is None:
raise RuntimeError('For custom distributions, both PDF/PMF and'
+ ' CDF must be specified.')
if custom_pdf is not None and custom_pmf is not None:
raise RuntimeError('Can only specify custom PMF or PDF.')
if custom_pmf is None:
discrete = False
fun_p = custom_pdf
else:
discrete = True
fun_p = custom_pmf
elif ( custom_pdf is not None
or custom_pmf is not None
or custom_cdf is not None):
raise RuntimeError(
'Can only specify either custom or scipy distribution.')
else:
fun_c = scipy_dist.cdf
if hasattr(scipy_dist, 'pmf'):
discrete = True
fun_p = scipy_dist.pmf
else:
discrete = False
fun_p = scipy_dist.pdf
if discrete:
p_y_axis_label = 'PMF'
else:
p_y_axis_label = 'PDF'
if params is None:
raise RuntimeError('`params` must be specified.')
def _plot_app(doc):
p_p = bokeh.plotting.figure(plot_height=plot_height,
plot_width=plot_width,
x_axis_label=x_axis_label,
y_axis_label=p_y_axis_label,
title=title)
p_c = bokeh.plotting.figure(plot_height=plot_height,
plot_width=plot_width,
x_axis_label=x_axis_label,
y_axis_label='CDF')
# Link the axes
p_c.x_range = p_p.x_range
# Make sure CDF y_range is zero to one
p_c.y_range = bokeh.models.Range1d(-0.05, 1.05)
# Make array of parameter values
param_vals = np.array([param['value'] for param in params])
if transform is not None:
param_vals = transform(*param_vals)
# Set up data for plot
if discrete:
x = np.arange(int(np.ceil(x_min)),
int(np.floor(x_max))+1)
x_size = x[-1] - x[0]
x_c = np.empty(2*len(x))
x_c[::2] = x
x_c[1::2] = x
x_c = np.concatenate(((max(x[0] - 0.05*x_size, x[0] - 0.95),),
x_c,
(min(x[-1] + 0.05*x_size, x[-1] + 0.95),)))
x_cdf = np.concatenate(((x_c[0],), x))
else:
x = np.linspace(x_min, x_max, n)
x_c = x_cdf = x
# Compute PDF and CDF
y_p = fun_p(x, *param_vals)
y_c = fun_c(x_cdf, *param_vals)
if discrete:
y_c_plot = np.empty_like(x_c)
y_c_plot[::2] = y_c
y_c_plot[1::2] = y_c
y_c = y_c_plot
# Set up data sources
source_p = bokeh.models.ColumnDataSource(data={'x': x,
'y_p': y_p})
source_c = bokeh.models.ColumnDataSource(data={'x': x_c,
'y_c': y_c})
# Plot PDF and CDF
p_c.line('x', 'y_c', source=source_c, line_width=2)
if discrete:
p_p.circle('x', 'y_p', source=source_p, size=5)
p_p.segment(x0='x',
x1='x',
y0=0,
y1='y_p',
source=source_p,
line_width=2)
else:
p_p.line('x', 'y_p', source=source_p, line_width=2)
def _callback(attr, old, new):
param_vals = tuple([slider.value for slider in sliders])
if transform is not None:
param_vals = transform(*param_vals)
# Compute PDF and CDF
source_p.data['y_p'] = fun_p(x, *param_vals)
y_c = fun_c(x_cdf, *param_vals)
if discrete:
y_c_plot = np.empty_like(x_c)
y_c_plot[::2] = y_c
y_c_plot[1::2] = y_c
y_c = y_c_plot
source_c.data['y_c'] = y_c
sliders = [bokeh.models.Slider(start=param['start'],
end=param['end'],
value=param['value'],
step=param['step'],
title=param['name'])
for param in params]
for slider in sliders:
slider.on_change('value', _callback)
# Add the plot to the app
widgets = bokeh.layouts.widgetbox(sliders)
grid = bokeh.layouts.gridplot([p_p, p_c], ncols=2)
doc.add_root(bokeh.layouts.column(widgets, grid))
handler = bokeh.application.handlers.FunctionHandler(_plot_app)
return bokeh.application.Application(handler)
I plot the CDFs for discrete distributions as "staircases." As an example, here is a plot of the CDF of the Binomial distribution with parameters $N=10$ and $\theta = 0.5$.
x = np.arange(0, 11)
x_size = x[-1] - x[0]
x_c = np.empty(2*len(x))
x_c[::2] = x
x_c[1::2] = x
x_c = np.concatenate(((max(x[0] - 0.05*x_size, x[0] - 0.95),),
x_c,
(min(x[-1] + 0.05*x_size, x[-1] + 0.95),)))
x_cdf = np.concatenate(((x_c[0],), x))
y = st.binom.cdf(x_cdf, 10, 0.5)
y_c = np.empty_like(x_c)
y_c[::2] = y
y_c[1::2] = y
p = bokeh.plotting.figure(plot_height=250,
plot_width=350,
x_axis_label='n',
y_axis_label='F(n; 10, 0.5)')
p.line(x_c, y_c, line_width=2)
bokeh.io.show(p)
The CDF appears to be multivalued at the vertical lines of the staircase. It is not. Furthermore, the lines at zero and one on the CDF axis should extend out to $-\infty$ and $\infty$, respectively, along the horizontal axis. Strictly speaking, it should be plotted as follows.
x = np.arange(0, 11)
y = st.binom.cdf(x, 10, 0.5)
p = bokeh.plotting.figure(plot_height=250,
plot_width=350,
x_axis_label='n',
y_axis_label='F(n; 10, 0.5)')
p.segment(x[:-1], y[:-1], x[1:], y[:-1], line_width=2)
p.ray(0, 0, angle=np.pi, length=0, line_width=2)
p.ray(x[-1], 1, angle=0, length=0, line_width=2)
p.circle([0], [0], fill_color='white')
p.circle(x[1:], y[:-1], fill_color='white')
p.circle(x, y)
bokeh.io.show(p)
However, since it is understood that the CDF is not multivalued, there should be no ambiguity in plotting the staircase, and indeed staircase-style CDFs are commonly used. The staircase has less clutter, and I find it easier to look at and interpret. Furthermore, we know that all CDFs extend toward $x=-\infty$ with a value of zero and toward $x=\infty$ with a value of one. So, again, there is no ambiguity in cutting off the infinitely long tails of the CDF.
Story. A Bernoulli trial is an experiment that has two outcomes that can be encoded as success ($y=1$) or failure ($y = 0$). The result $y$ of a Bernoulli trial is Bernoulli distributed.
Example. Check to see if a given bacterium is competent, given that it has probability $\theta$ of being competent.
Parameter. The Bernoulli distribution is parametrized by a single value, $\theta$, the probability that the trial is successful.
Support. The Bernoulli distribution is supported on the set $\{0, 1\}$; i.e., it may be nonzero only for $y = 0$ and $y = 1$.
Probability mass function. \begin{align} f(y;\theta) = \left\{ \begin{array}{ccc} 1-\theta & & y = 0 \\[0.5em] \theta & & y = 1. \end{array} \right. \end{align}
Usage
Package | Syntax |
---|---|
NumPy | np.random.choice([0, 1], p=[1-theta, theta]) |
SciPy | scipy.stats.bernoulli(theta) |
Stan | bernoulli(theta) |
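As a quick check on this table, here is a minimal sketch (θ = 0.3 is an arbitrary choice): we sample with NumPy and compare the sample mean to the PMF computed with scipy.stats.

theta = 0.3

# Sample Bernoulli trials with NumPy; the sample mean should be close to θ
samples = np.random.choice([0, 1], p=[1-theta, theta], size=10000)
print(samples.mean())

# PMF values from scipy.stats: [1-θ, θ]
print(st.bernoulli.pmf([0, 1], theta))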
params = [dict(name='θ', start=0, end=1, value=0.5, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=1,
scipy_dist=st.bernoulli,
params=params,
x_axis_label='y',
title='Bernoulli')
bokeh.io.show(app, notebook_url=notebook_url)
Story. We perform a series of Bernoulli trials with probability of success $\theta$ until we get a success. The number of failures $y$ before the success is Geometrically distributed.
Example. Consider actin polymerization. At each time step, an actin monomer may add to the filament ("failure"), or an actin monomer may fall off ("success") with (usually very low) probability $\theta$. The lengths of actin filaments are Geometrically distributed.
Parameter. The Geometric distribution is parametrized by a single value, $\theta$, the probability that the Bernoulli trial is successful.
Support. The Geometric distribution, as defined here, has support on the nonnegative integers.
Probability mass function.
\begin{align} f(y;\theta) = (1-\theta)^y \, \theta. \end{align}
Package | Syntax |
---|---|
NumPy | np.random.geometric(theta) |
SciPy | scipy.stats.geom(theta, loc=-1) |
Stan | neg_binomial(1, theta/(1-theta)) |
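To see why the loc=-1 shift appears in the SciPy usage (see also the notes below), here is a minimal sketch (θ = 0.3 arbitrary) checking that the shifted scipy.stats.geom PMF matches the PMF defined above.

theta = 0.3
y = np.arange(10)

# scipy.stats.geom counts trials up to and including the success;
# shifting with loc=-1 converts this to the number of failures
print(st.geom.pmf(y, theta, loc=-1))

# The PMF defined above: (1-θ)^y θ
print((1-theta)**y * theta)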
Related distributions. The Geometric distribution is a special case of the Negative Binomial distribution with $\alpha = 1$ and $\beta = \theta/(1-\theta)$, which is how we specify it in Stan (see the usage table above). It is the discrete analog of the Exponential distribution.

Notes. The scipy.stats implementation, scipy.stats.geom, defines the Geometric distribution as the number of trials up to and including the first success, so to match the definition used here (the number of failures before the first success), we shift it with the loc=-1 kwarg.

params = [dict(name='θ', start=0, end=1, value=0.5, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=20,
scipy_dist=st.geom,
params=params,
transform=lambda theta: (theta, -1),
x_axis_label='y',
title='Geometric')
bokeh.io.show(app, notebook_url=notebook_url)
Story. We perform a series of Bernoulli trials with probability $\beta/(1+\beta)$ of success. The number of failures, $y$, before we get $\alpha$ successes is Negative Binomially distributed. An equivalent story: if a Poisson parameter $\lambda$ is itself Gamma distributed with parameters $\alpha$ and $\beta$, the number of events is Negative Binomially distributed with parameters $\alpha$ and $\beta$.
Example. Bursty gene expression can give mRNA count distributions that are Negative Binomially distributed. Here, "success" is that a burst in gene expression stops. In this case, the parameter $1/\beta$ is the mean number of transcripts in a burst of expression. The parameter $\alpha$ is related to the frequency of the bursts. If multiple bursts are possible within the lifetime of mRNA, then $\alpha > 1$. Then, the number of "failures" is the number of mRNA transcripts that are made in the characteristic lifetime of mRNA.
Parameters. There are two parameters: $\alpha$, the desired number of successes, and $\beta$, the rate parameter of the underlying Gamma distribution in the equivalent story above. The probability of success of each Bernoulli trial is given by $\beta/(1+\beta)$.
Support. The Negative-Binomial distribution is supported on the set of nonnegative integers.
Probability mass function. \begin{align} f(y;\alpha,\beta) = \begin{pmatrix} y+\alpha-1 \\ \alpha-1 \end{pmatrix} \left(\frac{\beta}{1+\beta}\right)^\alpha \left(\frac{1}{1+\beta}\right)^y. \end{align} Here, we use a combinatorial notation; \begin{align} \begin{pmatrix} y+\alpha-1 \\ \alpha-1 \end{pmatrix} = \frac{(y+\alpha-1)!}{(\alpha-1)!\,y!}. \end{align} Generally speaking, $\alpha$ need not be an integer, so we may write the PMF as \begin{align} f(y;\alpha,\beta) = \frac{\Gamma(y+\alpha)}{\Gamma(\alpha) \, y!}\,\left(\frac{\beta}{1+\beta}\right)^\alpha \left(\frac{1}{1+\beta}\right)^y. \end{align}
Usage
Package | Syntax |
---|---|
NumPy | np.random.negative_binomial(alpha, beta/(1+beta)) |
SciPy | scipy.stats.nbinom(alpha, beta/(1+beta)) |
Stan | neg_binomial(alpha, beta) |
Stan with $(\mu, \phi)$ parametrization | neg_binomial_2(mu, phi) |
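The two parametrizations are related by $\alpha = \phi$ and $\beta = \phi/\mu$, so that the mean is $\mu = \alpha/\beta$. Here is a minimal sketch of the conversion (μ = 5 and φ = 2 are arbitrary values).

mu, phi = 5.0, 2.0

# Convert (µ, φ) to (α, β), then to the SciPy (n, p) parametrization
alpha, beta = phi, phi / mu
dist = st.nbinom(alpha, beta / (1 + beta))

# The mean should be µ = α/β = 5
print(dist.mean())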
Related distributions. The Geometric distribution is a special case of the Negative Binomial distribution with $\alpha = 1$. In the limit $\alpha \to \infty$ with $\alpha/\beta = \lambda$ held fixed, the Negative Binomial distribution approaches the Poisson distribution with parameter $\lambda$.

Notes. An alternative parametrization uses the mean $\mu = \alpha/\beta$ and a dispersion parameter $\phi = \alpha$; in Stan, this parametrization is available as neg_binomial_2.

# With α, β parametrization
params = [dict(name='α', start=1, end=20, value=5, step=1),
dict(name='β', start=0, end=5, value=1, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=50,
scipy_dist=st.nbinom,
params=params,
transform=lambda alpha, beta: (alpha, beta/(1+beta)),
x_axis_label='y',
title='Negative Binomial')
bokeh.io.show(app, notebook_url=notebook_url)
# With μ, φ parametrization
params = [dict(name='μ', start=1, end=20, value=5, step=1),
dict(name='φ', start=0, end=5, value=1, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=50,
scipy_dist=st.nbinom,
params=params,
transform=lambda mu, phi: (phi, phi/(mu + phi)),
x_axis_label='y',
title='Negative Binomial')
bokeh.io.show(app, notebook_url=notebook_url)
Story. We perform $N$ Bernoulli trials, each with probability $\theta$ of success. The number of successes, $n$, is Binomially distributed.
Example. Distribution of plasmids between daughter cells in cell division. Each of the $N$ plasmids has a chance $\theta$ of being in daughter cell 1 ("success"). The number of plasmids, $n$, in daughter cell 1 is Binomially distributed.
Parameters. There are two parameters: the probability $\theta$ of success for each Bernoulli trial, and the number of trials, $N$.
Support. The Binomial distribution is supported on the set of integers from 0 to $N$, inclusive.
Probability mass function.
\begin{align} f(n;N,\theta) = \begin{pmatrix} N \\ n \end{pmatrix} \theta^n (1-\theta)^{N-n}. \end{align}
Package | Syntax |
---|---|
NumPy | np.random.binomial(N, theta) |
SciPy | scipy.stats.binom(N, theta) |
Stan | binomial(N, theta) |
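As a direct check of the story, here is a minimal sketch (N = 10 and θ = 0.5 arbitrary): summing $N$ Bernoulli trials gives Binomial draws.

N, theta = 10, 0.5

# Each row is N Bernoulli trials; the row sum is a Binomial draw
trials = np.random.choice([0, 1], p=[1-theta, theta], size=(100000, N))
n = trials.sum(axis=1)

# Both should be close to Nθ = 5
print(n.mean(), st.binom.mean(N, theta))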
params = [dict(name='N', start=1, end=20, value=5, step=1),
dict(name='θ', start=0, end=1, value=0.5, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=20,
scipy_dist=st.binom,
params=params,
x_axis_label='n',
title='Binomial')
bokeh.io.show(app, notebook_url=notebook_url)
Story. Rare events occur with a rate $\lambda$ per unit time. There is no "memory" of previous events; i.e., that rate is independent of time. A process that generates such events is called a Poisson process. The occurrence of a rare event in this context is referred to as an arrival. The number $n$ of arrivals in unit time is Poisson distributed.
Example. The number of mutations in a strand of DNA per unit length (since mutations are rare) is Poisson distributed.
Parameter. The single parameter is the rate $\lambda$ of the rare events occurring.
Support. The Poisson distribution is supported on the set of nonnegative integers.
Probability mass function. \begin{align} f(n;\lambda) = \frac{\lambda^n}{n!}\,\mathrm{e}^{-\lambda}. \end{align}
Usage
Package | Syntax |
---|---|
NumPy | np.random.poisson(lam) |
SciPy | scipy.stats.poisson(lam) |
Stan | poisson(lam) |
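A quick sketch (λ = 5 arbitrary): a well-known property of the Poisson distribution is that its mean and variance are both equal to λ.

lam = 5.0

# Mean and variance of Poisson samples should both be close to λ
samples = np.random.poisson(lam, size=100000)
print(samples.mean(), samples.var())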
params = [dict(name='λ', start=1, end=20, value=5, step=0.1)]
app = distribution_plot_app(x_min=0,
x_max=40,
scipy_dist=st.poisson,
params=params,
x_axis_label='n',
title='Poisson')
bokeh.io.show(app, notebook_url=notebook_url)
Story. Consider an urn with $a$ white balls and $b$ black balls. Draw $N$ balls from this urn without replacement. The number of white balls drawn, $n$, is Hypergeometrically distributed.
Example. There are $a+b$ finches on an island, and $a$ of them are tagged (and therefore $b$ of them are untagged). You capture $N$ finches. The number of tagged finches $n$ is Hypergeometrically distributed, $f(n;N, a, b)$, as defined below.
Parameters. There are three parameters: the number of draws $N$, the number of white balls $a$, and the number of black balls $b$.
Support. The Hypergeometric distribution is supported on the set of integers between $\max(0, N-b)$ and $\min(N, a)$, inclusive.
Probability mass function. \begin{align} f(n;N, a, b) = \frac{\begin{pmatrix}a\\n\end{pmatrix}\begin{pmatrix}b\\N-n\end{pmatrix}} {\begin{pmatrix}a+b\\N\end{pmatrix}}. \end{align}
Usage
Package | Syntax |
---|---|
NumPy | np.random.hypergeometric(a, b, N) |
SciPy | scipy.stats.hypergeom(a+b, a, N) |
Stan | hypergeometric(N, a, b) |
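The parametrizations differ among the packages, as detailed in the notes below. Here is a minimal sketch (a = b = N = 10 arbitrary) checking that the NumPy and SciPy versions agree.

a, b, N = 10, 10, 10

# NumPy takes (a, b, N); scipy.stats.hypergeom takes (M, a, N) with M = a+b
samples = np.random.hypergeometric(a, b, N, size=100000)

# Both should be close to N a/(a+b) = 5
print(samples.mean(), st.hypergeom.mean(a + b, a, N))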
Related distributions. When the number of draws $N$ is small compared to $a + b$, drawing without replacement is nearly the same as drawing with replacement, and the Hypergeometric distribution is well-approximated by the Binomial distribution with $N$ trials and $\theta = a/(a+b)$.

Notes.

- The scipy.stats module uses a different parametrization. In terms of the total number of balls $M = a + b$, scipy.stats.hypergeom expects
\begin{align}
f(n;M, a, N) = \frac{\begin{pmatrix}a\\n\end{pmatrix}\begin{pmatrix}M-a\\N-n\end{pmatrix}}{\begin{pmatrix}M\\N\end{pmatrix}}.
\end{align}
- The numpy.random module also has a different parametrization than the scipy.stats module. The numpy.random.hypergeometric() function uses the same parameters as Stan, except the parameters are given in the order a, b, N, not N, a, b as in Stan.

params = [dict(name='N', start=1, end=20, value=10, step=1),
dict(name='a', start=1, end=20, value=10, step=1),
dict(name='b', start=1, end=20, value=10, step=1)]
app = distribution_plot_app(x_min=0,
x_max=40,
scipy_dist=st.hypergeom,
params=params,
transform=lambda N, a, b: (a+b, a, N),
x_axis_label='n',
title='Hypergeometric')
bokeh.io.show(app, notebook_url=notebook_url)
Story. A probability is assigned to each of a set of discrete outcomes.
Example. A hen will peck at grain A with probability $\theta_A$, grain B with probability $\theta_B$, and grain C with probability $\theta_C$.
Parameters. The distribution is parametrized by the probabilities assigned to each event. We define $\theta_y$ to be the probability assigned to outcome $y$. The set of $\theta_y$'s are the parameters, and are constrained by
\begin{align} \sum_y \theta_y = 1. \end{align}
Support. If we index the categories with sequential integers from 1 to N, the distribution is supported for integers 1 to N, inclusive.
Probability mass function. \begin{align} f(y;\{\theta_y\}) = \theta_y. \end{align}
Usage (with theta a length-$N$ array)
Package | Syntax |
---|---|
NumPy | np.random.choice(len(theta), p=theta) |
SciPy | scipy.stats.rv_discrete(values=(range(len(theta)), theta)).rvs() |
Stan | categorical(theta) |
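Here is a minimal sketch of both approaches (θ arbitrary; the categories are encoded here as indices 0, 1, 2).

theta = np.array([0.2, 0.3, 0.5])

# Sample with numpy.random.choice, specifying θ with the p kwarg
samples = np.random.choice(len(theta), p=theta, size=10000)
print(np.bincount(samples) / len(samples))   # should be close to θ

# Build the distribution with scipy.stats.rv_discrete
cat = st.rv_discrete(values=(range(len(theta)), theta))
print(cat.pmf([0, 1, 2]))                    # exactly θ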
Notes.

- The Categorical distribution is not directly implemented in the scipy.stats module. It can be constructed using scipy.stats.rv_discrete(). The categories need to be encoded by an index. For interactive plotting purposes below, we need to specify a custom PMF and CDF.
- To sample out of the Categorical distribution, use numpy.random.choice(), specifying the values of $\theta$ using the p kwarg.

def categorical_pmf(x, θ1, θ2, θ3):
thetas = np.array([θ1, θ2, θ3, 1-θ1-θ2-θ3])
if (thetas < 0).any():
return np.array([np.nan]*len(x))
return thetas[x-1]
def categorical_cdf_indiv(x, thetas):
if x < 1:
return 0
elif x >= 4:
return 1
else:
return np.sum(thetas[:int(x)])
def categorical_cdf(x, θ1, θ2, θ3):
thetas = np.array([θ1, θ2, θ3, 1-θ1-θ2-θ3])
if (thetas < 0).any():
return np.array([np.nan]*len(x))
return np.array([categorical_cdf_indiv(x_val, thetas) for x_val in x])
params = [dict(name='θ1', start=0, end=1, value=0.2, step=0.01),
dict(name='θ2', start=0, end=1, value=0.3, step=0.01),
dict(name='θ3', start=0, end=1, value=0.1, step=0.01)]
app = distribution_plot_app(x_min=1,
x_max=4,
custom_pmf=categorical_pmf,
custom_cdf=categorical_cdf,
params=params,
x_axis_label='category',
title='Discrete categorical')
bokeh.io.show(app, notebook_url=notebook_url)
Story. A set of discrete outcomes that can be indexed with sequential integers each has equal probability, like rolling a fair die.
Example. A monkey can choose from any of $n$ bananas with equal probability.
Parameters. The distribution is parametrized by the high and low allowed values.
Support. The Discrete Uniform distribution is supported on the set of integers ranging from $y_\mathrm{low}$ to $y_\mathrm{high}$, inclusive.
Probability mass function. \begin{align} f(y;y_\mathrm{low}, y_\mathrm{high}) = \frac{1}{y_\mathrm{high} - y_\mathrm{low} + 1} \end{align}
Usage
Package | Syntax |
---|---|
NumPy | np.random.randint(low, high+1) |
SciPy | scipy.stats.randint(low, high+1) |
Stan | categorical(theta) , theta array with all equal values |
Related distributions. The Discrete Uniform distribution is a special case of the Categorical distribution in which all categories have equal probability.

Notes. The Discrete Uniform distribution is implemented in the scipy.stats module as scipy.stats.randint. The high parameter is not inclusive; i.e., the set of allowed values includes the low parameter, but not the high, which is why we transform high to high+1 below. The same is true for numpy.random.randint(), which is used for sampling out of this distribution.

params = [dict(name='low', start=0, end=10, value=0, step=1),
dict(name='high', start=0, end=10, value=10, step=1)]
app = distribution_plot_app(x_min=0,
x_max=10,
scipy_dist=st.randint,
params=params,
transform=lambda low, high: (low, high+1),
x_axis_label='y',
title='Discrete Uniform')
bokeh.io.show(app, notebook_url=notebook_url)
Story. Any outcome in a given range has equal probability.
Example. Anything in which all possibilities are equally likely. This is, perhaps surprisingly, rarely encountered.
Parameters. The Uniform distribution cannot be defined on an infinite or semi-infinite domain, so the lower and upper bounds, $\alpha$ and $\beta$, respectively, are necessary parameters.
Support. The Uniform distribution is supported on the interval $[\alpha, \beta]$.
Probability density function.
\begin{align} f(y;\alpha, \beta) = \left\{ \begin{array}{ccc} \frac{1}{\beta - \alpha} & & \alpha \le y \le \beta \\[0.5em] 0 & & \text{otherwise.} \end{array} \right. \end{align}
Package | Syntax |
---|---|
NumPy | np.random.uniform(alpha, beta) |
SciPy | scipy.stats.uniform(alpha, beta-alpha) |
Stan | uniform(alpha, beta) |
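Note that scipy.stats.uniform is parametrized by loc and scale, with loc=α and scale=β−α; this is also why the transform in the interactive plot below maps (α, β) to (α, β−α). A minimal sketch (α = 1 and β = 4 arbitrary):

alpha, beta = 1.0, 4.0

# scipy.stats.uniform(loc, scale) is Uniform on [loc, loc+scale]
dist = st.uniform(alpha, beta - alpha)

# Density is 1/(β-α) = 1/3 inside [α, β] and zero outside
print(dist.pdf([0.5, 2.0, 5.0]))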
params = [dict(name='α', start=-2, end=3, value=0, step=0.01),
dict(name='β', start=-2, end=3, value=1, step=0.01)]
app = distribution_plot_app(x_min=-2,
x_max=3,
scipy_dist=st.uniform,
params=params,
transform=lambda a, b: (a, b-a),
title='Uniform')
bokeh.io.show(app, notebook_url=notebook_url)
Story. Any quantity that emerges as the sum of a large number of subprocesses tends to be Gaussian distributed provided none of the subprocesses is very broadly distributed.
Example. We measure the length of many C. elegans eggs. The lengths are Gaussian distributed. Many biological measurements, like the height of people, are (approximately) Gaussian distributed. Many processes contribute to setting the length of an egg or the height of a person.
Parameters. The Gaussian distribution has two parameters, the mean $\mu$, which determines the location of its peak, and the standard deviation $\sigma$, which is strictly positive (the $\sigma\to 0$ limit defines a Dirac delta function) and determines the width of the peak.
Support. The Gaussian distribution is supported on the set of real numbers.
Probability density function.
\begin{align} f(y;\mu, \sigma) = \frac{1}{\sqrt{2\pi \sigma^2}}\,\mathrm{e}^{-(y-\mu)^2/2\sigma^2}. \end{align}
Package | Syntax |
---|---|
NumPy | np.random.normal(mu, sigma) |
SciPy | scipy.stats.norm(mu, sigma) |
Stan | normal(mu, sigma) |
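Here is a minimal sketch of the story (the choice of Uniform subprocesses and their number is arbitrary): sums of many i.i.d. draws are approximately Gaussian distributed.

# Sum 50 Uniform(0, 1) draws; the sums are approximately Gaussian
sums = np.random.uniform(size=(100000, 50)).sum(axis=1)

# Mean and variance should be close to 25 and 50/12, respectively
print(sums.mean(), sums.var())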
Related distributions. The Gaussian distribution is a limiting distribution in the sense of the central limit theorem, but also in that many distributions have a Gaussian distribution as a limit. This is seen by formally taking limits of, e.g., the Gamma, Student-t, Binomial distributions, which allows direct comparison of parameters.
params = [dict(name='µ', start=-0.5, end=0.5, value=0, step=0.01),
dict(name='σ', start=0.1, end=1.0, value=0.2, step=0.01)]
app = distribution_plot_app(x_min=-2,
x_max=2,
scipy_dist=st.norm,
params=params,
x_axis_label='y',
title='Gaussian, a.k.a. Normal')
bokeh.io.show(app, notebook_url=notebook_url)
Story. The Half-Normal distribution is a Gaussian distribution with zero mean truncated to only have nonzero probability for positive real numbers.
Parameters. Strictly speaking, the Half-Normal is parametrized by a single positive parameter $\sigma$. We could, however, translate it so that the truncated Gaussian has a maximum at $\mu$ and only values greater than or equal to $\mu$ have nonzero probability.
Probability density function.
\begin{align} f(y;\mu, \sigma) = \left\{\begin{array}{ccc} \frac{2}{\sqrt{2\pi \sigma^2}}\,\mathrm{e}^{-(y-\mu)^2/2\sigma^2} & & y \ge \mu\\ 0 && \text{otherwise.} \end{array}\right. \end{align}
Support. The Half-Normal distribution is supported on the set $[\mu, \infty)$. Usually, we have $\mu = 0$, in which case the Half-Normal distribution is supported on the set of nonnegative real numbers.
Usage
Package | Syntax |
---|---|
NumPy | mu + np.abs(np.random.normal(0, sigma)) |
SciPy | scipy.stats.halfnorm(mu, sigma) |
Stan sampling | real<lower=mu> y; y ~ normal(mu, sigma) |
Stan rng | real<lower=mu> y; y = mu + abs(normal_rng(0, sigma)) |
params = [dict(name='µ', start=-0.5, end=0.5, value=0, step=0.01),
dict(name='σ', start=0.1, end=1.0, value=0.2, step=0.01)]
app = distribution_plot_app(x_min=-0.5,
x_max=2,
scipy_dist=st.halfnorm,
params=params,
x_axis_label='y',
title='Half-Normal')
bokeh.io.show(app, notebook_url=notebook_url)
Story. If $\ln y$ is Gaussian distributed, $y$ is Log-Normally distributed.
Example. A measure of fold change in gene expression can be Log-Normally distributed.
Parameters. As for a Gaussian, there are two parameters, the mean, $\mu$, and the variance $\sigma^2$. Note that $\mu$ is the mean of $\ln y$, not of $y$ itself. That is, $\langle \ln y \rangle_\text{LogNorm} = \mu$. Similarly, $\sigma^2$ is the variance of $\ln y$, i.e., $\langle (\ln y -\mu)^2\rangle_\text{LogNorm} = \sigma^2$.
Support. The Log-Normal distribution is supported on the set of positive real numbers.
Probability density function.
\begin{align} f(y;\mu, \sigma) = \frac{1}{y\sqrt{2\pi \sigma^2}}\,\mathrm{e}^{-(\ln y - \mu)^2/2\sigma^2}. \end{align}
Package | Syntax |
---|---|
NumPy | np.random.lognormal(mu, sigma) |
SciPy | scipy.stats.lognorm(sigma, loc=0, scale=np.exp(mu)) |
Stan | lognormal(mu, sigma) |
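A minimal sketch of the story (μ and σ arbitrary): exponentiating Gaussian draws gives the same distribution as sampling directly with numpy.random.lognormal.

mu, sigma = 0.1, 0.2

# Exponentiate Gaussian samples...
y1 = np.exp(np.random.normal(mu, sigma, size=100000))

# ...or sample the Log-Normal directly
y2 = np.random.lognormal(mu, sigma, size=100000)

# Both means should be close to exp(µ + σ²/2)
print(y1.mean(), y2.mean())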
Notes.

- To use the Log-Normal distribution in the scipy.stats module, use a shape parameter equal to $\sigma$, a location parameter of zero, and a scale parameter given by $\mathrm{e}^\mu$. For example, to compute the PDF, you would use scipy.stats.lognorm.pdf(x, sigma, loc=0, scale=np.exp(mu)).
- The parametrization of the numpy.random module matches what we have defined above and what is defined in Stan.

params = [dict(name='µ', start=0.01, end=0.5, value=0.1, step=0.01),
dict(name='σ', start=0.1, end=1.0, value=0.2, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=4,
scipy_dist=st.lognorm,
params=params,
transform=lambda mu, sigma: (sigma, 0, np.exp(mu)),
x_axis_label='y',
title='Log-Normal')
bokeh.io.show(app, notebook_url=notebook_url)
Story. If $X_1$, $X_2$, $\ldots$, $X_\nu$ are independent standard Gaussian distributed variables, $X_1^2 + X_2^2 + \cdots + X_\nu^2$ is Chi-square distributed with $\nu$ degrees of freedom. See also the story of the Gamma distribution, below.
Example. If we compute the sample variance of $n$ independent and identically distributed Gaussian random variables, then, after appropriate scaling, that sample variance is Chi-square distributed with $\nu = n - 1$ degrees of freedom. This is the most common use case of the Chi-square distribution.
Parameters. There is only one parameter, the degrees of freedom $\nu$.
Support. The Chi-square distribution is supported on the positive real numbers.
Probability density function. \begin{align} f(y;\nu) = \frac{1}{2^{\nu/2}\,\Gamma\left(\frac{\nu}{2}\right)}\, y^{\frac{\nu}{2}-1}\,\mathrm{e}^{-y/2}. \end{align}
Usage
Package | Syntax |
---|---|
NumPy | np.random.chisquare(nu) |
SciPy | scipy.stats.chi2(nu) |
Stan | chi_square(nu) |
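A minimal sketch of the story (ν = 10 arbitrary): summing the squares of ν standard Gaussian draws gives Chi-square distributed samples.

nu = 10

# Sum of squares of ν standard Gaussian draws
samples = (np.random.normal(size=(100000, nu))**2).sum(axis=1)

# Both should be close to ν
print(samples.mean(), st.chi2.mean(nu))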
params = [dict(name='ν', start=1, end=20, value=10, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=40,
scipy_dist=st.chi2,
params=params,
x_axis_label='y',
title='Chi-square')
bokeh.io.show(app, notebook_url=notebook_url)
Story. The story of the Student-t distribution largely derives from its relationships with other distributions. One way to think about it is as a Gaussian-like distribution with heavier tails.
Parameters. The Student-t distribution is peaked, and its peak is located at $\mu$. The peak's width is dictated by parameter $\sigma$. Finally, we define the "degrees of freedom" as $\nu$. This last parameter imparts the distribution with heavy tails.
Support. The Student-t distribution is supported on the set of real numbers.
Probability density function.
\begin{align} f(y;\mu, \sigma, \nu) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\pi \nu \sigma^2}}\, \left(1 + \frac{(y-\mu)^2}{\nu\sigma^2}\right)^{-\frac{\nu+1}{2}}. \end{align}
Package | Syntax |
---|---|
NumPy | mu + sigma * np.random.standard_t(nu) |
SciPy | scipy.stats.t(nu, mu, sigma) |
Stan | student_t(nu, mu, sigma) |
Notes. The Student-t distribution is not directly available in the numpy.random module. You can still draw out of the Student-t distribution by performing a transformation on the samples out of the standard Student-t distribution, as shown in the usage above.

params = [dict(name='ν', start=1, end=50, value=10, step=0.01),
dict(name='µ', start=-0.5, end=0.5, value=0, step=0.01),
dict(name='σ', start=0.1, end=1.0, value=0.2, step=0.01)]
app = distribution_plot_app(x_min=-2,
x_max=2,
scipy_dist=st.t,
params=params,
x_axis_label='y',
title='Student-t')
bokeh.io.show(app, notebook_url=notebook_url)
Story. The intercept on the x-axis of a beam of light coming from the point $(\mu, \sigma)$ is Cauchy distributed. This story is popular in physics (and is one of the first examples of Bayesian inference in Sivia's book), but is not particularly useful. You can think of it as a peaked distribution with enormously heavy tails.
Parameters. The Cauchy distribution is peaked, and its peak is located at $\mu$. The peak's width is dictated by parameter $\sigma$.
Support. The Cauchy distribution is supported on the set of real numbers.
Probability density function.
\begin{align} f(y;\mu, \sigma) = \frac{1}{\pi \sigma}\, \frac{1}{1 + (y-\mu)^2/\sigma^2}. \end{align}
Package | Syntax |
---|---|
NumPy | mu + sigma * np.random.standard_cauchy() |
SciPy | scipy.stats.cauchy(mu, sigma) |
Stan | cauchy(mu, sigma) |
Notes. The numpy.random module only has the Standard Cauchy distribution ($\mu = 0$ and $\sigma = 1$), but you can draw out of a Cauchy distribution using the transformation shown in the NumPy usage above.

params = [dict(name='µ', start=-0.5, end=0.5, value=0, step=0.01),
dict(name='σ', start=0.1, end=1.0, value=0.2, step=0.01)]
app = distribution_plot_app(x_min=-2,
x_max=2,
scipy_dist=st.cauchy,
params=params,
x_axis_label='y',
title='Cauchy')
bokeh.io.show(app, notebook_url=notebook_url)
Story. This is the waiting time for an arrival from a Poisson process. I.e., the inter-arrival time of a Poisson process is Exponentially distributed.
Example. The time between conformational switches in a protein is Exponentially distributed (under simple mass action kinetics).
Parameter. The single parameter is the average arrival rate, $\beta$. Alternatively, we can use $\tau=1/\beta$ as the parameter, in this case a characteristic arrival time.
Support. The Exponential distribution is supported on the set of nonnegative real numbers.
Probability density function. \begin{align} f(y;\beta) = \beta\, \mathrm{e}^{-\beta y}. \end{align}
Related distributions. The Exponential distribution is the special case of the Gamma distribution with $\alpha = 1$. It is the continuous analog of the Geometric distribution.
Usage
Package | Syntax |
---|---|
NumPy | np.random.exponential(1/beta) |
SciPy | scipy.stats.expon(loc=0, scale=1/beta) |
Stan | exponential(beta) |
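Here is a minimal sketch connecting the Exponential and Poisson stories (β = 5 arbitrary): if interarrival times are Exponentially distributed with rate β, the number of arrivals in unit time is Poisson distributed with parameter β.

beta = 5.0

# Exponentially distributed waiting times between arrivals
waits = np.random.exponential(1/beta, size=(100000, 50))
arrival_times = waits.cumsum(axis=1)

# Count arrivals before t = 1; mean and variance should both be close to β
n_arrivals = (arrival_times < 1.0).sum(axis=1)
print(n_arrivals.mean(), n_arrivals.var())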
Notes.

- The scipy.stats module also has a location parameter, which shifts the distribution left and right. For our purposes, you can ignore that parameter, but be aware that scipy.stats requires it. The scipy.stats Exponential distribution is parametrized in terms of the interarrival time, $\tau = 1/\beta$, and not $\beta$.
- The numpy.random.exponential() function does not need nor accept a location parameter. It is also parametrized in terms of $\tau$.

params = [dict(name='β', start=0.1, end=1, value=0.25, step=0.01)]
app = distribution_plot_app(0,
30,
st.expon,
params=params,
transform=lambda x: (0, 1/x),
x_axis_label='y',
title='Exponential')
bokeh.io.show(app, notebook_url=notebook_url)
Story. This is the amount of time we have to wait for $\alpha$ arrivals of a Poisson process. More concretely, if the waiting times $X_1$, $X_2$, $\ldots$, $X_\alpha$ are each Exponentially distributed, $X_1 + X_2 + \cdots + X_\alpha$ is Gamma distributed.
Example. Any multistep process where each step happens at the same rate. This is common in molecular rearrangements.
Parameters. The number of arrivals, $\alpha$, and the rate of arrivals, $\beta$.
Support. The Gamma distribution is supported on the set of positive real numbers.
Probability density function.
\begin{align} f(y;\alpha, \beta) = \frac{1}{\Gamma(\alpha)}\,\frac{(\beta y)^\alpha}{y}\,\mathrm{e}^{-\beta y}. \end{align}
Related distributions. The Gamma distribution with $\alpha = 1$ is the Exponential distribution. The Chi-square distribution with $\nu$ degrees of freedom is a Gamma distribution with $\alpha = \nu/2$ and $\beta = 1/2$.
Usage
Package | Syntax |
---|---|
NumPy | np.random.gamma(alpha, 1/beta) |
SciPy | scipy.stats.gamma(alpha, loc=0, scale=1/beta) |
Stan | gamma(alpha, beta) |
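A minimal sketch of the story (α = 3 and β = 2 arbitrary): summing α Exponential waiting times gives Gamma distributed samples.

alpha, beta = 3, 2.0

# Sum α Exponentially distributed waiting times
samples = np.random.exponential(1/beta, size=(100000, alpha)).sum(axis=1)

# Both should be close to α/β
print(samples.mean(), st.gamma.mean(alpha, loc=0, scale=1/beta))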
params = [dict(name='α', start=1, end=5, value=2, step=0.01),
dict(name='β', start=0.1, end=5, value=2, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=10,
scipy_dist=st.gamma,
params=params,
transform=lambda a, b: (a, 0, 1/b),
x_axis_label='y',
title='Gamma')
bokeh.io.show(app, notebook_url=notebook_url)
Story. If $x$ is Gamma distributed, then $1/x$ is Inverse Gamma distributed.
Parameters. The number of arrivals, $\alpha$, and the rate of arrivals, $\beta$.
Support. The Inverse Gamma distribution is supported on the set of positive real numbers.
Probability density function.
\begin{align} f(y;\alpha, \beta) = \frac{1}{\Gamma(\alpha)}\,\frac{\beta^\alpha}{y^{(\alpha+1)}}\,\mathrm{e}^{-\beta/ y}. \end{align}
Package | Syntax |
---|---|
NumPy | 1 / np.random.gamma(alpha, 1/beta) |
SciPy | scipy.stats.invgamma(alpha, loc=0, scale=beta) |
Stan | inv_gamma(alpha, beta) |
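A minimal sketch of the story (α = 3 and β = 1 arbitrary): reciprocals of Gamma draws are Inverse Gamma distributed.

alpha, beta = 3.0, 1.0

# Take reciprocals of Gamma samples
samples = 1 / np.random.gamma(alpha, 1/beta, size=100000)

# Both should be close to β/(α-1)
print(samples.mean(), st.invgamma.mean(alpha, loc=0, scale=beta))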
Notes. The numpy.random module does not have a function to sample directly from the Inverse Gamma distribution, but it can be achieved by sampling out of a Gamma distribution as shown in the NumPy usage above.

params = [dict(name='α', start=0.01, end=2, value=0.5, step=0.01),
dict(name='β', start=0.1, end=2, value=1, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=20,
scipy_dist=st.invgamma,
params=params,
transform=lambda alpha, beta: (alpha, 0, beta),
x_axis_label='y',
title='Inverse Gamma')
bokeh.io.show(app, notebook_url=notebook_url)
Story. This is the distribution of $x = y^{1/\alpha}$ if $y$ is Exponentially distributed. For $\alpha > 1$, the longer we have waited, the more likely the event is to come, and vice versa for $\alpha < 1$.
Example. This is a model for aging. The longer an organism lives, the more likely it is to die.
Parameters. There are two parameters, both strictly positive: the shape parameter $\alpha$, which dictates the shape of the curve, and the scale parameter $\sigma$, which dictates the rate of arrivals of the event.
Support. The Weibull distribution has support on the positive real numbers.
Probability density function.
\begin{align} f(y;\alpha, \sigma) = \frac{\alpha}{\sigma}\left(\frac{y}{\sigma}\right)^{\alpha - 1}\, \mathrm{e}^{-(y/\sigma)^\alpha}. \end{align}
Package | Syntax |
---|---|
NumPy | np.random.weibull(alpha) * sigma |
SciPy | scipy.stats.weibull_min(alpha, loc=0, scale=sigma) |
Stan | weibull(alpha, sigma) |
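A minimal sketch of the story (α = 2 and σ = 1.5 arbitrary): raising Exponential draws to the power $1/\alpha$ and scaling by σ gives Weibull distributed samples.

alpha, sigma = 2.0, 1.5

# Transform Exponential draws (with β = 1) into Weibull draws
y = np.random.exponential(1.0, size=100000)
x = sigma * y**(1/alpha)

# Both means should agree
print(x.mean(), st.weibull_min.mean(alpha, loc=0, scale=sigma))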
Related distributions. When $\alpha = 1$, the Weibull distribution reduces to the Exponential distribution with rate $\beta = 1/\sigma$.

Notes. We use the scipy.stats.weibull_min implementation, with the location parameter fixed to zero and the scale parameter equal to $\sigma$.
params = [dict(name='α', start=0.1, end=5, value=1, step=0.01),
dict(name='σ', start=0.1, end=3, value=1.5, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=8,
scipy_dist=st.weibull_min,
params=params,
transform=lambda a, s: (a, 0, s),
x_axis_label='y',
title='Weibull')
bokeh.io.show(app, notebook_url=notebook_url)
Story. Say you wait for two multistep Poisson processes to happen. The individual steps of each process happen at the same rate, but the first multistep process requires $\alpha$ steps and the second requires $\beta$ steps. The fraction of the total waiting time taken by the first process is Beta distributed.
Parameters. There are two parameters, both strictly positive: $\alpha$ and $\beta$, defined in the above story.
Support. The Beta distribution has support on the interval [0, 1].
Probability density function. \begin{align} f(\theta;\alpha, \beta) = \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{B(\alpha,\beta)}, \end{align} where \begin{align} B(\alpha,\beta) = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha + \beta)} \end{align} is the Beta function.
Usage
Package | Syntax |
---|---|
NumPy | np.random.beta(alpha, beta) |
SciPy | scipy.stats.beta(alpha, beta) |
Stan | beta(alpha, beta) |
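A minimal sketch of the story (α = 2 and β = 5 arbitrary): the fraction of the total waiting time taken by the first of two Gamma distributed waiting times is Beta distributed.

alpha, beta = 2.0, 5.0

# Waiting times for the two multistep processes (unit rate)
t1 = np.random.gamma(alpha, 1.0, size=100000)
t2 = np.random.gamma(beta, 1.0, size=100000)

# Fraction of total waiting time taken by the first process
frac = t1 / (t1 + t2)

# Both should be close to α/(α+β)
print(frac.mean(), st.beta.mean(alpha, beta))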
Related distributions. The Beta distribution with $\alpha = \beta = 1$ is the Uniform distribution on the interval [0, 1]. More generally, the Beta distribution is the special case of the Dirichlet distribution for $K = 2$ outcomes (see below).
params = [dict(name='α', start=0.01, end=10, value=1, step=0.01),
dict(name='β', start=0.01, end=10, value=1, step=0.01)]
app = distribution_plot_app(x_min=0,
x_max=1,
scipy_dist=st.beta,
params=params,
x_axis_label='θ',
title='Beta')
bokeh.io.show(app, notebook_url=notebook_url)
So far, we have looked at univariate distributions, but we will consider multivariate distributions in class, and you will encounter them in your research. First, we consider a discrete multivariate distribution, the Multinomial.
Story. This is a generalization of the Binomial distribution. Instead of a Bernoulli trial consisting of two outcomes, each trial has $K$ outcomes. The probability of getting $y_1$ of outcome 1, $y_2$ of outcome 2, ..., and $y_K$ of outcome $K$ out of a total of $N$ trials is Multinomially distributed.

Example. There are two alleles in a population, A and a. Each individual may have genotype AA, Aa, or aa. The probability distribution describing having $y_1$ AA individuals, $y_2$ Aa individuals, and $y_3$ aa individuals in a population of $N$ total individuals is Multinomially distributed.

Parameters. $N$, the total number of trials, and $\boldsymbol{\theta} = \{\theta_1, \theta_2, \ldots, \theta_K\}$, the probabilities of each outcome. Note that $\sum_i \theta_i = 1$ and there is a further restriction that $\sum_i y_i = N$.
Support. The Multinomial distribution is supported on the set of length-$K$ arrays of nonnegative integers whose entries sum to $N$.
Usage
The usage below assumes that theta is a length $K$ array.
Package | Syntax |
---|---|
NumPy | np.random.multinomial(N, theta) |
SciPy | scipy.stats.multinomial(N, theta) |
Stan sampling | multinomial(theta) |
Stan rng | multinomial_rng(theta, N) |
Probability mass function. \begin{align} f(\mathbf{y};\boldsymbol{\theta}, N) = \frac{N!}{y_1!\,y_2!\cdots y_K!}\,\theta_1^{y_1}\,\theta_2^{y_2}\cdots \theta_K^{y_K}. \end{align}
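A minimal sketch (N = 10 and θ arbitrary): each Multinomial draw is a length-$K$ array of counts that sums to $N$.

N = 10
theta = np.array([0.2, 0.3, 0.5])

# Each row is one Multinomial draw
samples = np.random.multinomial(N, theta, size=100000)

# Every row sums to N, and the mean counts are close to Nθ
print(samples.sum(axis=1)[:5])
print(samples.mean(axis=0))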
Related distributions. The Multinomial distribution is the generalization of the Binomial distribution to $K$ outcomes; the Binomial is the special case $K = 2$.

Notes. For the Stan sampling statement, the total number of trials $N$ is implied, since $N = \sum_i y_i$.

We will consider two continuous multivariate distributions here, the multivariate Gaussian and the Dirichlet. Generally, plotting multivariate PDFs is difficult, but bivariate PDFs may be conveniently plotted as contour plots. In our investigation of the multivariate Gaussian distribution, I will also demonstrate how to make plots of bivariate PDFs.
Story. This is a generalization of the univariate Gaussian.
Example. Finch beaks are measured for beak depth and beak length. The resulting distribution of depths and lengths is Gaussian distributed. In this case, the Gaussian is bivariate, with $\boldsymbol{\mu} = (\mu_\mathrm{d}, \mu_\mathrm{l})$ and \begin{align} \mathsf{\Sigma} = \begin{pmatrix} \sigma_\mathrm{d}^2 & \sigma_\mathrm{dl} \\ \sigma_\mathrm{dl} & \sigma_\mathrm{l}^2 \end{pmatrix}. \end{align}
Parameters. There is one vector-valued parameter, $\boldsymbol{\mu}$, and a matrix-valued parameter, $\mathsf{\Sigma}$, referred to respectively as the mean and covariance matrix. The covariance matrix is symmetric and strictly positive definite.
Support. The K-variate Gaussian distribution is supported on $\mathbb{R}^K$.
Probability density function.
\begin{align} f(\mathbf{y};\boldsymbol{\mu}, \mathsf{\Sigma}) = \frac{1}{\sqrt{(2\pi)^K \det \mathsf{\Sigma}}}\,\mathrm{exp}\left[-\frac{1}{2}(\mathbf{y} - \boldsymbol{\mu})^T \cdot \mathsf{\Sigma}^{-1}\cdot (\mathbf{y} - \boldsymbol{\mu})\right]. \end{align}
Usage
The usage below assumes that mu is a length $K$ array, Sigma is a $K\times K$ symmetric positive definite matrix, and L is a $K\times K$ lower-triangular matrix with strictly positive values on the diagonal that is a Cholesky factor.

Package | Syntax |
---|---|
NumPy | np.random.multivariate_normal(mu, Sigma) |
SciPy | scipy.stats.multivariate_normal(mu, Sigma) |
Stan | multi_normal(mu, Sigma) |
NumPy Cholesky | np.random.multivariate_normal(mu, np.dot(L, L.T)) |
SciPy Cholesky | scipy.stats.multivariate_normal(mu, np.dot(L, L.T)) |
Stan Cholesky | multi_normal_cholesky(mu, L) |
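A minimal sketch of the Cholesky usage (μ and Σ arbitrary): sampling with Σ or with its Cholesky factor $\mathsf{L}$, where $\mathsf{\Sigma} = \mathsf{L}\cdot\mathsf{L}^T$, gives the same distribution.

mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

# Cholesky factor: Σ = L·Lᵀ
L = np.linalg.cholesky(Sigma)

# Sample using the reconstructed covariance matrix
samples = np.random.multivariate_normal(mu, np.dot(L, L.T), size=100000)

# Sample covariance should be close to Σ
print(np.cov(samples, rowvar=False))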
Related distributions. The multivariate Gaussian is the generalization of the univariate Gaussian to $K$ dimensions; each marginal of a multivariate Gaussian is itself Gaussian distributed.

Notes. Working with a Cholesky factor $\mathsf{L}$, where $\mathsf{\Sigma} = \mathsf{L}\cdot\mathsf{L}^T$, is typically more numerically stable and efficient, which is why Stan provides multi_normal_cholesky.
Story. This is a probability distribution for positive definite correlation matrices, or equivalently for their Cholesky factors (which is what we use in practice).
Parameter. There is one positive scalar parameter, $\eta$, which tunes the strength of the correlations. If $\eta = 1$, the density is uniform over all valid correlation matrices. If $\eta > 1$, matrices with a stronger diagonal (and therefore smaller correlations) are favored. If $\eta < 1$, the diagonal is weak and strong correlations are favored.
Support. The LKJ distribution is supported over the set of $K\times K$ Cholesky factors of real symmetric positive definite matrices.
Probability density function. For a correlation matrix $\mathsf{\Omega}$, the LKJ density is proportional to $(\det\mathsf{\Omega})^{\eta - 1}$. In practice, we work with the distribution over Cholesky factors and never need to evaluate the normalization constant.
Usage
Package | Syntax |
---|---|
NumPy | not available |
SciPy | not available |
Stan | lkj_corr_cholesky(eta) |
Notes.
The most common use case is as a prior for a covariance matrix. Note that the LKJ distribution gives Cholesky factors for correlation matrices, not covariance matrices. To get the covariance Cholesky factor from the correlation Cholesky factor, you need to multiply the correlation Cholesky factor by a diagonal matrix constructed from the standard deviations of the individual variates. Here is an example for a multivariate Gaussian in Stan.
parameters {
// Vector of means
vector[K] mu;
// Cholesky factor for the correlation matrix
cholesky_factor_corr[K] L_Omega;
// Sqrt of variances for each variate
vector<lower=0>[K] L_std;
}
model {
// Cholesky factor for covariance matrix
matrix[K, K] L_Sigma = diag_pre_multiply(L_std, L_Omega);
// Prior on Cholesky decomposition of correlation matrix
L_Omega ~ lkj_corr_cholesky(1);
// Prior on standard deviations for each variate
L_std ~ normal(0, 2.5);
// Likelihood
y ~ multi_normal_cholesky(mu, L_Sigma);
}
Story. The Dirichlet distribution is a generalization of the Beta distribution. It is a probability distribution describing probabilities of outcomes. Instead of describing the probability of one of two outcomes of a Bernoulli trial, as the Beta distribution does, it describes the probabilities of $K$ outcomes (of which $K-1$ are independent, since the probabilities must sum to one). The Beta distribution is the special case of $K = 2$.
Parameters. The parameters are $\alpha_1, \alpha_2, \ldots \alpha_K$, all strictly positive, defined analogously to $\alpha$ and $\beta$ of the Beta distribution.
Support. The Dirichlet distribution is supported on the $K$-simplex, i.e., the set of $\boldsymbol{\theta}$ where each $\theta_i$ is on the interval $[0, 1]$ and $\sum_{i=1}^K \theta_i = 1$.
Probability density function. \begin{align} f(\boldsymbol{\theta};\boldsymbol{\alpha}) = \frac{1}{B(\boldsymbol{\alpha})}\,\prod_{i=1}^K \theta_i^{\alpha_i-1}, \end{align} where \begin{align} B(\boldsymbol{\alpha}) = \frac{\prod_{i=1}^K\Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^K \alpha_i\right)} \end{align} is the multivariate Beta function.
Usage
Package | Syntax |
---|---|
NumPy | np.random.dirichlet(alpha) |
SciPy | scipy.stats.dirichlet(alpha) |
Stan | dirichlet(alpha) |
Related distributions. The Beta distribution is the special case of the Dirichlet distribution with $K = 2$.

Notes. A Dirichlet random variable may be generated by drawing $K$ independent Gamma variables with parameters $\alpha_i$ and 1 and normalizing them by their sum. This is the trick used in the Stan program below.
data {
int<lower=1> K;
}
parameters {
vector<lower=0>[K] alpha;
positive_ordered[K] lambda;
}
transformed parameters {
simplex[K] theta = lambda / sum(lambda);
}
model {
target += gamma_lpdf(lambda | alpha, 1);
}
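Here is a NumPy analog of the Gamma normalization trick in the Stan program above (a minimal sketch; the values of α are arbitrary): normalized Gamma draws are Dirichlet distributed.

alpha = np.array([2.0, 3.0, 5.0])

# Draw K independent Gamma variables and normalize by their sum
g = np.random.gamma(alpha, 1.0, size=(100000, len(alpha)))
theta = g / g.sum(axis=1, keepdims=True)

# Sample means should be close to the exact Dirichlet mean, α/Σα
print(theta.mean(axis=0))
print(st.dirichlet.mean(alpha))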
This document serves as a catalog of probability distributions that will be useful for you in your statistical modeling. As we will see, the mathematical expression of the PDF is not often needed. What is most important in your modeling is that you know the story of the distribution.
%load_ext watermark
%watermark -v -p numpy,scipy,bokeh,jupyterlab