(c) 2018 Justin Bois. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.
This document was prepared at Caltech with financial support from the Donna and Benjamin M. Rosen Bioengineering Center.
This tutorial exercise was generated from a Jupyter notebook, which you can download here. Use the downloaded notebook to fill out your responses.
What is the difference between the prior distribution and posterior distribution?
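As a reminder of how the two distributions are connected, recall Bayes's theorem; the notation below (parameters $\theta$, data $y$) is generic and not tied to any particular model.

\begin{align}
g(\theta \mid y) = \frac{f(y \mid \theta)\,g(\theta)}{f(y)},
\end{align}

where $g(\theta)$ is the prior, $f(y \mid \theta)$ is the likelihood, $f(y)$ is the evidence, and $g(\theta \mid y)$ is the posterior.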
Why are prior predictive checks an important step in the process of building a generative model?
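For concreteness, a prior predictive check amounts to repeatedly drawing parameter values from the prior and simulating data sets from the likelihood with those values. Below is a minimal sketch assuming a hypothetical model of repeated measurements that are Normally distributed; the specific priors and sample sizes are illustrative assumptions, not part of this exercise.

```python
import numpy as np

rng = np.random.default_rng(3252)

# Hypothetical generative model for illustration: y ~ Normal(mu, sigma),
# with mu ~ Normal(10, 3) and sigma ~ HalfNormal(2). These choices are
# assumptions made up for this sketch.
n_ppc_samples = 1000
n_data = 50

prior_pred = np.empty((n_ppc_samples, n_data))
for i in range(n_ppc_samples):
    # Draw a parameter set from the prior
    mu = rng.normal(10.0, 3.0)
    sigma = np.abs(rng.normal(0.0, 2.0))
    # Simulate a data set from the likelihood with those parameters
    prior_pred[i] = rng.normal(mu, sigma, size=n_data)
```

One would then plot the simulated data sets (for example as ECDFs) and ask whether they look like data the experiment could conceivably produce.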
Why is it necessary to compute summaries of the posterior? In other words, why can't we just write down the posterior (which is, after all, what building the model gives us) and be done with it?
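As a concrete example of what "summaries" means, suppose we have an array of posterior samples for a single parameter; typical summaries are a point estimate and a credible interval. The samples below are synthetic stand-ins (e.g., for MCMC output), used only for illustration.

```python
import numpy as np

# Synthetic stand-in for posterior samples of a parameter
rng = np.random.default_rng(3252)
samples = rng.normal(5.0, 0.5, size=10_000)

# Typical summaries: a point estimate and a 95% credible interval
median = np.median(samples)
ci_low, ci_high = np.percentile(samples, [2.5, 97.5])

print(f"median = {median:.2f}, 95% credible interval = [{ci_low:.2f}, {ci_high:.2f}]")
```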
When performing parameter estimation by optimization, after we find the MAP (maximum a posteriori estimate), why do we locally approximate the posterior as a (possibly multivariate) Gaussian?
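For reference, the Gaussian form comes from a second-order Taylor expansion of the log posterior about the MAP $\theta^*$ (the linear term vanishes because $\theta^*$ is a maximum); the notation here is generic.

\begin{align}
\ln g(\theta \mid y) \approx \ln g(\theta^* \mid y) - \frac{1}{2}\,(\theta - \theta^*)^\mathsf{T} B \,(\theta - \theta^*),
\end{align}

where $B = -\nabla\nabla \ln g(\theta \mid y)\big|_{\theta = \theta^*}$ is the negative Hessian of the log posterior evaluated at the MAP. Exponentiating gives a (possibly multivariate) Gaussian with covariance $\Sigma = B^{-1}$.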