(c) 2016 Justin Bois. This work is licensed under a Creative Commons Attribution License CC-BY 4.0. All code contained herein is licensed under an MIT license.
This tutorial exercise was generated from a Jupyter notebook. You can download the notebook here; you can also view it here. Use the downloaded Jupyter notebook to fill out your responses.
Describe how some of the nonlinear regression we did this past week is a maximum likelihood estimation.
Why couldn't we use scipy.optimize.leastsq() on the Singer data from Tutorial 4?
Why is it important to "burn in" walkers when performing an MCMC calculation?
Say we used MCMC to sample a posterior distribution that had 6 parameters, $P(a_1, a_2, a_3, a_4, a_5, a_6 \mid D, I)$. From the MCMC samples, how can we get samples for the marginalized distribution $P(a_3 \mid D, I)$?
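As a hint for thinking about this question, here is a minimal sketch (using hypothetical, randomly generated samples in place of a real MCMC trace) of how MCMC samples of the full posterior are typically stored and how a single parameter's samples can be pulled out of the array:

```python
import numpy as np

# Hypothetical stand-in for an MCMC trace: an array of shape
# (n_samples, 6), one column per parameter a_1, ..., a_6.
# A real sampler (e.g., emcee) would produce a flattened chain
# of the same shape.
rng = np.random.default_rng(42)
samples = rng.normal(size=(10000, 6))

# With zero-based indexing, column 2 holds the samples of a_3.
a3_samples = samples[:, 2]

print(a3_samples.shape)
```

The samples in `a3_samples` are one-dimensional, with one entry per MCMC step; consider what distribution they are drawn from when the values of the other five parameters are simply ignored.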