
Lessons

  • 0. Preparing for the course
  • 1. Probability and the logic of scientific reasoning
  • 2. Plotting posteriors
  • 3. Marginalization by numerical quadrature
  • 4. Conjugacy
  • E1. To be completed after lesson 4
  • 5. Introduction to Bayesian modeling
  • 6. Parameter estimation by optimization
  • E2. To be completed after lesson 6
  • 7. Introduction to Markov chain Monte Carlo
  • 8. AWS setup and usage
  • 9. Introduction to MCMC with Stan
  • 10. Mixture models and label switching with MCMC
  • 11. Regression with MCMC
  • E3. To be completed after lesson 11
  • 12. Display of MCMC results
  • 13. Model building with prior predictive checks
  • 14. Posterior predictive checks
  • E4. To be completed after lesson 14
  • 15. Collector’s box of distributions
  • 16. MCMC diagnostics
  • 17. A diagnostics case study: Artificial funnel of hell
  • E5. To be completed after lesson 17
  • 18. Model comparison
  • 19. Model comparison in practice
  • E6. To be completed after lesson 19
  • 20. Hierarchical models
  • 21. Implementation of hierarchical models
  • E7. To be completed after lesson 21
  • 22. Principled analysis pipelines
  • 23. Simulation-based calibration and related checks in practice
  • E8. To be completed after lesson 23
  • 24. Introduction to Gaussian processes
  • 25. Implementation of Gaussian processes
  • E9. To be completed after lesson 25
  • 26. Variational Bayesian inference
  • 27. Wrap-up

Recitations

  • R1. Review of MLE
  • R2. Review of probability
  • R3. Choosing priors
  • R4. Stan installation and use of AWS
  • R5. A Bayesian modeling case study: Ant traffic jams
  • R6. Practice model building
  • R7. Introduction to Hamiltonian Monte Carlo
  • R8. Discussion of HW 10 project proposals
  • R9. Sampling discrete parameters with Stan

Homework

  • 0. Configuring your team
  • 1. Intuitive generative modeling
  • 2. Analytical and graphical methods for analysis of the posterior
  • 3. Maximum a posteriori parameter estimation
  • 4. Sampling with MCMC
  • 5. Inference with Stan I
  • 6. Practice building and assessing Bayesian models
  • 7. Model comparison
  • 8. Hierarchical models
  • 9. Principled pipelines and hierarchical modeling of noise
  • 10. The grand finale
  • 11. Course feedback

Schedule

  • Schedule overview
  • Homework due dates
  • Lesson exercise due dates
  • Weekly schedule

Policies

  • Meetings
  • Lab sessions
  • Lessons and lesson exercises
  • The BE/Bi 103 GitHub group
  • Homework
  • Grading
  • Collaboration policy and Honor Code
  • Excused absences and extensions
  • Course communications
  • “Ediquette”

Resources

  • Software
  • Reading/tutorials

1. Probability and the logic of scientific reasoning

  • Probability as the logic of science
  • Notation of parts of Bayes’s theorem
  • Marginalization
  • Bayes’s theorem as a model for learning
  • Probability distributions
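
As a quick pointer to what these topics cover, Bayes’s theorem for a parameter θ given observed data y can be written as follows (the symbols here are illustrative and are not necessarily the notation the lesson itself adopts):

\[
g(\theta \mid y) = \frac{f(y \mid \theta)\, g(\theta)}{f(y)},
\qquad
f(y) = \int f(y \mid \theta)\, g(\theta)\, \mathrm{d}\theta,
\]

where g(θ | y) is the posterior, f(y | θ) the likelihood, g(θ) the prior, and f(y) the evidence, the last obtained by marginalizing over θ.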

Last updated on Mar 16, 2021.

© 2021 Justin Bois and BE/Bi 103 b course staff. With the exception of pasted graphics, where the source is noted, this work is licensed under a Creative Commons Attribution (CC BY 4.0) license. All code contained herein is licensed under an MIT license.

This document was prepared at Caltech with financial support from the Donna and Benjamin M. Rosen Bioengineering Center.


