Applied Bayesian Statistics
Bayesian statistics differs from its classical "frequentist" counterpart in how one interprets quantities of interest that are not directly measured: whether or not they are regarded as the outcomes of random experiments. The genius of Thomas Bayes, an 18th-century Presbyterian minister from Southern England, was to propose an astoundingly simple solution to a fundamentally ill-posed and often intractable problem, a solution consistent with the axioms of probability. The price of this conceptual simplicity is the requirement of prior beliefs about the quantities of interest. In addition, in modern implementations, characterizing the statistical properties of Bayes's solution is itself non-trivial, exacting a second, computational price. In this brief practical introduction, we will discuss these philosophical and computational considerations, including what the choice of priors really implies for modeling and for belief in the scientific method. We will review the tools at our disposal for implementing Bayesian statistics in practice, particularly in the context of linear models, where a supervised machine-learning interpretation is straightforward. We will present examples of implementations across various disciplines and contexts. This will allow us to discuss the tenuous notion of hyperparameters and the distinction between ordinary and hierarchical Bayesian modeling. We will discuss how Bayesian posterior predictive distributions differ from Bayesian prediction via reconstruction. Time permitting, we will discuss the value of validation metrics for gauging the honesty and efficiency of Bayesian uncertainty quantification.
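For concreteness, the simple solution in question is Bayes's theorem (the notation below is ours; none is fixed above): given observed data $y$ and an unmeasured quantity of interest $\theta$, a prior $p(\theta)$ and a likelihood $p(y \mid \theta)$ combine into a posterior, from which predictions for new data $\tilde{y}$ follow via the posterior predictive distribution:

\[
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{\int p(y \mid \theta')\, p(\theta')\, \mathrm{d}\theta'},
\qquad
p(\tilde{y} \mid y) \;=\; \int p(\tilde{y} \mid \theta)\, p(\theta \mid y)\, \mathrm{d}\theta .
\]

The normalizing integral in the denominator (the evidence) is the typically intractable quantity behind the computational price mentioned above.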
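As an illustrative sketch of the linear-model setting (a hypothetical toy example: the synthetic dataset, the Gaussian prior, and the known noise variance are our assumptions, not prescribed above), conjugate Bayesian linear regression admits a closed-form posterior and posterior predictive:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: y = X w + noise, with known noise variance sigma2.
n, d = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])  # intercept + one feature
w_true = np.array([0.5, -1.0])
sigma2 = 0.25
y = X @ w_true + rng.normal(0.0, np.sqrt(sigma2), n)

# Gaussian prior on the weights: w ~ N(0, tau2 * I); tau2 is a hyperparameter.
tau2 = 1.0

# Conjugate posterior: w | y ~ N(mu_post, Sigma_post), where
#   Sigma_post = (X^T X / sigma2 + I / tau2)^{-1},  mu_post = Sigma_post X^T y / sigma2.
Sigma_post = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / tau2)
mu_post = Sigma_post @ (X.T @ y) / sigma2

# Posterior predictive at a new input x*: Gaussian with mean x*^T mu_post and
# variance x*^T Sigma_post x* + sigma2 (parameter uncertainty plus observation noise).
x_new = np.array([1.0, 0.3])
pred_mean = x_new @ mu_post
pred_sd = np.sqrt(x_new @ Sigma_post @ x_new + sigma2)
print(f"posterior mean of w: {mu_post}")
print(f"predictive at x*: mean={pred_mean:.3f}, sd={pred_sd:.3f}")
```

Because this toy model is conjugate, no sampling is needed; for non-conjugate models the same posterior and predictive quantities would typically be approximated numerically, for instance by MCMC, which is precisely the computational price discussed above.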