{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "(model_averaging)=\n", "# Model Averaging\n", "\n", ":::{post} Aug 2024\n", ":tags: model comparison, model averaging\n", ":category: intermediate\n", ":author: Osvaldo Martin\n", ":::" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "papermill": { "duration": 4.910288, "end_time": "2020-11-29T12:13:07.788552", "exception": false, "start_time": "2020-11-29T12:13:02.878264", "status": "completed" }, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running on PyMC v5.16.2+24.g799c98f41\n" ] } ], "source": [ "import os\n", "\n", "import arviz as az\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "import pymc as pm\n", "\n", "print(f\"Running on PyMC v{pm.__version__}\")" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "papermill": { "duration": 0.058811, "end_time": "2020-11-29T12:13:07.895012", "exception": false, "start_time": "2020-11-29T12:13:07.836201", "status": "completed" }, "tags": [] }, "outputs": [], "source": [ "rng = np.random.seed(2741)\n", "az.style.use(\"arviz-darkgrid\")" ] }, { "cell_type": "markdown", "metadata": { "papermill": { "duration": 0.068882, "end_time": "2020-11-29T12:13:08.020372", "exception": false, "start_time": "2020-11-29T12:13:07.951490", "status": "completed" }, "tags": [] }, "source": [ "When confronted with more than one model we have several options. One of them is to perform model selection as exemplified by the PyMC examples {ref}`pymc:model_comparison` and the {ref}`GLM-model-selection`, usually is a good idea to also include posterior predictive checks in order to decide which model to keep. Discarding all models except one is equivalent to affirm that, among the evaluated models, one is correct (under some criteria) with probability 1 and the rest are incorrect. In most cases this will be an overstatment that ignores the uncertainty we have in our models. This is somewhat similar to computing the full posterior and then just keeping a point-estimate like the posterior mean; we may become overconfident of what we really know. You can also browse the {doc}`blog/tag/model-comparison` tag to find related posts. \n", "\n", "An alternative to this dilema is to perform model selection but to acknoledge the models we discared. If the number of models are not that large this can be part of a technical discussion on a paper, presentation, thesis, and so on. If the audience is not technical enough, this may not be a good idea.\n", "\n", "Yet another alternative, the topic of this example, is to perform model averaging. The idea is to weight each model by its merit and generate predictions from each model, proportional to those weights. There are several ways to do this, including the three methods that will be briefly discussed in this notebook. You will find a more thorough explanation in the work by {cite:t}`Yao_2018` and {cite:t}`Yao_2022`. \n", "\n", "\n", "## Pseudo Bayesian model averaging\n", "\n", "Bayesian models can be weighted by their marginal likelihood, which is known as Bayesian Model Averaging. While this is theoretically appealing, it is problematic in practice: on the one hand the marginal likelihood is highly sensitive to the specification of the prior, in a way that parameter estimation is not, and on the other, computing the marginal likelihood is usually a challenging task. 
Additionally, Bayesian model averaging is flawed in the $\\mathcal{M}$-open setting, in which the true data-generating process is not one of the candidate models being fit {cite:t}`Yao_2018`. A more robust approach is to compute the expected log pointwise predictive density (ELPD)\n", "\n", "$$\n", "\\sum_i^N \\log \\int p(y_i \\mid \\theta) \\; p(\\theta \\mid y) \\, d\\theta\n", "$$\n", "\n", "where $N$ is the number of data points, $y_i$ is the i-th data point, $\\theta$ are the parameters of the model, $p(y_i \\mid \\theta)$ is the likelihood of the i-th data point given the parameters, and $p(\\theta \\mid y)$ is the posterior distribution.\n", "\n", "Once we have computed the ELPD for each model, we can compute the weights as\n", "\n", "$$w_i = \\frac {e^{dELPD_i}} {\\sum_j^M e^{dELPD_j}}$$\n", "\n", "where $dELPD_i$ is the difference between the ELPD of the i-th model and the best ELPD among all $M$ models.\n", "\n", "This approach is called pseudo Bayesian model averaging, or Akaike-like weighting, and is a heuristic to compute the relative probability of each model (given a fixed set of models). Note that we exponentiate to \"revert\" the effect of the logarithm in the ELPD formula, and the denominator is a normalization term that ensures the weights sum to one. With a pinch of salt, we can interpret these weights as the probability of each model explaining the data.\n", "\n", "So far so good, but the ELPD is a theoretical quantity that in practice we need to approximate. To do so, ArviZ offers two methods:\n", "\n", "* WAIC, the Widely Applicable Information Criterion\n", "* LOO, Pareto-smoothed importance sampling leave-one-out cross-validation\n", "\n", "Both require an InferenceData object with the log-likelihood group and are equally fast to compute. We recommend using LOO because it has better practical properties and better diagnostics (so we know when we are having issues with the ELPD estimation).\n", "\n", "## Pseudo Bayesian model averaging with Bayesian Bootstrapping\n", "\n", "The above formula for computing weights is a nice and simple approach, but it has one major caveat: it does not take into account the uncertainty in the computation of the ELPD. We could compute the standard error of the ELPD (assuming a Gaussian approximation) and modify the above formula accordingly. Or we can do something more robust, like using [Bayesian bootstrapping](http://www.sumsar.net/blog/2015/04/the-non-parametric-bootstrap-as-a-bayesian-model/) to estimate, and incorporate, this uncertainty.\n", "\n", "## Stacking\n", "\n", "The third approach we will discuss is known as _stacking of predictive distributions_ {cite:t}`Yao_2018`. The idea is to combine several models into a meta-model so as to minimize the divergence between the meta-model and the _true_ generating model. When using a logarithmic scoring rule this is equivalent to\n", "\n", "$$\\max_{w} \\frac{1}{n} \\sum_{i=1}^{n} \\log \\sum_{k=1}^{K} w_k \\, p(y_i \\mid y_{-i}, M_k)$$\n", "\n", "where $n$ is the number of data points and $K$ the number of models. To enforce a valid solution we constrain the weights so that $w_k \\ge 0$ and $\\sum_{k=1}^{K} w_k = 1$. \n", "\n", "The quantity $p(y_i \\mid y_{-i}, M_k)$ is the leave-one-out predictive distribution for model $M_k$. Computing it exactly would require fitting each model $n$ times, each time leaving out one data point. Fortunately, this is precisely what LOO approximates in a very efficient way, so we can use LOO and stacking together, as sketched below.\n",
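"\n", "As a minimal, self-contained sketch (with made-up numbers; ArviZ implements and optimizes all of this for us), here is how the pseudo-BMA weights and the stacking objective look in code:\n", "\n", "```python\n", "import numpy as np\n", "from scipy.optimize import minimize\n", "\n", "# pseudo-BMA: exponentiate the ELPD differences and normalize\n", "elpd = np.array([-825.3, -817.2])  # hypothetical elpd_loo values for two models\n", "d_elpd = elpd - elpd.max()\n", "pbma_weights = np.exp(d_elpd) / np.exp(d_elpd).sum()\n", "\n", "# stacking: maximize the mean log of the weighted predictive densities\n", "lpd = np.random.default_rng(0).normal(-3, 0.5, (100, 2))  # fake log p(y_i | y_-i, M_k)\n", "\n", "def neg_score(z):\n", "    w = np.exp(z) / np.exp(z).sum()  # softmax keeps the weights on the simplex\n", "    return -np.log(np.exp(lpd) @ w).mean()\n", "\n", "z_opt = minimize(neg_score, np.zeros(2)).x\n", "stacking_weights = np.exp(z_opt) / np.exp(z_opt).sum()\n", "```\n",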
"We could also use WAIC here, even though WAIC approximates the ELPD in a different way.\n", "\n", "## Weighted posterior predictive samples\n", "\n", "Once we have computed the weights, using any of the three methods above, we can use them to get weighted posterior predictive samples. We will illustrate how to do this using the body fat dataset {cite}`penrose1985`. This dataset has measurements from 251 individuals, including weight, height, and the circumferences of the abdomen, wrist, etc. Our goal is to predict the percentage of body fat, as estimated by the `siri` variable, which is also available in the dataset.\n", "\n", "Let's start by loading the data" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "papermill": { "duration": 1.114901, "end_time": "2020-11-29T12:13:09.196103", "exception": false, "start_time": "2020-11-29T12:13:08.081202", "status": "completed" }, "tags": [] }, "outputs": [ { "data": { "text/plain": [
\n", " | siri | \n", "age | \n", "weight | \n", "height | \n", "neck | \n", "chest | \n", "abdomen | \n", "hip | \n", "thigh | \n", "knee | \n", "ankle | \n", "biceps | \n", "forearm | \n", "wrist | \n", "
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | \n", "12.3 | \n", "23 | \n", "70.1 | \n", "172 | \n", "36.2 | \n", "93.1 | \n", "85.2 | \n", "94.5 | \n", "59.0 | \n", "37.3 | \n", "21.9 | \n", "32.0 | \n", "27.4 | \n", "17.1 | \n", "
1 | \n", "6.1 | \n", "22 | \n", "78.8 | \n", "184 | \n", "38.5 | \n", "93.6 | \n", "83.0 | \n", "98.7 | \n", "58.7 | \n", "37.3 | \n", "23.4 | \n", "30.5 | \n", "28.9 | \n", "18.2 | \n", "
2 | \n", "25.3 | \n", "22 | \n", "70.0 | \n", "168 | \n", "34.0 | \n", "95.8 | \n", "87.9 | \n", "99.2 | \n", "59.6 | \n", "38.9 | \n", "24.0 | \n", "28.8 | \n", "25.2 | \n", "16.6 | \n", "
3 | \n", "10.4 | \n", "26 | \n", "84.0 | \n", "184 | \n", "37.4 | \n", "101.8 | \n", "86.4 | \n", "101.2 | \n", "60.1 | \n", "37.3 | \n", "22.8 | \n", "32.4 | \n", "29.4 | \n", "18.2 | \n", "
4 | \n", "28.7 | \n", "24 | \n", "83.8 | \n", "181 | \n", "34.4 | \n", "97.3 | \n", "100.0 | \n", "101.9 | \n", "63.2 | \n", "42.2 | \n", "24.0 | \n", "32.2 | \n", "27.7 | \n", "17.7 | \n", "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 7 seconds.\n", "Sampling: [siri]\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "da2c4d9bc89345c2834b03d04dead94b", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "with pm.Model() as model_0:\n", " alpha = pm.Normal(\"alpha\", mu=0, sigma=1)\n", " beta = pm.Normal(\"beta\", mu=0, sigma=1)\n", " sigma = pm.HalfNormal(\"sigma\", 5)\n", "\n", " mu = alpha + beta * d[\"abdomen\"]\n", "\n", " siri = pm.Normal(\"siri\", mu=mu, sigma=sigma, observed=d[\"siri\"])\n", "\n", " idata_0 = pm.sample(idata_kwargs={\"log_likelihood\": True}, random_seed=rng)\n", " pm.sample_posterior_predictive(idata_0, extend_inferencedata=True, random_seed=rng)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Auto-assigning NUTS sampler...\n", "Initializing NUTS using jitter+adapt_diag...\n", "Multiprocess sampling (4 chains in 4 jobs)\n", "NUTS: [alpha, beta, sigma]\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "213552d5ddeb4dcb9b1758e941e77887", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 36 seconds.\n", "Sampling: [siri]\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "91f277f5f79f4e48a6f6b0df6d7b52df", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "with pm.Model() as model_1:\n", " alpha = pm.Normal(\"alpha\", mu=0, sigma=1)\n", " beta = pm.Normal(\"beta\", mu=0, sigma=1, shape=3)\n", " sigma = pm.HalfNormal(\"sigma\", 5)\n", "\n", " mu = alpha + pm.math.dot(beta, d[[\"wrist\", \"height\", \"weight\"]].T)\n", "\n", " siri = pm.Normal(\"siri\", mu=mu, sigma=sigma, observed=d[\"siri\"])\n", "\n", " idata_1 = pm.sample(idata_kwargs={\"log_likelihood\": True}, random_seed=rng)\n", " pm.sample_posterior_predictive(idata_1, extend_inferencedata=True, random_seed=rng)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before LOO (or WAIC) to compare and or average models we should check that we do not have sampling issues and posterior predictive checks are resonable. For the sake of brevity we are going to skip these steps and instead jump to the model averaging.\n", "\n", "First we need to call `az.compare` to compute the LOO values for each model and the weights using `stacking`. These are the default options, if you want to perform pseudo Bayesian model averaging you can use the `method='BB-pseudo-BMA'` that includes the Bayesian Bootstrap estimation of the uncertainty in the ELPD.\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", " | rank | \n", "elpd_loo | \n", "p_loo | \n", "elpd_diff | \n", "weight | \n", "se | \n", "dse | \n", "warning | \n", "scale | \n", "
---|---|---|---|---|---|---|---|---|---|
model_1 | \n", "0 | \n", "-817.216895 | \n", "3.626704 | \n", "0.000000 | \n", "0.639236 | \n", "10.496342 | \n", "0.000000 | \n", "False | \n", "log | \n", "
model_0 | \n", "1 | \n", "-825.344978 | \n", "1.832909 | \n", "8.128083 | \n", "0.360764 | \n", "9.970768 | \n", "8.698358 | \n", "False | \n", "log | \n", "
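] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# compare the models; LOO and stacking weights are the defaults\n", "comp = az.compare({\"model_0\": idata_0, \"model_1\": idata_1})\n", "comp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We get the stacking weights: roughly 0.64 for `model_1` and 0.36 for `model_0`. To generate weighted posterior predictive samples we can use `az.weight_predictions`, which takes a list of InferenceData objects and resamples their posterior predictive draws in proportion to the given weights. The following cell is a minimal sketch; note that the weights are passed in the row order of the comparison table, so `idata_1` comes first.\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "source": [ "# the order of the idatas list must match the order of the weights\n", "ppc_w = az.weight_predictions([idata_1, idata_0], weights=comp[\"weight\"].values)\n", "ppc_w" ], "outputs": [ { "data": { "text/plain": [ "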
<xarray.Dataset>\n", "Dimensions: (siri_dim_2: 251, sample: 3999)\n", "Coordinates:\n", " * siri_dim_2 (siri_dim_2) int64 0 1 2 3 4 5 6 ... 244 245 246 247 248 249 250\n", " * sample (sample) object MultiIndex\n", " * chain (sample) int64 2 3 1 2 1 2 3 3 3 3 3 3 ... 3 3 3 1 0 0 2 0 2 1 2\n", " * draw (sample) int64 682 691 550 397 831 520 ... 638 997 295 483 606 9\n", "Data variables:\n", " siri (siri_dim_2, sample) float64 17.75 16.43 14.7 ... 30.98 27.67\n", "Attributes:\n", " created_at: 2024-08-23T16:10:41.836182+00:00\n", " arviz_version: 0.20.0.dev0\n", " inference_library: pymc\n", " inference_library_version: 5.16.2+24.g799c98f41
<xarray.Dataset>\n", "Dimensions: (siri_dim_0: 251)\n", "Coordinates:\n", " * siri_dim_0 (siri_dim_0) int64 0 1 2 3 4 5 6 ... 244 245 246 247 248 249 250\n", "Data variables:\n", " siri (siri_dim_0) float64 12.3 6.1 25.3 10.4 ... 33.6 29.3 26.0 31.9\n", "Attributes:\n", " created_at: 2024-08-23T16:10:41.440917+00:00\n", " arviz_version: 0.20.0.dev0\n", " inference_library: pymc\n", " inference_library_version: 5.16.2+24.g799c98f41