Quantile regression

This notebook explores how to predict intervals with available techniques in scikit-learn.

We only cover a subset of the available techniques. For instance, conformal prediction is dedicated to this task - see packages such as MAPIE (scikit-learn-contrib/MAPIE) for broader coverage.

Predicting intervals with linear models

This section revisits linear models and shows how to predict intervals with quantile regression.

First, let's load the penguins dataset for our regression task.

# When using JupyterLite, uncomment the following line to install the `skrub` package.
# %pip install skrub
import matplotlib.pyplot as plt
import skrub

skrub.patch_display()
import pandas as pd

penguins = pd.read_csv("../datasets/penguins_regression.csv")
penguins

In this dataset, we predict the body mass of a penguin given its flipper length.

X = penguins[["Flipper Length (mm)"]]
y = penguins["Body Mass (g)"]

In our study of linear models, we saw that LinearRegression minimizes the mean squared error and predicts the conditional mean of the target.
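As a quick numerical sanity check (a sketch on synthetic data, not part of the penguins analysis), we can verify this claim directly: among all constant predictions, the one minimizing the mean squared error is the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
y_synthetic = rng.normal(loc=4200, scale=800, size=200)  # synthetic body masses

# Mean squared error of a constant prediction c, evaluated on a grid
candidates = np.linspace(y_synthetic.min(), y_synthetic.max(), 1001)
costs = [np.mean((y_synthetic - c) ** 2) for c in candidates]
best = candidates[np.argmin(costs)]

# The grid minimizer lands (numerically) on the sample mean
print(best, y_synthetic.mean())
```

The same reasoning carries over to the conditional setting: minimizing the squared error pushes the model toward the conditional mean of the target.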

Here, we fit this model and predict several data points between the minimum and maximum flipper length.

from sklearn.linear_model import LinearRegression

model_estimate_mean = LinearRegression()
model_estimate_mean.fit(X, y)
LinearRegression()
import numpy as np

X_test = pd.DataFrame(
    {"Flipper Length (mm)": np.linspace(X.min(axis=None), X.max(axis=None), 100)}
)
y_pred_mean = model_estimate_mean.predict(X_test)
_, ax = plt.subplots()
penguins.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)", ax=ax, alpha=0.5)
ax.plot(
    X_test["Flipper Length (mm)"],
    y_pred_mean,
    color="tab:orange",
    label="predicted mean",
    linewidth=3,
)
ax.legend()
plt.show()

We discussed how mean estimators become sensitive to outliers. Sometimes we prefer a more robust estimator like the median.
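A small numpy illustration (synthetic masses with one hypothetical corrupted measurement) makes the contrast concrete: a single outlier drags the mean far off, while the median barely moves.

```python
import numpy as np

body_mass = np.array([3500.0, 3800.0, 4100.0, 4300.0, 4600.0])
with_outlier = np.append(body_mass, 20000.0)  # one corrupted measurement

# The mean shifts by thousands of grams...
print(body_mass.mean(), with_outlier.mean())
# ...while the median barely changes
print(np.median(body_mass), np.median(with_outlier))
```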

Here, QuantileRegressor minimizes the mean absolute error and predicts the conditional median.

from sklearn.linear_model import QuantileRegressor

model_estimate_median = QuantileRegressor(quantile=0.5)
model_estimate_median.fit(X, y)
y_pred_median = model_estimate_median.predict(X_test)
_, ax = plt.subplots()
penguins.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)", ax=ax, alpha=0.5)
ax.plot(
    X_test["Flipper Length (mm)"],
    y_pred_mean,
    color="tab:orange",
    label="predicted mean",
    linewidth=3,
)
ax.plot(
    X_test["Flipper Length (mm)"],
    y_pred_median,
    color="tab:green",
    label="predicted median",
    linewidth=3,
    linestyle="--",
)
ax.legend()
plt.show()

To build prediction intervals, we predict specific quantiles, generalizing quantile regression beyond the median. The pinball loss generalizes the mean absolute error to any quantile.

The quantile parameter sets which quantile to predict. For an 80% prediction interval, we predict the 10th and 90th percentiles.
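To see why the pinball loss targets a given quantile, here is a minimal numpy implementation (a sketch; scikit-learn also exposes this metric as `sklearn.metrics.mean_pinball_loss`). At quantile 0.9, under-predictions are penalized nine times more than over-predictions, so the minimizer is pushed upward toward the 90th percentile.

```python
import numpy as np

def pinball_loss(y_true, y_pred, quantile):
    """Average pinball loss; equals half the MAE when quantile=0.5."""
    diff = y_true - y_pred
    return np.mean(np.maximum(quantile * diff, (quantile - 1) * diff))

y_true = np.array([3800.0, 4100.0, 4500.0])
y_pred = np.array([4000.0, 4000.0, 4000.0])

# Asymmetric penalty at quantile=0.9: under-predictions cost 9x more
print(pinball_loss(y_true, y_pred, quantile=0.9))
# At quantile=0.5, the loss is half the mean absolute error
print(pinball_loss(y_true, y_pred, quantile=0.5))
```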

model_estimate_10 = QuantileRegressor(quantile=0.1)
model_estimate_90 = QuantileRegressor(quantile=0.9)

model_estimate_10.fit(X, y)
model_estimate_90.fit(X, y)

y_pred_10 = model_estimate_10.predict(X_test)
y_pred_90 = model_estimate_90.predict(X_test)
_, ax = plt.subplots()
penguins.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)", ax=ax, alpha=0.5)
ax.plot(
    X_test["Flipper Length (mm)"],
    y_pred_mean,
    color="tab:orange",
    label="predicted mean",
    linewidth=3,
)
ax.plot(
    X_test["Flipper Length (mm)"],
    y_pred_median,
    color="tab:green",
    label="predicted median",
    linewidth=3,
    linestyle="--",
)
ax.fill_between(
    X_test["Flipper Length (mm)"],
    y_pred_10,
    y_pred_90,
    alpha=0.2,
    label="80% coverage interval",
)
ax.legend()
plt.show()
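To check that the interval earns its name, we can measure its empirical coverage: the fraction of observations falling inside the band, which should be close to the nominal 80%. The sketch below uses constant empirical quantiles on synthetic data; in the notebook, you would instead compare `y` against `model_estimate_10.predict(X)` and `model_estimate_90.predict(X)`.

```python
import numpy as np

rng = np.random.default_rng(0)
y_synthetic = rng.normal(loc=4200, scale=800, size=1000)

# Constant quantile "models": the empirical 10th and 90th percentiles
q10, q90 = np.quantile(y_synthetic, [0.1, 0.9])

inside = (y_synthetic >= q10) & (y_synthetic <= q90)
coverage = inside.mean()
print(f"empirical coverage: {coverage:.2f}")  # close to the nominal 0.80
```

Note that coverage measured on the training data is optimistic; held-out data gives a more honest estimate.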

Predicting intervals with tree-based models

Exercise:

Now repeat the previous experiment using HistGradientBoostingRegressor. Read the documentation to find the parameters that optimize the right loss function.

Plot the conditional mean, median and 80% prediction interval.

# Write your code here.