Losses

This notebook explores linear models and loss functions in depth. We reuse the regression problem from the previous notebook, which models the relationship between penguins' flipper length and body mass.

# When using JupyterLite, you will need to uncomment and install the `skrub` package.
# %pip install skrub
import matplotlib.pyplot as plt
import skrub

skrub.patch_display()  # make nice display for pandas tables
import pandas as pd

data = pd.read_csv("../datasets/penguins_regression.csv")
data

data.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)")
plt.show()
[Figure: scatter plot of body mass (g) versus flipper length (mm).]

The data shows a clear linear relationship between flipper length and body mass. We use body mass as our target variable and flipper length as our feature.

X, y = data[["Flipper Length (mm)"]], data["Body Mass (g)"]

In the previous notebook, we used scikit-learn’s LinearRegression to learn model parameters from data with fit and make predictions with predict.

from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(X, y)
predicted_target = model.predict(X)
ax = data.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)")
ax.plot(
    X, predicted_target, label=model.__class__.__name__, color="tab:orange", linewidth=4
)
ax.legend()
plt.show()
[Figure: scatter plot of the penguins data with the LinearRegression fit overlaid.]

The linear regression model minimizes the error between the true and predicted targets. A general term for this error is the "loss function". Scikit-learn's LinearRegression specifically minimizes the squared error (least squares):

\[ loss = (y - \hat{y})^2 \]

or equivalently:

\[ loss = (y - X \beta)^2 \]

Let’s visualize this loss function:

def se_loss(true_target, predicted_target):
    loss = (true_target - predicted_target) ** 2
    return loss
import numpy as np

xmin, xmax = -2, 2
xx = np.linspace(xmin, xmax, 100)
plt.plot(xx, se_loss(0, xx), label="SE loss")
plt.legend()
plt.show()
[Figure: squared error loss as a function of the prediction error.]

The parabolic shape of the loss function heavily penalizes large errors, which significantly impacts the model fit.

EXERCISE

  1. Add an outlier to the dataset: a penguin with 230 mm flipper length and 300 g body mass

  2. Plot the updated dataset

  3. Fit a LinearRegression model on this dataset, using sample_weight to give the outlier 10x more weight than other samples

  4. Plot the model predictions

How does the outlier affect the model?

# Write your code here.
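For reference, one possible solution sketch is below, reusing the imports and variables from the cells above. The 230 mm / 300 g outlier and the 10x weight come from the exercise statement; appending the outlier as the last row of the data is our choice, and the later cells assume this layout.

outlier = pd.DataFrame(
    {"Flipper Length (mm)": [230.0], "Body Mass (g)": [300.0]}
)
data = pd.concat([data, outlier], ignore_index=True)
X, y = data[["Flipper Length (mm)"]], data["Body Mass (g)"]

# Give the outlier (last row) 10x more weight than the other samples
sample_weight = np.ones_like(y)
sample_weight[-1] = 10

model = LinearRegression()
model.fit(X, y, sample_weight=sample_weight)
predicted_target = model.predict(X)

ax = data.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)")
ax.plot(
    X, predicted_target, label=model.__class__.__name__, color="tab:orange", linewidth=4
)
ax.legend()
plt.show()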

Instead of squared loss, we now use the Huber loss through scikit-learn’s HuberRegressor. We fit this model similarly to our previous approach.

from sklearn.linear_model import HuberRegressor

sample_weight = np.ones_like(y)
sample_weight[-1] = 10
model = HuberRegressor()
model.fit(X, y, sample_weight=sample_weight)
predicted_target = model.predict(X)

ax = data.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)")
ax.plot(X, predicted_target, label=model.__class__.__name__, color="black", linewidth=4)
plt.legend()
plt.show()
[Figure: scatter plot of the penguins data with the HuberRegressor fit overlaid.]

The Huber loss gives less weight to outliers compared to least squares.

EXERCISE

  1. Read the HuberRegressor documentation

  2. Create a huber_loss function similar to se_loss

  3. Create an absolute loss function

Explain why outliers affect Huber regression less than ordinary least squares.

# Write your code here.
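For reference, a possible sketch is below. The epsilon value of 1.35 mirrors the HuberRegressor default, and the quadratic-then-linear expression is one common parameterization of the Huber loss; the exact formula optimized by scikit-learn (including the scale factor) is given in the HuberRegressor documentation.

def huber_loss(true_target, predicted_target, *, epsilon=1.35):
    # Quadratic for small errors, linear for large ones (continuous at epsilon)
    error = true_target - predicted_target
    abs_error = np.abs(error)
    quadratic = error**2
    linear = 2 * epsilon * abs_error - epsilon**2
    return np.where(abs_error <= epsilon, quadratic, linear)


def absolute_loss(true_target, predicted_target):
    return np.abs(true_target - predicted_target)


xx = np.linspace(-2, 2, 100)
plt.plot(xx, se_loss(0, xx), label="SE loss")
plt.plot(xx, huber_loss(0, xx), label="Huber loss")
plt.plot(xx, absolute_loss(0, xx), label="Absolute loss", linestyle="--")
plt.legend()
plt.show()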

Huber and absolute losses penalize outliers less severely. This makes outliers less influential when finding the optimal \(\beta\) parameters. As a result, the HuberRegressor behaves more like a median estimator than a mean estimator.

For other quantiles, scikit-learn offers the QuantileRegressor. It minimizes the pinball loss to estimate specific quantiles. Here’s how to estimate the median:

from sklearn.linear_model import QuantileRegressor

model = QuantileRegressor(quantile=0.5)
model.fit(X, y, sample_weight=sample_weight)
predicted_target = model.predict(X)
ax = data.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)")
ax.plot(X, predicted_target, label=model.__class__.__name__, color="black", linewidth=4)
ax.legend()
plt.show()
[Figure: scatter plot of the penguins data with the QuantileRegressor median fit overlaid.]
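The pinball loss that QuantileRegressor minimizes can also be written down explicitly. Below is a small sketch using the standard definition of the pinball loss; the function name is ours.

def pinball_loss(true_target, predicted_target, quantile):
    # Under- and over-predictions are weighted asymmetrically;
    # with quantile=0.5 this reduces to half the absolute loss.
    error = true_target - predicted_target
    return np.maximum(quantile * error, (quantile - 1) * error)


xx = np.linspace(-2, 2, 100)
for quantile in (0.1, 0.5, 0.9):
    plt.plot(xx, pinball_loss(0, xx, quantile), label=f"Pinball loss (quantile={quantile})")
plt.legend()
plt.show()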

The QuantileRegressor also enables building prediction intervals by fitting several quantiles:

model = QuantileRegressor(quantile=0.5, solver="highs")
model.fit(X, y, sample_weight=sample_weight)
predicted_target_median = model.predict(X)

model.set_params(quantile=0.90)
model.fit(X, y, sample_weight=sample_weight)
predicted_target_90 = model.predict(X)

model.set_params(quantile=0.10)
model.fit(X, y, sample_weight=sample_weight)
predicted_target_10 = model.predict(X)
ax = data.plot.scatter(x="Flipper Length (mm)", y="Body Mass (g)")
ax.plot(
    X,
    predicted_target_median,
    label=f"{model.__class__.__name__} - median",
    color="black",
    linewidth=4,
)
ax.plot(
    X,
    predicted_target_90,
    label=f"{model.__class__.__name__} - 90th percentile",
    color="tab:orange",
    linewidth=4,
)
ax.plot(
    X,
    predicted_target_10,
    label=f"{model.__class__.__name__} - 10th percentile",
    color="tab:green",
    linewidth=4,
)
ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
plt.show()
[Figure: scatter plot with the median, 10th, and 90th percentile QuantileRegressor fits overlaid.]

This plot shows an 80% prediction interval around the median, delimited by the 10th and 90th percentile estimates.
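As a quick sanity check, we can measure the fraction of samples that actually fall inside this band. This is an empirical check on the training data, assuming the variables from the cells above are still in scope.

# We expect roughly 80% of the samples to lie between the 10th and 90th percentile lines
inside = (y >= predicted_target_10) & (y <= predicted_target_90)
print(f"Empirical coverage: {inside.mean():.2f}")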