imblearn.metrics.sensitivity_specificity_support

imblearn.metrics.sensitivity_specificity_support(y_true, y_pred, labels=None, pos_label=1, average=None, warn_for=('sensitivity', 'specificity'), sample_weight=None)[source]

Compute sensitivity, specificity, and support for each class.

The sensitivity is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The sensitivity quantifies the ability to avoid false negatives [1].

The specificity is the ratio tn / (tn + fp) where tn is the number of true negatives and fp the number of false positives. The specificity quantifies the ability to avoid false positives [1].

The support is the number of occurrences of each class in y_true.

If pos_label is None and the classification is binary, this function returns the average sensitivity and specificity when average is 'weighted'.
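For illustration, here is a minimal sketch of these definitions with hypothetical counts for a single class (the numbers are invented, not taken from the library):

>>> tp, fn, tn, fp = 8, 2, 85, 5  # hypothetical counts for one class
>>> sensitivity = tp / (tp + fn)  # ability to avoid false negatives
>>> specificity = tn / (tn + fp)  # ability to avoid false positives
>>> support = tp + fn             # occurrences of the class in y_true
>>> (round(sensitivity, 3), round(specificity, 3), support)
(0.8, 0.944, 10)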

Parameters:

y_true : ndarray, shape (n_samples, )

Ground truth (correct) target values.

y_pred : ndarray, shape (n_samples, )

Estimated targets as returned by a classifier.

labels : list, optional

The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order.

pos_label : str or int, optional (default=1)

The class to report if average='binary' and the data are binary (see the sketch after this parameter list). If the data are multiclass, this will be ignored; setting labels=[pos_label] with average != 'binary' will report scores for that label only.

average : str or None, optional (default=None)

If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:

'binary':

Only report results for the class specified by pos_label. This is applicable only if the targets (y_true, y_pred) are binary.

'micro':

Calculate metrics globally by counting the total true positives, false negatives and false positives.

'macro':

Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

'weighted':

Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters 'macro' to account for label imbalance.

'samples':

Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score).

warn_for : tuple or set, for internal use

This determines which warnings will be made in the case that this function is being used to return only one of its metrics.

sample_weight : ndarray, shape (n_samples, )

Sample weights.
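As a hedged sketch of pos_label with average='binary' (the labels and values below are illustrative assumptions, not part of the original documentation):

>>> import numpy as np
>>> from imblearn.metrics import sensitivity_specificity_support
>>> y_true = np.array([0, 1, 1, 0, 1])
>>> y_pred = np.array([0, 1, 0, 0, 1])
>>> sens, spec, supp = sensitivity_specificity_support(
...     y_true, y_pred, average='binary', pos_label=1)
>>> (round(float(sens), 3), round(float(spec), 3), supp)
(0.667, 1.0, None)

Two of the three positives are recovered (sensitivity 2/3) and no negative is misclassified as positive (specificity 1.0); as with the other averages, support is None when average is set.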

Returns:

sensitivity : float (if average is not None) or ndarray, shape (n_unique_labels, )

specificity : float (if average is not None) or ndarray, shape (n_unique_labels, )

support : None (if average is not None) or ndarray, shape (n_unique_labels, )

The number of occurrences of each label in y_true.

References

[1] Wikipedia entry for sensitivity and specificity, https://en.wikipedia.org/wiki/Sensitivity_and_specificity

Examples

>>> import numpy as np
>>> from imblearn.metrics import sensitivity_specificity_support
>>> y_true = np.array(['cat', 'dog', 'pig', 'cat', 'dog', 'pig'])
>>> y_pred = np.array(['cat', 'pig', 'dog', 'cat', 'cat', 'dog'])
>>> sensitivity_specificity_support(y_true, y_pred, average='macro')
(0.33333333333333331, 0.66666666666666663, None)
>>> sensitivity_specificity_support(y_true, y_pred, average='micro')
(0.33333333333333331, 0.66666666666666663, None)
>>> sensitivity_specificity_support(y_true, y_pred, average='weighted')
(0.33333333333333331, 0.66666666666666663, None)