imblearn.metrics.classification_report_imbalanced
imblearn.metrics.classification_report_imbalanced(y_true, y_pred, labels=None, target_names=None, sample_weight=None, digits=2, alpha=0.1)

Build a classification report based on metrics used with imbalanced datasets.
Specific metrics have been proposed to evaluate classification performed on imbalanced datasets. This report compiles the state-of-the-art metrics: precision/recall/specificity, geometric mean, and index balanced accuracy of the geometric mean.
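Each metric compiled in the report is also exposed individually in imblearn.metrics, which is useful when a single scalar score is needed instead of the full text report. A minimal sketch, assuming the public geometric_mean_score, sensitivity_specificity_support, and make_index_balanced_accuracy helpers:

from imblearn.metrics import (
    geometric_mean_score,
    make_index_balanced_accuracy,
    sensitivity_specificity_support,
)

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

# Per-class sensitivity (recall), specificity, and support arrays.
sen, spe, sup = sensitivity_specificity_support(y_true, y_pred)

# Geometric mean of the per-class recalls, as a single score.
geo = geometric_mean_score(y_true, y_pred)

# Index balanced accuracy: decorate a scorer with the IBA correction,
# using the same alpha as the report's default.
iba_scorer = make_index_balanced_accuracy(alpha=0.1, squared=True)(geometric_mean_score)
iba = iba_scorer(y_true, y_pred)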
Parameters: y_true : ndarray, shape (n_samples, )
Ground truth (correct) target values.
y_pred : ndarray, shape (n_samples, )
Estimated targets as returned by a classifier.
labels : list, optional
The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average.
target_names : list of strings, optional
Optional display names matching the labels (same order).
sample_weight : ndarray, shape (n_samples, )
Sample weights.
digits : int, optional (default=2)
Number of digits used to format output floating point values.
alpha : float, optional (default=0.1)
Weighting factor for the index balanced accuracy (see the sketch below the Returns section).
Returns: report : string
Text summary of the precision, recall, specificity, geometric mean, and index balanced accuracy.
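The alpha parameter is the weight used by the index balanced accuracy correction; roughly, IBA scales the squared geometric mean by 1 + alpha * (sensitivity - specificity). A short sketch of its effect, assuming the make_index_balanced_accuracy decorator from imblearn.metrics:

from imblearn.metrics import geometric_mean_score, make_index_balanced_accuracy

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

# Larger alpha penalizes a gap between sensitivity and specificity more strongly.
for alpha in (0.0, 0.1, 0.5):
    iba = make_index_balanced_accuracy(alpha=alpha, squared=True)(geometric_mean_score)
    print(alpha, iba(y_true, y_pred))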
Examples
>>> import numpy as np
>>> from imblearn.metrics import classification_report_imbalanced
>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report_imbalanced(y_true, y_pred, target_names=target_names))  # doctest: +NORMALIZE_WHITESPACE
                   pre       rec       spe        f1       geo       iba       sup
    class 0       0.50      1.00      0.75      0.67      0.71      0.48         1
    class 1       0.00      0.00      0.75      0.00      0.00      0.00         1
    class 2       1.00      0.67      1.00      0.80      0.82      0.69         3
avg / total       0.70      0.60      0.90      0.61      0.63      0.51         5
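In practice the report is built from a fitted estimator's predictions. A minimal end-to-end sketch, assuming scikit-learn's make_classification and LogisticRegression (any classifier exposing predict works):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from imblearn.metrics import classification_report_imbalanced

# A deliberately imbalanced two-class problem (roughly 9:1).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(classification_report_imbalanced(y_test, clf.predict(X_test)))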