imblearn.metrics.specificity_score
imblearn.metrics.specificity_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None)
Compute the specificity.

The specificity is the ratio tn / (tn + fp) where tn is the number of true negatives and fp the number of false positives. The specificity is intuitively the ability of the classifier to find all the negative samples.

The best value is 1 and the worst value is 0.

Parameters:

y_true : ndarray, shape (n_samples,)
    Ground truth (correct) target values.

y_pred : ndarray, shape (n_samples,)
    Estimated targets as returned by a classifier.

labels : list, optional
    The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average.

pos_label : str or int, optional (default=1)
    The class to report if average='binary' and the data is binary. If the data are multiclass, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only.

average : str or None, optional (default='binary')
    If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:

    'binary':
        Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary.

    'micro':
        Calculate metrics globally by counting the total true positives, false negatives and false positives.

    'macro':
        Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

    'weighted':
        Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). This alters 'macro' to account for label imbalance; it can result in an F-score that is not between precision and recall.

    'samples':
        Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score).
warn_for : tuple or set, for internal use
    This determines which warnings will be made in the case that this function is being used to return only one of its metrics.

sample_weight : ndarray, shape (n_samples,)
    Sample weights.

Returns:

specificity : float (if average is not None) or ndarray, shape (n_unique_labels,)

Examples

>>> import numpy as np
>>> from imblearn.metrics import specificity_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> specificity_score(y_true, y_pred, average='macro')
0.66666666666666663
>>> specificity_score(y_true, y_pred, average='micro')
0.66666666666666663
>>> specificity_score(y_true, y_pred, average='weighted')
0.66666666666666663
>>> specificity_score(y_true, y_pred, average=None)
array([ 0.75,  0.5 ,  0.75])
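To see where the per-class values above come from, the array returned with average=None can be re-derived by hand from the multiclass confusion matrix, treating each class in turn as the positive one (one-vs-rest). A minimal sketch, using scikit-learn's confusion_matrix purely for illustration (it is not part of this function):

    import numpy as np
    from sklearn.metrics import confusion_matrix
    from imblearn.metrics import specificity_score

    y_true = [0, 1, 2, 0, 1, 2]
    y_pred = [0, 2, 1, 0, 0, 1]

    # Rows of the confusion matrix are true classes, columns are predictions.
    cm = confusion_matrix(y_true, y_pred)

    per_class = []
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp      # predicted as class k, but truly another class
        fn = cm[k, :].sum() - tp      # truly class k, but predicted as another class
        tn = cm.sum() - tp - fp - fn  # everything not involving class k
        per_class.append(tn / (tn + fp))

    print(np.asarray(per_class))                            # [0.75 0.5  0.75]
    print(specificity_score(y_true, y_pred, average=None))  # same values

Taking the unweighted mean of these per-class values reproduces the 'macro' figure from the examples (2/3); 'weighted' would instead weight each class by its support, which is equal here, so all three averages coincide for this data.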