imblearn.under_sampling.AllKNN

class imblearn.under_sampling.AllKNN(ratio='auto', return_indices=False, random_state=None, size_ngh=None, n_neighbors=3, kind_sel='all', n_jobs=-1)[source]

Class to perform under-sampling based on the AllKNN method.

Parameters:

ratio : str, dict, or callable, optional (default=’auto’)

Ratio to use for resampling the data set.

  • If str, has to be one of: (i) 'minority': resample the minority class; (ii) 'majority': resample the majority class; (iii) 'not minority': resample all classes apart from the minority class; (iv) 'all': resample all classes; and (v) 'auto': corresponds to 'all' for over-sampling methods and 'not minority' for under-sampling methods. The classes targeted will be over-sampled or under-sampled to achieve an equal number of samples with the majority or minority class (see the sketch after this list).
  • If dict, the keys correspond to the targeted classes. The values correspond to the desired number of samples.
  • If callable, a function taking y and returning a dict. The keys correspond to the targeted classes. The values correspond to the desired number of samples.
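
For illustration, a minimal sketch of the string options above (the dict and callable forms follow the same construction pattern; the parameter values here are only illustrative, not recommendations):

>>> from imblearn.under_sampling import AllKNN
>>> # 'auto' (the default) maps to 'not minority' for an under-sampling method
>>> allknn_auto = AllKNN(ratio='auto', random_state=42)
>>> # equivalent explicit form: resample every class but the minority one
>>> allknn_not_min = AllKNN(ratio='not minority', random_state=42)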

return_indices : bool, optional (default=False)

Whether or not to return the indices of the samples selected from the majority class.
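
A minimal sketch of this flag, assuming fit_sample then returns the selected indices as a third value, as this parameter suggests; the dataset is only illustrative:

>>> from sklearn.datasets import make_classification
>>> from imblearn.under_sampling import AllKNN
>>> X, y = make_classification(n_classes=2, weights=[0.1, 0.9],
...                            n_samples=1000, random_state=10)
>>> allknn = AllKNN(return_indices=True, random_state=42)
>>> # idx_res gives the positions, in the original X, of the kept samples
>>> X_res, y_res, idx_res = allknn.fit_sample(X, y)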

random_state : int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

size_ngh : int, optional (default=None)

Size of the neighbourhood to consider to compute the average distance to the minority point samples.

Deprecated since version 0.2: size_ngh is deprecated from 0.2 and will be replaced in 0.4. Use n_neighbors instead.

n_neighbors : int or object, optional (default=3)

If int, size of the neighbourhood to consider to compute the average distance to the minority point samples. If object, an estimator that inherits from sklearn.neighbors.base.KNeighborsMixin that will be used to find the k_neighbors.
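
For example (a minimal sketch; the neighbour count chosen for the custom estimator is an assumption, not a recommendation), a pre-configured sklearn.neighbors.NearestNeighbors object can be passed instead of an integer:

>>> from sklearn.neighbors import NearestNeighbors
>>> from imblearn.under_sampling import AllKNN
>>> nn = NearestNeighbors(n_neighbors=4)  # custom neighbourhood estimator, used as-is
>>> allknn = AllKNN(n_neighbors=nn, random_state=42)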

kind_sel : str, optional (default=’all’)

Strategy to use in order to exclude samples (see the sketch after the list below).

  • If 'all', all neighbours will have to agree with the samples of interest to not be excluded.
  • If 'mode', the majority vote of the neighbours will be used in order to exclude a sample.
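
A brief sketch of the two strategies; since 'all' requires every neighbour to agree with a sample for it to be kept, it generally removes more samples than the majority-vote 'mode' strategy (how many more depends on the data):

>>> from imblearn.under_sampling import AllKNN
>>> allknn_strict = AllKNN(kind_sel='all', random_state=42)  # unanimous agreement required
>>> allknn_mode = AllKNN(kind_sel='mode', random_state=42)   # majority vote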

n_jobs : int, optional (default=-1)

The number of threads to open when possible.

Notes

The method is based on [R34].

Supports multi-class resampling.

References

[R34] I. Tomek, “An Experiment with the Edited Nearest-Neighbor Rule,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 6(6), pp. 448-452, June 1976.

Examples

>>> from collections import Counter
>>> from sklearn.datasets import make_classification
>>> from imblearn.under_sampling import AllKNN 
>>> X, y = make_classification(n_classes=2, class_sep=2,
... weights=[0.1, 0.9], n_informative=3, n_redundant=1, flip_y=0,
... n_features=20, n_clusters_per_class=1, n_samples=1000, random_state=10)
>>> print('Original dataset shape {}'.format(Counter(y)))
Original dataset shape Counter({1: 900, 0: 100})
>>> allknn = AllKNN(random_state=42)
>>> X_res, y_res = allknn.fit_sample(X, y)
>>> print('Resampled dataset shape {}'.format(Counter(y_res)))
Resampled dataset shape Counter({1: 887, 0: 100})

Methods

fit(X, y) Find the class statistics before performing the sampling.
fit_sample(X, y) Fit the statistics and resample the data directly.
get_params([deep]) Get parameters for this estimator.
sample(X, y) Resample the dataset.
set_params(**params) Set the parameters of this estimator.
__init__(ratio='auto', return_indices=False, random_state=None, size_ngh=None, n_neighbors=3, kind_sel='all', n_jobs=-1)[source]

fit(X, y)[source]

Find the class statistics before performing the sampling.

Parameters:

X : ndarray, shape (n_samples, n_features)

Matrix containing the data which have to be sampled.

y : ndarray, shape (n_samples, )

Corresponding label for each sample in X.

Returns:

self : object,

Return self.
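
Since fit returns self, it can be chained directly with sample; a minimal sketch on an illustrative dataset:

>>> from sklearn.datasets import make_classification
>>> from imblearn.under_sampling import AllKNN
>>> X, y = make_classification(n_classes=2, weights=[0.1, 0.9],
...                            n_samples=1000, random_state=10)
>>> allknn = AllKNN(random_state=42)
>>> # fit computes the class statistics, sample then performs the resampling
>>> X_res, y_res = allknn.fit(X, y).sample(X, y)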

fit_sample(X, y)[source]

Fit the statistics and resample the data directly.

Parameters:

X : ndarray, shape (n_samples, n_features)

Matrix containing the data which have to be sampled.

y : ndarray, shape (n_samples, )

Corresponding label for each sample in X.

Returns:

X_resampled : ndarray, shape (n_samples_new, n_features)

The array containing the resampled data.

y_resampled : ndarray, shape (n_samples_new,)

The corresponding label of X_resampled.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.
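
A minimal sketch: the returned mapping mirrors the keyword arguments of the constructor shown at the top of this page.

>>> from imblearn.under_sampling import AllKNN
>>> allknn = AllKNN(n_neighbors=5)
>>> allknn.get_params()['n_neighbors']
5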

sample(X, y)[source]

Resample the dataset.

Parameters:

X : ndarray, shape (n_samples, n_features)

Matrix containing the data which have to be sampled.

y : ndarray, shape (n_samples, )

Corresponding label for each sample in X.

Returns:

X_resampled : ndarray, shape (n_samples_new, n_features)

The array containing the resampled data.

y_resampled : ndarray, shape (n_samples_new,)

The corresponding label of X_resampled.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
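
A minimal sketch of updating parameters after construction; set_params returns the estimator itself, so the call can be chained:

>>> from imblearn.under_sampling import AllKNN
>>> allknn = AllKNN().set_params(n_neighbors=5, kind_sel='mode')
>>> allknn.get_params()['kind_sel']
'mode'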