Comments (8)
Hi,
thank you! I think there are multiple things to consider here. First, you cannot evaluate an oversampler alone; you always need a classifier trained on the oversampled dataset. The evaluation should be done in a cross-validation manner: repeatedly split the dataset into training and test sets, oversample the training set, fit a classifier to the oversampled training set, and predict the test set. This can be achieved in a couple of lines of code:
```python
import numpy as np
import smote_variants as sv
import imblearn.datasets as imb_datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

libras = imb_datasets.fetch_datasets()['libras_move']
X = libras['data']
y = libras['target']

classifier = DecisionTreeClassifier(max_depth=3, random_state=5)
aucs = []

# cross-validation (shuffle=True is required when random_state is set)
for train, test in StratifiedKFold(shuffle=True, random_state=5).split(X, y):
    # splitting
    X_train, X_test = X[train], X[test]
    y_train, y_test = y[train], y[test]
    # oversampling the training set only
    X_train_samp, y_train_samp = sv.SMOTE(n_neighbors=3, random_state=5).sample(X_train, y_train)
    classifier.fit(X_train_samp, y_train_samp)
    # prediction
    y_pred = classifier.predict_proba(X_test)
    # evaluation
    aucs.append(roc_auc_score(y_test, y_pred[:, 1]))

print('AUC', np.mean(aucs))
```
You can add any number of further classifiers or oversamplers to this evaluation loop. One must be careful that all the oversamplers and classifiers are evaluated on the very same database folds for comparability.
On the other hand, one needs to consider that many oversampling techniques have various parameters that can be tuned. So it is usually not enough to evaluate SMOTE with a single parameter setting; it needs to be evaluated with many different parameter settings, again in a cross-validation manner, before one could say that one oversampler works better than another.
Also, classifiers can have a bunch of parameters to tune. Thus, in order to carry out a proper comparison, one needs to evaluate oversamplers with many parameter combinations, and subsequently apply classifiers with many parameter combinations.
This is the only way to draw fair and valid conclusions.
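To get a feel for the scale of this, here is a small sketch enumerating illustrative parameter grids with `sklearn.model_selection.ParameterGrid`. The grid values are made up for illustration; `n_neighbors` and `max_depth` echo the parameters used in the snippet above, and `proportion` is assumed to be a sampler parameter:

```python
from sklearn.model_selection import ParameterGrid

# illustrative grids; the actual parameter names and ranges depend on
# the sampler and classifier being compared
sampler_grid = list(ParameterGrid({'n_neighbors': [3, 5, 7],
                                   'proportion': [0.5, 1.0]}))
classifier_grid = list(ParameterGrid({'max_depth': [2, 3, 5]}))

# every sampler setting paired with every classifier setting
combos = [(s, c) for s in sampler_grid for c in classifier_grid]
print(len(combos))  # 18 combinations, each needing its own cross-validation
```

Even these tiny grids already yield 18 sampler/classifier combinations to cross-validate, which is why automating the evaluation pays off.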
Now, if you foresee this process, it is a decent amount of oversampling and classification jobs to be executed, and each of them needs to be done with proper cross-validation.
You have basically two options. Option #1 is to extend the sample code above to evaluate many oversamplers with many parameter combinations, followed by classifiers with many parameter combinations, in each step of the loop, and then unify the results.
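A sketch of option #1, kept self-contained by using a synthetic imbalanced dataset and a naive random oversampler as a stand-in for any smote_variants sampler (both are assumptions for illustration). The key point is that the folds are fixed once and reused for every model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

# synthetic imbalanced data, standing in for a real dataset
X, y = make_classification(n_samples=300, weights=[0.9, 0.1], random_state=5)

def random_oversample(X, y, random_state=5):
    """Naive stand-in sampler: duplicate random minority points until balanced."""
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    minority_idx = np.where(y == classes[np.argmin(counts)])[0]
    extra = rng.choice(minority_idx, size=counts.max() - counts.min(), replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

classifiers = {'tree': DecisionTreeClassifier(max_depth=3, random_state=5),
               'knn': KNeighborsClassifier(n_neighbors=5)}
aucs = {name: [] for name in classifiers}

# materialize the folds once, so every model sees the very same splits
folds = list(StratifiedKFold(n_splits=5, shuffle=True, random_state=5).split(X, y))
for train, test in folds:
    X_samp, y_samp = random_oversample(X[train], y[train])
    for name, clf in classifiers.items():
        clf.fit(X_samp, y_samp)
        aucs[name].append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))

for name in classifiers:
    print(name, np.mean(aucs[name]))
```

Adding more oversamplers or parameter settings means more nested loops over the same `folds` list, followed by aggregating the per-fold scores.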
Alternatively, option #2, you can use the evaluate_oversamplers function of the package, exactly in the way it is shown in the sample codes, as the evaluate_oversamplers function does exactly what I have outlined above. All the results coming out of the evaluate_oversamplers function are properly cross-validated scores.
Just to emphasize: it is incorrect that the evaluate_oversamplers function samples the entire dataset. It repeatedly samples the cross-validation folds of the dataset, uses the training set for training and the test set for evaluation.
So, as a summary: just like most machine learning code on GitHub, the oversamplers implemented in smote_variants process and sample all the data you feed them. If you want to do cross-validation by hand, you need to split the dataset yourself, just like in the sample code above. Alternatively, you can use the built-in evaluation functions and carry out all of this work in a single line of code.
Thanks a ton! It is really helpful. I will try the needed combinations of hyperparameter tuning.
But is it correct that the smote_variants package only works when importing datasets from imblearn.datasets?
No problem. No, it is not correct. smote_variants works with any dataset represented as a matrix of explanatory variables (X) and a vector of corresponding class labels (y).
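For illustration, any numeric arrays of this shape will do, regardless of where they come from; here they are built by hand with numpy, and the sampler call from the snippet earlier in the thread is shown commented out so the example stands alone:

```python
import numpy as np

# X and y could equally come from a CSV file, a database, or sklearn loaders;
# all that matters is the matrix/vector shape
rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))            # 100 samples, 4 features
y = (rng.random(100) < 0.2).astype(int)  # imbalanced binary labels

print(X.shape, y.shape)
# arrays like these can then be passed to a sampler, e.g.:
# X_samp, y_samp = sv.SMOTE(n_neighbors=3, random_state=5).sample(X, y)
```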
OK, thanks. It would be really helpful if you could provide a link to an implementation of SMOTE from scratch, not through any package.
Well, the point of having packages is exactly to avoid implementing things from scratch. Also, as this is an open source package, you can find the implementation of all the oversampling techniques in it, from scratch. In particular, the SMOTE algorithm is implemented here:
smote_variants/smote_variants/_smote_variants.py
Lines 1349 to 1368 in 888109f
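For intuition, here is a minimal from-scratch sketch of the core SMOTE idea: generate synthetic minority points by interpolating between a minority sample and one of its nearest minority-class neighbors. This is a deliberate simplification of the linked implementation, not a substitute for it (the function name and brute-force neighbor search are my own choices):

```python
import numpy as np

def smote_sketch(X_min, n_samples, n_neighbors=5, random_state=5):
    """Minimal SMOTE sketch: interpolate between minority points and neighbors."""
    rng = np.random.default_rng(random_state)
    # brute-force pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    # indices of the n_neighbors nearest minority neighbors of each point
    neigh = np.argsort(d, axis=1)[:, :n_neighbors]
    samples = []
    for _ in range(n_samples):
        i = rng.integers(len(X_min))              # random base point
        j = neigh[i, rng.integers(n_neighbors)]   # one of its neighbors
        lam = rng.random()                        # interpolation factor in [0, 1)
        samples.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(samples)

X_min = np.random.default_rng(0).normal(size=(20, 3))
new = smote_sketch(X_min, n_samples=10, n_neighbors=3)
print(new.shape)  # (10, 3)
```

Each synthetic point lies on the segment between two existing minority points, which is exactly why SMOTE produces plausible new minority samples rather than mere duplicates.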
Hi @shwetashrmgithub , can we close this issue?
Yeah, sure! @gykovacs But one last thing: I tried to use other metrics like F1 score and precision with the first code you gave, to compare various oversampling algorithms, but it shows an error. Could you please look into that? Thanks!
@shwetashrmgithub , well, the evaluation function roc_auc_score is a standard sklearn function, and I think any other metric should work, but care must be taken: other metrics, like sklearn.metrics.f1_score, take class labels as arguments and not probability scores, so you need to do it this way:
```python
from sklearn.metrics import f1_score

scores = []
# inside the cross-validation loop:
# prediction: f1_score needs hard class labels, not probabilities
y_pred = classifier.predict(X_test)
# evaluation
scores.append(f1_score(y_test, y_pred))
```
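Put together in a self-contained loop (using a synthetic sklearn dataset and no oversampling, purely to isolate the metric handling, which is an assumption for brevity), the contrast between the two kinds of metric inputs looks like this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, f1_score

X, y = make_classification(n_samples=300, weights=[0.85, 0.15], random_state=5)
clf = DecisionTreeClassifier(max_depth=3, random_state=5)

aucs, f1s = [], []
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=5).split(X, y):
    clf.fit(X[train], y[train])
    # roc_auc_score ranks by probability scores ...
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
    # ... while f1_score compares hard class labels
    f1s.append(f1_score(y[test], clf.predict(X[test])))

print('AUC', np.mean(aucs), 'F1', np.mean(f1s))
```

Passing `predict_proba` output to `f1_score` is the typical source of the error described above, since it expects label vectors, not probability columns.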