Comments (4)
I see from your code

rreg = LogisticRegression(class_weight="balanced")

that a plain scikit-learn LogisticRegression is used. Does that mean any parameter from
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html

class sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='lbfgs', max_iter=100, multi_class='auto', verbose=0, warm_start=False, n_jobs=None, l1_ratio=None)

can be passed, for example solver='liblinear', penalty='l1' -- but not C=3.0, since C is selected via param_grid?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split
from autofeat import AutoFeatClassifier

def test_autofeat(dataset, feateng_steps=2):
    # load data (load_classification_dataset is a helper from the test setup)
    X, y, units = load_classification_dataset(dataset)
    # split into training and test parts
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=12)
    # run autofeat
    afreg = AutoFeatClassifier(verbose=1, feateng_steps=feateng_steps, units=units)
    # fit autofeat on less data, otherwise the model with cross-validation will overfit on the new features
    X_train_tr = afreg.fit_transform(X_train, y_train)
    X_test_tr = afreg.transform(X_test)
    print("autofeat new features:", len(afreg.new_feat_cols_))
    print("autofeat Acc. on training data:", accuracy_score(y_train, afreg.predict(X_train_tr)))
    print("autofeat Acc. on test data:", accuracy_score(y_test, afreg.predict(X_test_tr)))
    # train logistic regression on the transformed train split, incl. cross-validation for parameter selection
    print("# Logistic Regression")
    rreg = LogisticRegression(class_weight="balanced")
    param_grid = {"C": np.logspace(-4, 4, 10)}
    gsmodel = GridSearchCV(rreg, param_grid, scoring="accuracy", cv=5)
    gsmodel.fit(X_train_tr, y_train)
    print("LogReg Acc. on test data:", accuracy_score(y_test, gsmodel.predict(X_test_tr)))
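For what it's worth, the constructor parameters are not restricted: C=3.0 is accepted too; it is simply overridden whenever C is also tuned via the grid search. A minimal sketch on synthetic data (make_classification here only stands in for the real dataset):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# any constructor parameter is accepted, including a fixed C=3.0
clf = LogisticRegression(class_weight="balanced", solver="liblinear",
                         penalty="l1", C=3.0)
clf.fit(X, y)

# but if C is also tuned with GridSearchCV, the grid values replace C=3.0
gs = GridSearchCV(LogisticRegression(class_weight="balanced"),
                  {"C": np.logspace(-4, 4, 10)}, cv=5)
gs.fit(X, y)
print(gs.best_params_["C"])  # the C selected from the grid, not 3.0
```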
Just take the features that are returned in the dataframe when calling the .fit_transform(X, y) function of the autofeat model, and then train your own classification model on them with whatever parameters and metric you want.
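The suggested workflow can be sketched as below. To keep the example self-contained, PolynomialFeatures is used here only as a stand-in for the autofeat transformer (in practice you would call afclf.fit_transform(X_train, y_train) / afclf.transform(X_test) on an AutoFeatClassifier), and the downstream RandomForestClassifier and F1 metric are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures  # stand-in for autofeat

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# stand-in for: X_train_tr = afclf.fit_transform(X_train, y_train)
#               X_test_tr  = afclf.transform(X_test)
feat = PolynomialFeatures(degree=2).fit(X_train)
X_train_tr, X_test_tr = feat.transform(X_train), feat.transform(X_test)

# train any downstream classifier with any parameters/metric you like
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train_tr, y_train)
score = f1_score(y_test, clf.predict(X_test_tr))
print("F1 on test data:", score)
```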
But shouldn't feature selection be done with respect to a specific metric to be efficient for that metric? For example, if the features are selected for accuracy, couldn't they be poor under the F1 metric?
The features are not per se selected based on any metric, but only based on the coefficients of the model. The models themselves are fitted according to their own optimization function (e.g., a linear regression model always minimizes the mean squared error; the scoring metric used to evaluate a trained model, by default R^2, is separate from that). The metric only comes into play when selecting the hyperparameters (e.g., C) of the model in the internal cross-validation. However, this should not make a huge difference, since the amount of regularization that is required depends more on how noisy your dataset is than on which metric you use to score it.
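That last point can be checked directly: the same model family can be refit with different scoring metrics in the cross-validation, and often the selected C barely moves. A minimal sketch on synthetic, imbalanced data (where accuracy and F1 are most likely to disagree):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# imbalanced binary problem, where accuracy and F1 can disagree
X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)
grid = {"C": np.logspace(-4, 4, 10)}

# tune C once per scoring metric and compare the selected values
best = {}
for scoring in ("accuracy", "f1"):
    gs = GridSearchCV(LogisticRegression(class_weight="balanced", max_iter=1000),
                      grid, scoring=scoring, cv=5)
    gs.fit(X, y)
    best[scoring] = gs.best_params_["C"]
    print(scoring, "->", best[scoring])
```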