Comments (1)
This is great, thanks for the helpful feedback @DimmestP. I'll try to find some time to think about how the points can be addressed (and would welcome pull requests in the meantime!).
Some really quick thoughts:
> Also, using a non-medical dataset would expand the usability.
I have mixed feelings about switching to a non-medical dataset (though I admit partly because of my own bias towards health data!). Wouldn't any dataset we choose have some kind of topic? I dislike "toy datasets" like Iris, etc., so I'd be happy to switch, but preferably to something interesting.
> Generally needs more programming tasks.
Agreed, definitely more work needed here. I intentionally tried to reduce time spent on data pre-processing because it is covered in an earlier workshop, but I agree that evaluation, tasks, etc. would be good topics.
> The course really could do with highlighting the benefits of random forests and gradient boosting. This can only be done by adding more features sooner.
For me this is a tough one. I have found the visualization aspect of the workshop to be important, and it's not ideal that the ability to visualize models diminishes as the number of features increases.
Ideally I'd like it if we could (1) keep visualization and (2) work out how to incorporate more features when needed (e.g. to demonstrate improved performance).
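On point (1), one option is to cap tree depth so the fitted model stays readable even when the dataset has many features. A minimal sketch using scikit-learn's `export_text` on synthetic data (the data, feature names, and parameters here are illustrative, not from the workshop materials):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for the workshop data: 10 features, binary outcome
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Capping max_depth keeps the fitted tree small enough to read,
# regardless of how many features the dataset has
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Text rendering avoids plots entirely; plot_tree works the same way
print(export_text(tree, feature_names=[f"f{i}" for i in range(10)]))
```

The printed rules only ever mention the handful of features the shallow tree actually splits on, so the visualization survives a wider dataset.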
> Perhaps ignore gradient boosting entirely. It is skimmed over so fast that it doesn't convey any of its benefits or differences over random forests.
I agree the gradient boosting section needs work. I'd like to keep it if possible and add more detail.
At this point in the workshop, I usually take people to PubMed and point out some of the papers that have been published on this dataset using XGBoost. Not because they are exciting papers, but because prior to the workshop I think many people would believe those papers were doing something special.
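To make the comparison concrete in the materials, a side-by-side evaluation of the two ensembles on held-out data could look like the sketch below. This is synthetic data and scikit-learn's `GradientBoostingClassifier`; the workshop's own data and its choice of boosting library (e.g. XGBoost) may differ:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a mortality-prediction task
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
gb = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Comparing AUROC on held-out data shows whether boosting's sequential
# error-correction buys anything over the forest's averaged trees
for name, model in [("random forest", rf), ("gradient boosting", gb)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```

Showing the two scores next to each other, rather than in separate sections, would give the boosting discussion something to hang on.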
> Ideally the code should not keep renaming the `mdl` variable, but create a new variable for each model to help comparison.
Definitely, there are a bunch of things like this that need cleaning up!
from machine-learning-trees-python.
Related Issues (18)
- Exercises for section 7 (gradient boosting)
- Exercises for section 8 (performance)
- Add formula/definition of gini impurity
- Consider adding section on variable importance
- Add PubMed link to papers using tree models for mortality prediction
- Add note that random forest restricts variables on a node level
- Idea for an exercise: Build your own decision tree
- Add logistic regression as a baseline in the evaluation/performance section
- Add calibration to the evaluation/performance section
- Discussion of regression should be moved to later in the workshop.
- Visualisation to explain boosting
- Exercises for section 1 (introduction)
- Exercises for section 2 (decision tree)
- Exercises for section 3 (Variance)
- Exercises for section 4 (boosting)
- Exercises for section 5 (bagging)
- Exercises for section 6 (random forest)