Comments (11)
Thanks for the link; I also read this paper yesterday. I have some questions. I think Rep-Holdout is not like your demo: it gives an available window in which to split, so the train and test sets do not have a fixed length.
It may be this:
cv.split(range(10)):
train:[1,2,3,4] test:[5,6,7,8,9,10]
train:[1,2,3,4,5] test:[6,7,8,9,10]
train:[1,2,3] test:[4,5,6,7,8,9,10]
train:[1,2,3,4,5,6,7] test:[8,9,10]
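Under this reading, the paper's Rep-Holdout can be sketched as picking a random cut point inside the available window and splitting there. A minimal sketch (the function name `rep_holdout` and its signature are my guesses, not the paper's notation or the tscv API):

```python
import numpy as np

def rep_holdout(n, n_reps, window, rng=None):
    """Yield (train, test) index arrays for a Rep-Holdout-style scheme.

    Each repetition draws a cut point uniformly from the available
    window (a, b); everything before the cut is training data and
    everything from the cut onward is test data, so neither set has
    a fixed length.
    """
    rng = np.random.default_rng(rng)
    a, b = window
    indices = np.arange(n)
    for _ in range(n_reps):
        cut = rng.integers(a, b + 1)  # random cut inside the window
        yield indices[:cut], indices[cut:]

for train, test in rep_holdout(10, 4, (3, 7), rng=0):
    print("train:", train, "test:", test)
```

Because the cut is random, the printed splits vary with the seed, but every fold uses all `n` points and the train length always stays inside the window.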
The so-called Rep-Holdout seems lame to me. Use the following code instead:
n = LENGTH_OF_DATA
m = NUMBER_OF_FOLDS
window = (a, b)
cv = GapRollForward(min_train_size=a, min_test_size=n-b, roll_size=(b-a)//(m-1))
Oops! The last line should have been:
cv = GapRollForward(min_train_size=a,
min_test_size=n-b,
max_test_size=np.inf,
roll_size=(b-a)//(m-1))
Thank you for your fast reply. My problem with GapRollForward is that, without adjusting/tuning its various input arguments, it leads to overfitting when used for fine-tuning my time series forecasting task. E.g. I have 2.5 years of data, choose the first 2 years as the min_train_size, and then do rolling cross-validation from that point onwards. Although this rolling prediction closely resembles the actual prediction procedure, when I use it as part of hyperparameter tuning, the underlying method naturally overfits to the data outside of min_train_size.
That is why I liked Rep-Holdout: used as part of a tuning procedure, it removes the bias induced by min_train_size and results in a more even cross-validation performance.
I'm not sure whether I understand your suggestion correctly, but I compared a small example here with Rep-Holdout:
import numpy as np
from tscv import GapRollForward
# Number of Samples
n = 30
# Number of Folds
m = 5
# Windowsize
window = (1, 5)
cv = GapRollForward(min_train_size=window[0],
min_test_size=n-window[1],
max_test_size=np.inf,
roll_size=(window[1]-window[0])//(m-1))
for train, test in cv.split(range(n)):
print("train:", train, "test:", test)
train: [0] test: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29]
train: [0 1] test: [ 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29]
train: [0 1 2] test: [ 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29]
train: [0 1 2 3] test: [ 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29]
train: [0 1 2 3 4] test: [ 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29]
and here is the output from my basic RepHoldout implementation:
cv = RepHoldout(nreps=5, train_size=4,
                test_size=1, gap_size=0)
for train, test in cv.split(range(30)):
print("train:", train, "test:", test)
train: [23 24 25 26] test: [27]
train: [3 4 5 6] test: [7]
train: [5 6 7 8] test: [9]
train: [6 7 8 9] test: [10]
train: [14 15 16 17] test: [18]
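The RepHoldout implementation itself is not shown above; a hypothetical sketch consistent with the printed splits (a fixed-size train/test block placed at a random position; all names and the constructor signature are guesses, not tscv API) could be:

```python
import numpy as np

class RepHoldout:
    """Hypothetical sketch of the RepHoldout used above: each of
    nreps folds places a contiguous block of train_size training
    points, an optional gap, and test_size test points at a random
    position in the series."""

    def __init__(self, nreps, train_size, test_size, gap_size=0, seed=None):
        self.nreps = nreps
        self.train_size = train_size
        self.test_size = test_size
        self.gap_size = gap_size
        self.rng = np.random.default_rng(seed)

    def split(self, X):
        n = len(X)
        block = self.train_size + self.gap_size + self.test_size
        indices = np.arange(n)
        for _ in range(self.nreps):
            start = self.rng.integers(0, n - block + 1)
            train = indices[start:start + self.train_size]
            test_start = start + self.train_size + self.gap_size
            yield train, indices[test_start:test_start + self.test_size]

cv = RepHoldout(nreps=5, train_size=4, test_size=1, gap_size=0, seed=0)
for train, test in cv.split(range(30)):
    print("train:", train, "test:", test)
```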
That is what the max_train_size parameter is for.
# Number of Samples
n = 30
# Number of Folds
m = 5
# Windowsize
window = (5, 25)
cv = GapRollForward(min_train_size=window[0],
min_test_size=n-window[1],
max_train_size=4,
roll_size=(window[1]-window[0])//(m-1))
for train, test in cv.split(range(n)):
print("train:", train, "test:", test)
train: [1 2 3 4] test: [5]
train: [6 7 8 9] test: [10]
train: [11 12 13 14] test: [15]
train: [16 17 18 19] test: [20]
train: [21 22 23 24] test: [25]
A few comments:
- The variable window refers to the "available window" in your Rep-Holdout.
- It's a good idea to use balanced training and test sets across all folds of cross-validation. You are doing it right.
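Whether folds are balanced is easy to check programmatically; a small sketch (plain Python, `is_balanced` is a hypothetical helper, not part of tscv):

```python
def is_balanced(splits):
    """True if every fold has the same train length and the same
    test length as the first fold."""
    splits = list(splits)
    t0, s0 = len(splits[0][0]), len(splits[0][1])
    return all(len(tr) == t0 and len(te) == s0 for tr, te in splits)

# The rolling splits printed earlier: train blocks of 4, tests of 1.
rolling = [(list(range(i, i + 4)), [i + 4]) for i in range(1, 22, 5)]
print(is_balanced(rolling))  # True
```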
Thank you for your comments. I like your code and see that it is very similar to Rep-Holdout, just not randomized and instead truly rolling forward.
Yeah, randomization is generally to be avoided when it is not an essential part of an algorithm. In particular, in your example, Rep-Holdout can be seen as simple sampling with replacement and my code as systematic sampling, which often results in lower variance and is thus preferred.
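The variance claim can be illustrated with a quick simulation (a toy estimate of the mean of a trending series, entirely my own setup, not tscv code): systematic sampling with a random start typically spreads its samples evenly across the trend, while sampling with replacement can cluster them, inflating the variance of the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
# A series with a trend, so where you sample matters.
y = np.linspace(0, 1, 1000) + rng.normal(0, 0.1, 1000)
m = 5          # samples per estimate (think: folds)
n_trials = 2000

random_means, systematic_means = [], []
offsets = np.arange(0, 1000, 1000 // m)  # evenly spaced positions
for _ in range(n_trials):
    # Sampling with replacement (Rep-Holdout-like).
    random_means.append(y[rng.integers(0, 1000, m)].mean())
    # Systematic sampling with a random start (rolling-forward-like).
    start = rng.integers(0, 1000 // m)
    systematic_means.append(y[offsets + start].mean())

print("random variance:    ", np.var(random_means))
print("systematic variance:", np.var(systematic_means))
```

With this setup the systematic estimates have markedly lower variance, because each draw is forced to cover the whole range of the trend.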
Thank you for the insights!
What are your thoughts, then, on the paper I linked and on the performance of the different methods?
> Thanks for the link; I also read this paper yesterday. I have some questions. I think Rep-Holdout is not like your demo: it gives an available window in which to split, so the train and test sets do not have a fixed length.
> It may be this:
> cv.split(range(10)):
> train:[1,2,3,4] test:[5,6,7,8,9,10]
> train:[1,2,3,4,5] test:[6,7,8,9,10]
> train:[1,2,3] test:[4,5,6,7,8,9,10]
> train:[1,2,3,4,5,6,7] test:[8,9,10]
The method described in that paper is quite ambiguous, but your reading seems most likely what the author meant. The last split WenjieZ posted was just a rolling-block method. I do agree with him, though, on forgoing randomness, and would rather just exhaust each fold as you have shown, with training sets growing from [1] up to [1,2,3,4,5,6,7,8,9] versus the remaining test set.
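That exhaustive, expanding scheme is easy to write down directly; a minimal sketch (plain NumPy, 0-indexed, not a tscv class):

```python
import numpy as np

def expanding_splits(n):
    """Exhaust every cut point: the training set grows from 1 sample
    up to n-1 samples, and the remainder is the test set."""
    indices = np.arange(n)
    for cut in range(1, n):
        yield indices[:cut], indices[cut:]

for train, test in expanding_splits(10):
    print("train:", train, "test:", test)
```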
> Thanks for the link; I also read this paper yesterday. I have some questions. I think Rep-Holdout is not like your demo: it gives an available window in which to split, so the train and test sets do not have a fixed length.
> It may be this:
> cv.split(range(10)):
> train:[1,2,3,4] test:[5,6,7,8,9,10]
> train:[1,2,3,4,5] test:[6,7,8,9,10]
> train:[1,2,3] test:[4,5,6,7,8,9,10]
> train:[1,2,3,4,5,6,7] test:[8,9,10]
Increase the max_test_size parameter (which defaults to 1) if this split is what you are looking for.