Comments (11)

fengchi863 commented on July 23, 2024

Thanks for your link; I also read this paper yesterday. I have some questions. I don't think Rep-Holdout works like your demo: it gives an available window in which to split, so the train and test sets are not of fixed length.

It may be something like this:

cv.split(range(10)):
train:[1,2,3,4] test:[5,6,7,8,9,10]
train:[1,2,3,4,5] test:[6,7,8,9,10]
train:[1,2,3] test:[4,5,6,7,8,9,10]
train:[1,2,3,4,5,6,7] test:[8,9,10]

WenjieZ commented on July 23, 2024

The so-called Rep-Holdout seems lame to me. Use the following code instead:

n = LENGTH_OF_DATA
m = NUMBER_OF_FOLDS
window = (a, b)
cv = GapRollForward(min_train_size=a, min_test_size=n-b, roll_size=(b-a)//(m-1))

WenjieZ commented on July 23, 2024

Oops! The last line should have been:

cv = GapRollForward(min_train_size=a, 
                    min_test_size=n-b, 
                    max_test_size=np.inf, 
                    roll_size=(b-a)//(m-1))

georgeblck commented on July 23, 2024

Thank you for your fast reply. My problem with GapRollForward is that, without adjusting/tuning the various input arguments, it leads to overfitting when fine-tuning my time series forecasting task. For example, I have 2.5 years of data, choose the first 2 years as the min_train_size, and then do rolling cross-validation from that point onwards. Although this rolling prediction closely resembles the actual prediction procedure, when I use it as part of hyperparameter tuning for the underlying method, it naturally overfits to the data outside of the min_train_size.

That is why I liked Rep-Holdout: used as part of a tuning procedure, it removes the bias induced by min_train_size and results in more even cross-validation performance.

I'm not sure if I understand your suggestion correctly, but I compared a small example here with Rep-Holdout:

import numpy as np
from tscv import GapRollForward

# Number of Samples
n = 30
# Number of Folds
m = 5
# Window size
window = (1, 5)

cv = GapRollForward(min_train_size=window[0], 
                    min_test_size=n-window[1], 
                    max_test_size=np.inf, 
                    roll_size=(window[1]-window[0])//(m-1))
for train, test in cv.split(range(n)):
    print("train:", train, "test:", test)

train: [0] test: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29]
train: [0 1] test: [ 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
26 27 28 29]
train: [0 1 2] test: [ 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29]
train: [0 1 2 3] test: [ 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
28 29]
train: [0 1 2 3 4] test: [ 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
29]

and here is the output from my basic RepHoldout implementation:

cv = RepHoldout(nreps=5, train_size=4,
                test_size=1, gap_size=0)

for train, test in cv.split(range(30)):
    print("train:", train, "test:", test)

train: [23 24 25 26] test: [27]
train: [3 4 5 6] test: [7]
train: [5 6 7 8] test: [9]
train: [6 7 8 9] test: [10]
train: [14 15 16 17] test: [18]
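
For reference, here is a minimal sketch of a repeated-holdout splitter that produces this kind of output (just an illustration of the idea; it is not the actual implementation used above and not part of tscv):

import numpy as np

class RepHoldout:
    """Hypothetical repeated-holdout splitter (illustration only, not tscv).

    Each repetition draws a random cut point and yields a fixed-length
    training window before it and a fixed-length test window after an
    optional gap.
    """

    def __init__(self, nreps=5, train_size=4, test_size=1, gap_size=0, seed=None):
        self.nreps = nreps
        self.train_size = train_size
        self.test_size = test_size
        self.gap_size = gap_size
        self.rng = np.random.default_rng(seed)

    def split(self, X):
        n = len(X)
        fold = self.train_size + self.gap_size + self.test_size
        for _ in range(self.nreps):
            # Random start of the training window, leaving room for gap + test
            start = self.rng.integers(0, n - fold + 1)
            train = np.arange(start, start + self.train_size)
            test = np.arange(start + self.train_size + self.gap_size, start + fold)
            yield train, test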

WenjieZ commented on July 23, 2024

That is what the max_train_size parameter is for.

# Number of Samples
n = 30
# Number of Folds
m = 5
# Window size
window = (5, 25)

cv = GapRollForward(min_train_size=window[0], 
                    min_test_size=n-window[1], 
                    max_train_size=4, 
                    roll_size=(window[1]-window[0])//(m-1))
for train, test in cv.split(range(n)):
    print("train:", train, "test:", test)

train: [1 2 3 4] test: [5]
train: [6 7 8 9] test: [10]
train: [11 12 13 14] test: [15]
train: [16 17 18 19] test: [20]
train: [21 22 23 24] test: [25]

WenjieZ commented on July 23, 2024

A few comments:

  1. The variable window refers to the "available window" in your Rep-Holdout.
  2. It's a good idea to use balanced training and test sets across all folds of cross-validation. You are doing it right.

georgeblck commented on July 23, 2024

Thank you for your comments. I like your code and can see that it is very similar to Rep-Holdout, just not randomized and instead truly rolling forward.

WenjieZ commented on July 23, 2024

Yeah, randomization is generally to be avoided when it is not an essential part of an algorithm. In particular, in your example, Rep-Holdout can be seen as simple random sampling with replacement and my code as systematic sampling, which often results in lower variance and is thus preferred.
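
Here is a toy simulation (made-up numbers, just to illustrate the point) of estimating the mean of an ordered series: evenly spaced positions with a random start usually give a lower-variance estimate than positions drawn at random with replacement.

import numpy as np

rng = np.random.default_rng(0)
n, k, reps = 100, 5, 10000   # series length, points per draw, Monte Carlo repetitions
series = np.linspace(0, 1, n) + 0.1 * rng.standard_normal(n)  # ordered (trending) series

random_means, systematic_means = [], []
step = n // k
for _ in range(reps):
    # Simple random sampling with replacement (Rep-Holdout-style positions)
    idx = rng.integers(0, n, size=k)
    random_means.append(series[idx].mean())
    # Systematic sampling: evenly spaced positions with a random start
    start = rng.integers(0, step)
    systematic_means.append(series[start + step * np.arange(k)].mean())

print("variance, random with replacement:", np.var(random_means))
print("variance, systematic:             ", np.var(systematic_means))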

georgeblck commented on July 23, 2024

Thank you for the insights!
What are your thoughts, then, on the paper I linked to and on the performance of the different methods?

chasse20 commented on July 23, 2024

> Thanks for your link; I also read this paper yesterday. I have some questions. I don't think Rep-Holdout works like your demo: it gives an available window in which to split, so the train and test sets are not of fixed length.
>
> It may be something like this:
>
> cv.split(range(10)):
> train:[1,2,3,4] test:[5,6,7,8,9,10]
> train:[1,2,3,4,5] test:[6,7,8,9,10]
> train:[1,2,3] test:[4,5,6,7,8,9,10]
> train:[1,2,3,4,5,6,7] test:[8,9,10]

The method described in that paper is very ambiguous, but what you describe seems most likely to be what the author meant. The last one WenjieZ posted was just a rolling block method. I do agree with him, though, on forgoing randomness, and would rather just exhaust each fold as you have shown, with the training set growing from [1] to [1,2,3,4,5,6,7,8,9] and the remainder used as the test set.

WenjieZ commented on July 23, 2024

> Thanks for your link; I also read this paper yesterday. I have some questions. I don't think Rep-Holdout works like your demo: it gives an available window in which to split, so the train and test sets are not of fixed length.
>
> It may be something like this:
>
> cv.split(range(10)):
> train:[1,2,3,4] test:[5,6,7,8,9,10]
> train:[1,2,3,4,5] test:[6,7,8,9,10]
> train:[1,2,3] test:[4,5,6,7,8,9,10]
> train:[1,2,3,4,5,6,7] test:[8,9,10]

Increase the max_test_size parameter (which defaults to 1) if this split is what you are looking for.
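
For instance, assuming the same parameter behaviour as in the snippets above, something like this should give that kind of expanding split (with 0-based indices):

import numpy as np
from tscv import GapRollForward

# Training set grows by one sample per fold; test set is everything after it
cv = GapRollForward(min_train_size=1,
                    min_test_size=1,
                    max_test_size=np.inf,
                    roll_size=1)
for train, test in cv.split(range(10)):
    print("train:", train, "test:", test)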
