
forecastxgb-r-package

The forecastxgb package provides time series modelling and forecasting functions that combine the machine learning approach of Chen, He and Benesty's xgboost with the convenient handling of time series and familiar API of Rob Hyndman's forecast. It applies to time series the Extreme Gradient Boosting proposed in Greedy Function Approximation: A Gradient Boosting Machine, by Jerome Friedman in 2001. xgboost has become an important machine learning algorithm; nicely explained in this accessible documentation.


Warning: this package is under active development and is some way off a CRAN release (realistically, not until some time in 2017). Currently the forecasting results with the default settings are, frankly, pretty rubbish, but there is hope that better settings can be found. The API and default values of arguments should be expected to keep changing.

Installation

Only on GitHub, but a CRAN release is planned for November 2016. Comments and suggestions welcomed.

devtools::install_github("ellisp/forecastxgb-r-package/pkg")

This implementation uses as explanatory features:

  • lagged values of the response variable
  • dummy variables for seasons
  • current and lagged values of any external regressors supplied as xreg

Usage

Basic usage

The workhorse function is xgbar. This fits a model to a time series. Under the hood, it creates a matrix of explanatory variables based on lagged versions of the response time series, and (optionally) dummy variables of some sort for seasons. That matrix is then fed as the feature set for xgboost to do its stuff.
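The lag-matrix construction can be sketched in base R (an illustrative reconstruction, not the package's actual internals):

```r
y <- as.numeric(AirPassengers)
maxlag <- 12

# embed() returns columns y[t], y[t-1], ..., y[t-maxlag]
lagged <- embed(y, maxlag + 1)
response <- lagged[, 1]                 # current value, the training target
X <- lagged[, -1]                       # lag1 ... lag12 as features
colnames(X) <- paste0("lag", 1:maxlag)

# seasonal dummies for the month of each effective observation
months <- cycle(AirPassengers)[-(1:maxlag)]
season <- outer(months, 1:12, "==") * 1
colnames(season) <- paste0("season", 1:12)
```

Note how the first maxlag observations are lost in building the features; this is the difference between "original" and "effective" observations reported by summary().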

Univariate

Usage with default values is straightforward. Here it is fit to Australian monthly gas production 1956-1995, an example dataset provided in forecast:

library(forecastxgb)
model <- xgbar(gas)

(Note: the "Stopping. Best iteration..." message printed to the screen is produced by xgboost::xgb.cv, which uses cat() rather than message() to report on its processing.)

By default, xgbar uses row-wise cross-validation to determine the best number of rounds of iterations for the boosting algorithm without overfitting. A final model is then fit on the full available dataset. The relative importance of the various features in the model can be inspected by importance_xgb() or, more conveniently, the summary method for objects of class xgbar.

summary(model)

Importance of features in the xgboost model:
    Feature         Gain        Cover   Frequency
 1:   lag12 5.097936e-01 0.1480752533 0.078475336
 2:   lag11 2.796867e-01 0.0731403763 0.042600897
 3:   lag13 1.043604e-01 0.0355137482 0.031390135
 4:   lag24 7.807860e-02 0.1320115774 0.069506726
 5:    lag1 1.579312e-02 0.1885383502 0.181614350
 6:   lag23 5.616290e-03 0.0471490593 0.042600897
 7:    lag9 2.510372e-03 0.0459623734 0.040358744
 8:    lag2 6.759874e-04 0.0436179450 0.053811659
 9:   lag14 5.874155e-04 0.0311432706 0.026905830
10:   lag10 5.467606e-04 0.0530535456 0.053811659
11:    lag6 3.820611e-04 0.0152243126 0.033632287
12:    lag4 2.188107e-04 0.0098697540 0.035874439
13:   lag22 2.162973e-04 0.0103617945 0.017937220
14:   lag16 2.042320e-04 0.0098118669 0.013452915
15:   lag21 1.962725e-04 0.0149638205 0.026905830
16:   lag18 1.810734e-04 0.0243994211 0.029147982
17:    lag3 1.709305e-04 0.0132850941 0.035874439
18:    lag5 1.439827e-04 0.0231837916 0.033632287
19:   lag15 1.313859e-04 0.0143560058 0.031390135
20:   lag17 1.239889e-04 0.0109696093 0.017937220
21: season7 1.049934e-04 0.0081041968 0.015695067
22:    lag8 9.773024e-05 0.0123299566 0.026905830
23:   lag19 7.733822e-05 0.0112879884 0.015695067
24:   lag20 5.425515e-05 0.0072648336 0.011210762
25:    lag7 3.772907e-05 0.0105354559 0.020179372
26: season4 4.067607e-06 0.0010709117 0.002242152
27: season5 2.863805e-06 0.0022286541 0.006726457
28: season6 2.628821e-06 0.0021707670 0.002242152
29: season9 9.226827e-08 0.0003762663 0.002242152
    Feature         Gain        Cover   Frequency

 35 features considered.
476 original observations.
452 effective observations after creating lagged features.

We see in the case of the gas data that the most important feature in explaining gas production is the production 12 months previously; and then other features decrease in importance from there but still have an impact.

Forecasting is the main purpose of this package, and a forecast method is supplied. The resulting objects are of class forecast and familiar generic functions work with them.

fc <- forecast(model, h = 12)
plot(fc)

plot of chunk unnamed-chunk-5

Note that prediction intervals are not currently available.

See the vignette for more extended examples.

With external regressors

External regressors can be added by using the xreg argument familiar from other forecast functions like auto.arima and nnetar. xreg can be a vector or ts object but is easiest to integrate into the analysis if it is a matrix (even a matrix with one column) with well-chosen column names; that way feature names persist meaningfully.

The example below, with data taken from the fpp package supporting Athanasopoulos and Hyndman's Forecasting Principles and Practice book, shows income being used to explain consumption. In the same way that the response variable y is expanded into lagged versions of itself, each column in xreg is expanded into lagged versions, which are then treated as individual features for xgboost.

library(fpp)
consumption <- usconsumption[ ,1]
income <- matrix(usconsumption[ ,2], dimnames = list(NULL, "Income"))
consumption_model <- xgbar(y = consumption, xreg = income)
summary(consumption_model)

Importance of features in the xgboost model:
        Feature        Gain       Cover   Frequency
 1:        lag2 0.253763903 0.082908446 0.124513619
 2:        lag1 0.219332682 0.114608734 0.171206226
 3: Income_lag0 0.115604367 0.183107958 0.085603113
 4:        lag3 0.064652150 0.093105742 0.089494163
 5:        lag8 0.055645114 0.099756152 0.066147860
 6: Income_lag8 0.050460959 0.049434715 0.050583658
 7: Income_lag1 0.047187235 0.088561295 0.050583658
 8: Income_lag6 0.040512834 0.029150964 0.050583658
 9:        lag6 0.031876878 0.044225227 0.054474708
10: Income_lag2 0.020355402 0.015739304 0.031128405
11: Income_lag5 0.018011250 0.036577256 0.035019455
12:        lag5 0.017930780 0.032143649 0.035019455
13:        lag7 0.016674036 0.034249612 0.027237354
14: Income_lag4 0.015952784 0.025714919 0.038910506
15: Income_lag7 0.009850701 0.021724673 0.019455253
16:        lag4 0.008819146 0.028929284 0.038910506
17: Income_lag3 0.008720737 0.013855021 0.019455253
18:     season4 0.003152234 0.001551762 0.003891051
19:     season3 0.001496807 0.004655287 0.007782101

 20 features considered.
164 original observations.
156 effective observations after creating lagged features.

We see that the two most important features explaining consumption are the two previous quarters' values of consumption; followed by the income in this quarter; and so on.

The challenge of using external regressors in a forecasting environment is that to forecast, you need values of the future external regressors. One way this is sometimes done is by first forecasting the individual regressors. In the example below we do this, making sure the data structure is the same as the original xreg. When the new value of xreg is given to forecast, it forecasts forward the number of rows of the new xreg.

income_future <- matrix(forecast(xgbar(usconsumption[,2]), h = 10)$mean, 
                        dimnames = list(NULL, "Income"))
plot(forecast(consumption_model, xreg = income_future))

plot of chunk unnamed-chunk-7

Options

Seasonality

Currently there are three methods of treating seasonality.

  • The current default method is to throw dummy variables for each season into the mix of features for xgboost to work with.
  • An alternative is to perform classic multiplicative seasonal adjustment on the series before feeding it to xgboost. This seems to work better.
  • A third option is to create a set of pairs of Fourier transform variables and use them as regressors.
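The Fourier option can be sketched in base R (illustrative only; the package's actual implementation may differ, and forecast::fourier is the natural tool for this):

```r
# K pairs of sin/cos terms at the seasonal frequency of the series
fourier_terms <- function(y, K) {
  t <- seq_along(y)
  f <- frequency(y)
  cols <- lapply(1:K, function(k) {
    m <- cbind(sin(2 * pi * k * t / f), cos(2 * pi * k * t / f))
    colnames(m) <- paste0(c("S", "C"), k)
    m
  })
  do.call(cbind, cols)
}

head(fourier_terms(AirPassengers, K = 2))
```

The appeal over seasonal dummies is parsimony: 2 * K columns rather than f - 1, which matters for high-frequency data.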
[Three forecast plots comparing the seasonality methods; each forecast call was made without h, printing "No h provided so forecasting forward 24 periods."]

All methods perform quite poorly at the moment, suffering from the difficulty the default settings have in dealing with non-stationary data (see below).

Transformations

The data can be transformed by a modulus power transformation (as per John and Draper, 1980) before feeding to xgboost. This transformation is similar to a Box-Cox transformation, but works with negative data. Leaving the lambda parameter as 1 will effectively switch off this transformation.
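The modulus transformation can be sketched as follows (reconstructed from the John and Draper formula; the package's own implementation is JDMod, which may differ in detail):

```r
# Modulus power transform of John and Draper (1980); unlike Box-Cox it is
# defined for negative values, being applied symmetrically around zero.
jd_mod <- function(y, lambda) {
  if (lambda == 0) {
    sign(y) * log(abs(y) + 1)
  } else {
    sign(y) * ((abs(y) + 1) ^ lambda - 1) / lambda
  }
}

jd_mod(c(-3, 0, 3), lambda = 1)  # lambda = 1 returns the data unchanged
```

With lambda = 1 the expression collapses to sign(y) * abs(y) = y, which is why that value switches the transformation off.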

[Two forecast plots comparing transformation settings; each forecast call was made without h, printing "No h provided so forecasting forward 24 periods."]

Version 0.0.9 of forecastxgb gave lambda the default value of BoxCox.lambda(abs(y)). This returned spectacularly bad forecasting results. Forcing lambda to be between 0 and 1 helped a little, but still gave very bad results. So far there isn't evidence (but neither has there been enough investigation) that a Box-Cox transformation helps xgbar with its model fitting at all.

Non-stationarity

From experiments so far, it seems the basic idea of xgboost struggles in this context with extrapolation into a new range of variables not in the training set. This suggests better results might be obtained by transforming the series into a stationary one before modelling - a similar approach to that taken by forecast::auto.arima. This option is available by trend_method = "differencing" and seems to perform well - certainly better than without - and it will probably be made a default setting once more experience is available.

model <- xgbar(AirPassengers, trend_method = "differencing", seas_method = "fourier")
plot(forecast(model, 24))

plot of chunk unnamed-chunk-10

Future developments

Future work might include:

  • additional automated time-dependent features (eg dummy variables for trading days, Easter, etc)
  • ability to include xreg values that don't get lagged
  • some kind of automated multiple variable forecasting, similar to a vector-autoregression.
  • better choices of defaults for values such as lambda (for power transformations), K (for Fourier transforms) and, most likely to be effective, maxlag.

forecastxgb-r-package's People

Contributors

ellisp


forecastxgb-r-package's Issues

prediction intervals

We're reluctant to add this xgboost functionality to forecastHybrid until

  1. this time series implementation has been proven to work in a wide variety of situations (eg against Mcomp and Tcomp data at a minimum); and
  2. we have prediction intervals of some sort.

We might be able to mimic the approach used by forecast::nnetar.
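A sketch of the nnetar-style approach (an assumption about how it could work here, not the package's method): iterate the fitted model forward many times, adding bootstrapped in-sample residuals at each step, and take quantiles of the simulated paths. step_fn below is a hypothetical one-step predictor.

```r
# step_fn: function taking a vector of lags (newest first) and returning a
# one-step point forecast; residuals: in-sample residuals to resample from.
simulate_paths <- function(step_fn, last_lags, residuals, h, nsim = 1000) {
  paths <- replicate(nsim, {
    lags <- last_lags
    sapply(1:h, function(i) {
      yhat <- step_fn(lags) + sample(residuals, 1)
      lags <<- c(yhat, lags[-length(lags)])  # roll the lag window forward
      yhat
    })
  })
  # rows are the lower/upper bounds of an 80% interval, columns are horizons
  apply(paths, 1, quantile, probs = c(0.1, 0.9))
}
```

Plugging in the xgboost model's predict step for step_fn would give simulation-based intervals analogous to forecast::nnetar's.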

Passing params to xgboost

It appears that passing param arguments to xgboost() and xgb.train() doesn't have any impact. For example,

> library(forecastxgb)
> set.seed(3)
> a <- xgbts(AirPassengers, params = list(eta = .0001))
Stopping. Best iteration: 64
> 
> set.seed(3)
> a <- xgbts(AirPassengers)
Stopping. Best iteration: 64

Any ideas what's going on here?

daily data - better treatment of series with high value of frequency

need a way to deal with issues like this, raised in #20 :

bla_2 <- ts(runif(1076, min = 5000, max = 10000), start = c(2013, yday("2013-12-03")), 
            frequency = 365.25)  # yday() is from lubridate

bla_2_XGB_model <- xgbar(y = bla_2)

It's superficially the non-integer frequency, but more broadly we need a way of handling daily data that takes into account leap years, and has a more sophisticated way than 365 or 366 dummy variables. Could draw on http://robjhyndman.com/hyndsight/dailydata/.
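One possible approach, following the linked advice (an assumption, not an implemented feature): day-of-week dummies plus a few Fourier pairs for the annual cycle, rather than hundreds of daily dummies.

```r
# dates for 1076 daily observations starting 2013-12-03
dates <- seq(as.Date("2013-12-03"), by = "day", length.out = 1076)
t <- seq_along(dates)

# 7 day-of-week dummies instead of 365/366 seasonal dummies
dow <- outer(weekdays(dates), unique(weekdays(dates)), "==") * 1

# annual cycle approximated by K Fourier pairs on period 365.25,
# which sidesteps the non-integer frequency and handles leap years
K <- 3
annual <- do.call(cbind, lapply(1:K, function(k)
  cbind(sin(2 * pi * k * t / 365.25), cos(2 * pi * k * t / 365.25))))

features <- cbind(dow, annual)  # 7 + 2 * K = 13 columns
```

Holidays and trading days could be added as further dummy columns on the same date index.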

Handle NAs

eg

gold_model <- xgbar(gold, maxlag = 100)
 Error in xgb.DMatrix(data, label = label) : 
  There are NAN in the matrix, however, you did not set missing=NAN

overfitting

The in-sample accuracy is astonishingly and suspiciously good. Needs thorough checking. It may be, though, that proper investigation of #6 will reveal the strengths and weaknesses.

Better way to choose maxlag

The choice of maxlag is the most obvious way to improve overall performance. I see two ways ahead:

a. do some comprehensive testing of different values and work out a better default formula
b. let the user do it by brute force - some kind of cross-validation to choose the best value.

We will probably want to do both of these - ie have good-performing defaults, but also the option to determine the optimal value of the hyperparameter. Doing b. first will help with a. anyway.
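Option b. could be sketched as a holdout search over candidate maxlag values. This is illustrative only: a plain linear autoregression via lm.fit stands in for the xgboost fit, and the function names are hypothetical.

```r
# RMSE on a holdout of the last h observations for a given maxlag,
# using one-step-ahead forecasts fed with the actual values
holdout_rmse <- function(y, maxlag, h = 12) {
  n <- length(y)
  train <- y[1:(n - h)]
  lagged <- embed(train, maxlag + 1)          # col 1 is y[t], rest are lags
  fit <- lm.fit(cbind(1, lagged[, -1]), lagged[, 1])
  preds <- sapply(1:h, function(i) {
    idx <- n - h + i
    lags <- rev(y[(idx - maxlag):(idx - 1)])  # lag1 first, matching embed()
    sum(fit$coefficients * c(1, lags))
  })
  sqrt(mean((y[(n - h + 1):n] - preds) ^ 2))
}

y <- as.numeric(AirPassengers)
rmses <- sapply(c(6, 12, 24), holdout_rmse, y = y)
```

Replacing the lm.fit stand-in with the real xgbar fit, and minimising over a grid of candidate lags, is the brute-force cross-validation described above.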

decompose with non-complete cycles creates NA problems

for example

> thedata <- subset(tourism, "quarterly")[[36]]
> mod1 <- xgbar(thedata$x, trend_method = "differencing", seas_method = "decompose")
 Error in xgb.DMatrix(data, label = label) : 
  There are NAN in the matrix, however, you did not set missing=NAN 

forecast.xgbar() is inaccessible with R 3.5.0

I have installed forecastxgb from GitHub, but forecast.xgbar() is unavailable even when the package is loaded into the workspace. The version of R being used is 3.5.0.

Reproducible example:

install_github("ellisp/forecastxgb-r-package/pkg")
library(forecastxgb)

sample_ts <- ts(sample(8:30, replace = TRUE, size = 25))

xgbar_season <- xgbar(sample_ts)

fcast <- forecast.xgbar(xgbar_season)

This returns the error:

Error in forecast.xgbar(xgbar_season) :
could not find function "forecast.xgbar"

xgbar() works fine. Additionally, the help files for the forecastxgb functions are unavailable and return an error saying that the forecastxgb.rdb file is corrupt.

Hyperparameter tuning for xgboost?

Can we also pass the params list to the xgbar function? I think this would be good functionality to include. I would also like to see custom objective functions supported in the xgbar function call; I think this is fairly easy to do.

Maxlag is negative for short time series

I tried running xgbar on a small time series -

y <- structure(c(11.3709584471467, 9.43530182860391, 10.3631284113373, 
10.632862604961, 10.404268323141, 9.89387548390852, 11.5115219974389, 
9.9053409615869, 12.018423713877, 9.93728590094758, 11.3048696542235, 
12.2866453927011, 8.61113929888766), .Tsp = c(1, 2, 12), class = "ts")

xgbar(y)

Executes with the following error -

Error in [.default(origy, -(1:(maxlag))) :
only 0's may be mixed with negative subscripts
In addition: Warning message:
In xgbar(y) :
y is too short for 24 to be the value of maxlag. Reducing maxlags to -2 instead.

I think this error can be fixed with a simple check for negative maxlag.

maxlag <- orign - f - round(f / 4)

if (maxlag < 0 ) {
    stop('Try a longer time-series as maxlag is negative')
}

lambda problem

from the CRAN checks:

 1. Error: Modulus transform works when lambda = 1 or 0 (@test-modulus-transform.R#21) 
  object 'y' not found
  1: expect_equal(y, JDMod(y, lambda = 1)) at testthat/test-modulus-transform.R:21
  2: compare(object, expected, ...)
  
  testthat results ================================================================
  OK: 25 SKIPPED: 0 FAILED: 1
  1. Error: Modulus transform works when lambda = 1 or 0 (@test-modulus-transform.R#21) 

How to predict step by step

Hi, I wonder to know how could I feed new data y to predict? Seem that the forecast function only use xgb model to predict next h ?

better treatment of seasons for short series

Building on the problems in #20, which have been fixed but probably not very well: for series of length < (f * 3 + 1), or perhaps even some higher threshold, we should probably not introduce seasonal dummy variables.

default value for h in forecast.xgbts()

forecast.xgbts() throws a warning if no h is provided and defaults to 24. You might want to save the frequency of the input time series in the xgbts object and default to 2 * frequency(inputSeries) as used in the "forecast" package.

> a <- xgbts(AirPassengers)
Stopping. Best iteration: 43
> forecast(a)
No h provided so forecasting forward 24 periods.
          Jan      Feb      Mar      Apr      May      Jun      Jul      Aug      Sep      Oct
1961 454.0111 446.6804 444.8749 503.9522 535.9165 621.6365 621.3412 603.3748 556.0723 474.5930
1962 494.8933 477.6807 470.5114 553.3421 621.3992 621.3412 621.3412 621.3412 602.2322 522.1175
          Nov      Dec
1961 419.3743 450.0060
1962 427.1246 468.4285
> b <- auto.arima(AirPassengers)
> forecast(b)
         Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
Jan 1961       446.7582 431.6858 461.8306 423.7070 469.8094
Feb 1961       420.7582 402.5180 438.9984 392.8622 448.6542
Mar 1961       448.7582 427.8241 469.6923 416.7423 480.7741
Apr 1961       490.7582 467.4394 514.0770 455.0952 526.4212
May 1961       501.7582 476.2770 527.2395 462.7880 540.7284
Jun 1961       564.7582 537.2842 592.2323 522.7403 606.7761
Jul 1961       651.7582 622.4264 681.0900 606.8991 696.6173
Aug 1961       635.7582 604.6796 666.8368 588.2275 683.2889
Sep 1961       537.7582 505.0258 570.4906 487.6983 587.8181
Oct 1961       490.7582 456.4516 525.0648 438.2908 543.2256
Nov 1961       419.7582 383.9466 455.5698 364.9891 474.5273
Dec 1961       461.7582 424.5023 499.0141 404.7803 518.7361
Jan 1962       476.5164 431.4567 521.5761 407.6036 545.4292
Feb 1962       450.5164 400.9938 500.0390 374.7781 526.2547
Mar 1962       478.5164 424.9010 532.1318 396.5188 560.5141
Apr 1962       520.5164 463.0993 577.9335 432.7045 608.3283
May 1962       531.5164 470.5341 592.4987 438.2520 624.7808
Jun 1962       594.5164 530.1661 658.8667 496.1011 692.9317
Jul 1962       681.5164 613.9659 749.0670 578.2068 784.8261
Aug 1962       665.5164 594.9105 736.1223 557.5340 773.4988
Sep 1962       567.5164 493.9820 641.0508 455.0552 679.9776
Oct 1962       520.5164 444.1657 596.8671 403.7481 637.2847
Nov 1962       449.5164 370.4497 528.5831 328.5943 570.4385
Dec 1962       491.5164 409.8239 573.2089 366.5785 616.4543
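The suggested default could be sketched as follows (hypothetical helper; it assumes the fitted object keeps the input series in a field such as object$y):

```r
# default h to twice the stored frequency when the caller supplies none,
# matching the convention used by the forecast package for seasonal series
default_h <- function(object, h = NULL) {
  if (is.null(h)) {
    h <- 2 * frequency(object$y)  # assumes the input series is kept as object$y
    message("No h provided so forecasting forward ", h, " periods.")
  }
  h
}
```

For AirPassengers (frequency 12) this would give h = 24, agreeing with the current hard-coded default only by coincidence; a quarterly series would get h = 8 instead.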

the meaning of the model reults

library(forecastxgb)
model <- xgbar(gas)
model$y
model$y2
model$x
model$model
model$fitted
model$maxlag
model$seas_method
model$diffs
model$lambda
model$method
library(fpp)
consumption <- usconsumption[ ,1]
income <- matrix(usconsumption[ ,2], dimnames = list(NULL, "Income"))
consumption_model <- xgbar(y = consumption, xreg = income)
consumption_model$origxreg
consumption_model$ncolxreg
Can you explain model$y, model$y2, model$x, model$model, model$fitted, model$maxlag, model$seas_method, model$diffs, model$lambda, model$method, consumption_model$origxreg and consumption_model$ncolxreg? Thank you

When I run the demo code, an error occurred - "result would be too long a vector"

When I type the code "model <- xgbar(gas)",
the following errors and warnings came out:
"Error in begin_iteration:end_iteration :
result would be too long a vector
In addition: Warning messages:
1: 'early.stop.round' is deprecated.
Use 'early_stopping_rounds' instead.
See help("Deprecated") and help("xgboost-deprecated").
2: In min(cv$test.rmse.mean) :
no non-missing arguments to min; returning Inf
3: In min(which(cv$test.rmse.mean == min(cv$test.rmse.mean))) :
no non-missing arguments to min; returning Inf"
This is my first attempt with R and forecastxgb, so I have no idea how to handle it.
Is it possible to help me? Thank you.

MAXLAG XREGS

Hi,
is there a way to set different maxlags for xregs and for Y?
For instance, I want xregs to have a maxlag of 3 and Y to have a maxlag of 12.
Thanks!
Nahuel

Training period

Hi.
I'd like to know if it is possible to change the training period in the xgbar function.

possibly use designmatrix package

It looks like you've already set up the covariates to feed into xgboost using lagged values of the time series - a sensible approach. It could also make sense to include fixed effects for day of week/month/etc.

I started building a package designmatrix a few years back to generate xreg values to feed into forecasting models and anticipated using it with forecastHybrid eventually. It is barely off the ground, but the basic idea is to make it easy to generate covariates for day of week, weekend, month, quarter, etc. Eventually interactions and holidays for these would be nice as well. If you want to import it, it could serve as a good excuse for me to restart and to finish development. Take a look here: https://github.com/dashaub/designmatrix

Bug - cannot handle non-integer frequency

xgbar.R appears to assign an incorrect number of rows to the matrix x for some values of maxlag

The following line of code:
x[ , maxlag + 1] <- time(y2)
returns this error message:
Error in x[, maxlag + 1] <- time(y2) : number of items to replace is not a multiple of replacement length

It appears that the error is caused by x and y2 having inconsistent lengths from R's handling of indexing with decimals, in the event that (as seems to usually be the case) maxlag is a floating point number.

Consider the outcome if maxlag = 54.75 and orign = 120:

n <- orign - maxlag
y2 <- ts(origy[-(1:(maxlag))], start = time(origy)[maxlag + 1], frequency = f)

n will be 120 - 54.75 = 65.25. In determining the length of y2 with the decimal indexing of maxlag, R rounds the index of 54.75 down to 54, which causes y2 to be of length 120 - 54 = 66.

However, when the matrix x is created, n = 65.25 is used for the number of rows. R rounds this number down to the nearest integer less than this value, 65, which creates a matrix with 65 rows:

ncolx <- ifelse(seas_method == "dummies", maxlag + f, maxlag + 1)
x <- matrix(0, nrow = n, ncol = ncolx)

Thus, y2 is of length 66, and x has 65 rows, which causes a "number of items to replace is not a multiple of replacement length" error when this line is run:

x[ , maxlag + 1] <- time(y2)
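A minimal fix could be to coerce maxlag to a whole number before it is used for both indexing and matrix sizing, so the two code paths agree (a sketch, reusing the variable names from the report above):

```r
# with orign = 120 and a fractional maxlag of 54.75, flooring first makes
# the length of y2 and the nrow of x consistent (both 66)
orign <- 120
maxlag <- 54.75
maxlag <- floor(maxlag)  # 54: indexing and nrow now agree
n <- orign - maxlag      # 66 effective observations
```

Any rounding rule would do; the essential point is that it happens once, before maxlag is used anywhere else.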

short monthly series fail

added as a (failing) test

test_that("works with series of 35 with frequency 12", {
  bla_1 <- ts(runif(35, min = 5000, max = 10000), start = c(2013,12), frequency = 12)
  expect_error(bla_1_XGB_model <- xgbts(y = bla_1), NA)
})

One of the two problems brought up in #20.

Add Box Cox option

or even sign(x) * BoxCox(abs(x)). At least implement it and see whether it helps as an option or not.
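The signed variant could be sketched like this (hypothetical helper, using the standard Box-Cox formula rather than forecast::BoxCox):

```r
# apply a Box-Cox transform to the magnitude and restore the sign;
# note zeros need care when lambda = 0, since log(0) is -Inf - unlike the
# modulus transform, which shifts by 1 before taking the power
signed_boxcox <- function(x, lambda) {
  bc <- function(z) if (lambda == 0) log(z) else (z ^ lambda - 1) / lambda
  sign(x) * bc(abs(x))
}

signed_boxcox(c(-4, 4), lambda = 0.5)
```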

Error in x[, maxlag + 1] <- time(y2) & Error in x[, maxlag + 2:f] <- seasons

Hi,
thanks for this great package and the new approach option for forecasting time series.
But I've ran into two problems with two different time series, while others are working without any problems.
1:

Error in x[, maxlag + 1] <- time(y2) : 
  number of items to replace is not a multiple of replacement length

2:

Error in x[, maxlag + 2:f] <- seasons : 
  number of items to replace is not a multiple of replacement length
In addition: Warning message:
In xgbts(y = ...) :
  y is too short for cross-validation.  Will validate on the most recent 20 per cent instead.

Using the stlf function from the forecasting package works without any errors.
Can you explain what causes these errors and how to avoid them so that xgb forecasting works?

Thanks in advance! 👍

"decompose" method doesn't work well in combination with differencing

for example, here are the four different seasonal adjustment methods with differencing on:

model5 <- xgbar(AirPassengers, maxlag = 24, trend_method = "differencing", seas_method = "dummies")
model6 <- xgbar(AirPassengers, maxlag = 24, trend_method = "differencing", seas_method = "decompose")
model7 <- xgbar(AirPassengers, maxlag = 24, trend_method = "differencing", seas_method = "fourier")
model8 <- xgbar(AirPassengers, maxlag = 24, trend_method = "differencing", seas_method = "none")

fc5 <- forecast(model5, h = 24)
fc6 <- forecast(model6, h = 24)
fc7 <- forecast(model7, h = 24)
fc8 <- forecast(model8, h = 24)

par(mfrow = c(2, 2), bty = "l")
plot(fc5, main = "dummies"); grid()
plot(fc6, main = "decompose"); grid()
plot(fc7, main = "fourier"); grid()
plot(fc8, main = "none"); grid()

image

Better handling of trends

Not currently satisfactory, as shown by

library(forecastxgb)
model <- xgbar(AirPassengers)
plot(forecast(model, h = 48))

image

fails when maxlag = 1

test case:

library(Mcomp)
thedata <- M1[[1]]
mod <- xgbts(thedata$x, maxlag = 1, nrounds_method = "cv")
fc <- forecast(mod, h = thedata$h)

error is in forecast.xgbts:

Error in `colnames<-`(`*tmp*`, value = c("lag1", "time")) : 
  length of 'dimnames' [2] not equal to array extent
