
tabnet's People

Contributors

cmcmaster1, cregouby, dfalbel, egillax, sebffischer, svenvw, zerweck


tabnet's Issues

Saving/Loading tabnet models

Hey, I'm wondering what the most effective way to save/load a tabnet model is. I know torch::torch_save() and torch::torch_load() exist, but I'm not sure whether there is a convenient way to save/load a whole fitted model.
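Not an official answer, but a minimal sketch of one approach, assuming the fitted object keeps its torch network at fit$fit$network (a slot name inferred from the package internals, not documented API): save the torch weights with torch_save() and the R-side wrapper with saveRDS() separately.

library(tabnet)
library(torch)

fit <- tabnet_fit(Species ~ ., iris, epochs = 1)

# torch tensors are external pointers, so they must go through torch_save();
# the rest of the fit object is plain R and survives saveRDS().
torch_save(fit$fit$network, "tabnet_network.pt")
saveRDS(fit, "tabnet_fit.rds")

# In a fresh session: restore the wrapper, then re-attach the weights.
fit2 <- readRDS("tabnet_fit.rds")
fit2$fit$network <- torch_load("tabnet_network.pt")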

CRAN release

@cregouby I am planning to make a CRAN release next week. Do you think there's something else we might want to implement for this release? Thanks!

predict() fails on a model trained with tabnet_pretrain.data.frame() followed by tabnet_fit.recipe()

Description

Some predictors get their $blueprint$ptypes changed between the pretrained_model and the result of tabnet_fit.recipe(tabnet_model = pretrained_model), switching between integer and double. This is due to step_normalize() in the recipe, which necessarily turns integers into doubles.

It makes the fitted_model unusable with the current recipes for predict() and for tabnet_explain().

Symptom

Some predictors switch from <double> to <integer> between pretrained_model$blueprint$ptypes and fitted_model$blueprint$ptypes, which causes the following error:

> model_explain <- tabnet_explain(model_fit, 
+                         new_data = supervised_baked_df)
Error: Can't convert from `funded_amnt` <double> to `funded_amnt` <integer> due to loss of precision.
* Locations: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,...
Run `rlang::last_error()` to see where the error occurred.
>

Reprex

library(tabnet)
library(tidymodels)
#> Registered S3 method overwritten by 'tune':
#>   method                   from   
#>   required_pkgs.model_spec parsnip
set.seed(123)

data("lending_club", package = "modeldata")
split <- initial_split(lending_club, strata = Class, prop = 9/10)
unsupervised <- training(split) %>% mutate(Class=NA)
supervised  <- testing(split)

prep_unsup <- recipe(Class ~ ., unsupervised) %>% step_normalize(all_numeric()) %>%  prep
unsupervised_baked_df <- prep_unsup %>% bake(new_data=NULL) %>% select(-Class)
pretrained_mod <- tabnet_pretrain(x=unsupervised_baked_df, y=rep(NULL, nrow(unsupervised_baked_df)),
                       epochs = 1, valid_split = 0.2, verbose = TRUE)
#> [Epoch 001] Loss: 5196720.679095 Valid loss: 2370932.957064

split <- initial_split(supervised, strata = Class)
train <- training(split) 
model_fit <- tabnet_fit(prep_unsup, train , tabnet_model = pretrained_mod, 
                                   valid_split = 0.2, epochs = 1, verbose=TRUE)
#> [Epoch 001] Loss: 0.953990 Valid loss: 0.548429

waldo::compare(pretrained_mod$blueprint$ptypes, model_fit$blueprint$ptypes)
#> `old$predictors$funded_amnt` is a double vector ()
#> `new$predictors$funded_amnt` is an integer vector ()
#> 
#> `old$predictors$delinq_2yrs` is a double vector ()
#> `new$predictors$delinq_2yrs` is an integer vector ()
#> 
#> `old$predictors$inq_last_6mths` is a double vector ()
#> `new$predictors$inq_last_6mths` is an integer vector ()
#> 
#> `old$predictors$acc_now_delinq` is a double vector ()
#> `new$predictors$acc_now_delinq` is an integer vector ()
#> 
#> `old$predictors$open_il_6m` is a double vector ()
#> `new$predictors$open_il_6m` is an integer vector ()
#> 
#> `old$predictors$open_il_12m` is a double vector ()
#> `new$predictors$open_il_12m` is an integer vector ()
#> 
#> `old$predictors$open_il_24m` is a double vector ()
#> `new$predictors$open_il_24m` is an integer vector ()
#> 
#> `old$predictors$total_bal_il` is a double vector ()
#> `new$predictors$total_bal_il` is an integer vector ()
#> 
#> `old$predictors$all_util` is a double vector ()
#> `new$predictors$all_util` is an integer vector ()
#> 
#> `old$predictors$inq_fi` is a double vector ()
#> `new$predictors$inq_fi` is an integer vector ()
#> 
#> And 7 more differences ...

Created on 2021-10-20 by the reprex package (v2.0.1)

vignettes need some polishing

Pretraining vignette: it seems that

  • screenshots are obsolete,
  • the text could be clearer,
  • some text is missing.

A regression vignette is needed to clarify #58

Tensor Size Error

Hello everyone,

Could someone please help me understand what the issue is here? Here is a very simple example: https://www.dropbox.com/s/i9muap0v0aqa45x/data.Rda?dl=0

library(tabnet)
load('data.Rda')
fit <- tabnet_fit(xx, yy)

Error in cpp_Function_apply(torch_variable_list(.env$variables)$ptr, .f_, :
Evaluation error: The size of tensor a (98) must match the size of tensor b (97) at non-singleton dimension 1
Exception raised from infer_size at ../aten/src/ATen/ExpandUtils.cpp:24 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits, std::allocator >) + 0x69 (0x7f6bf2b87b29 in /home/naghaeep/R/x86_64-pc-linux-gnu-library/3.6/torch/deps/./libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 0xd2 (0x7f6bf2b84ab2 in /home/naghaeep/R/x86_64-pc-linux-gnu-library/3.6/torch/deps/./libc10.so)
frame #2: at::infer_size(c10::ArrayRef, c10::ArrayRef) + 0x4d9 (0x7f6be1089069 in /home/naghaeep/R/x86_64-pc-linux-gnu-library/3.6/torch/deps/./libtorch_cpu.so)
frame #3: at::TensorIteratorBase::compute_shape(at::TensorIteratorConfig const&) + 0xde (0x7f6be10c383e in /home/na

And session info:
> sessionInfo()
R version 3.6.3 (2020-02-29)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 20.04.1 LTS

Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.9.0
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.9.0

locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] tabnet_0.1.0

loaded via a namespace (and not attached):
[1] Rcpp_1.0.6 ps_1.6.0 fansi_0.4.2 withr_2.4.2
[5] utf8_1.2.1 crayon_1.4.1 R6_2.5.0 lifecycle_1.0.0
[9] hardhat_0.1.5 magrittr_2.0.1 coro_1.0.1 pillar_1.6.0
[13] rlang_0.4.10 callr_3.7.0 vctrs_0.3.7 ellipsis_0.3.1
[17] bit64_4.0.5 torch_0.3.0 glue_1.4.2 bit_4.0.4
[21] processx_3.5.1 compiler_3.6.3 pkgconfig_2.0.3 tibble_3.1.1

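Not a confirmed diagnosis, but an off-by-one between tensor sizes (98 vs. 97) often comes from the last, smaller batch interacting with (ghost) batch normalization. A hedged experiment while the root cause is investigated: choose batch_size and virtual_batch_size values that divide the number of rows evenly, e.g.

library(tabnet)
load('data.Rda')

n <- nrow(xx)
# A single full batch; virtual_batch_size = n also disables ghost-batch splitting.
fit <- tabnet_fit(xx, yy, batch_size = n, virtual_batch_size = n)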

Rstudio on EC2 instance crashes when torch is called

Hi,

I am working on an EC2 instance. When I call the tabnet_fit function, it crashes the R session, saying all the workspace data is lost.

I am only using a few numeric columns and a target column from a reasonably small dataset.

Any help will be much appreciated. Thanks.

Release tabnet 0.3.0

Prepare for release:

  • Check current CRAN check results
  • Polish NEWS
  • devtools::build_readme()
  • urlchecker::url_check()
  • devtools::check(remote = TRUE, manual = TRUE)
  • devtools::check_win_devel()
  • rhub::check_for_cran()
  • revdepcheck::revdep_check(num_workers = 4)
  • Update cran-comments.md
  • Review pkgdown reference index for, e.g., missing topics
  • Draft blog post

Submit to CRAN:

  • usethis::use_version('minor')
  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • usethis::use_github_release()
  • usethis::use_dev_version()
  • Finish blog post
  • Tweet
  • Add link to blog post in pkgdown news menu

Release tabnet 0.1.0

Prepare for release:

  • Check that description is informative
  • Check licensing of included files
  • devtools::build_readme()
  • usethis::use_cran_comments()
  • devtools::check(remote = TRUE, manual = TRUE)
  • devtools::check_win_devel()
  • rhub::check_for_cran()
  • urlchecker::url_check()
  • Update cran-comments.md
  • Review pkgdown reference index for, e.g., missing topics
  • Draft blog post

Submit to CRAN:

  • usethis::use_version('minor')
  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • usethis::use_news_md()
  • usethis::use_github_release()
  • usethis::use_dev_version()
  • Update install instructions in README
  • Finish blog post
  • Tweet
  • Add link to blog post in pkgdown news menu

`tabnet_fit()` shall allow continuing a model training with changed hyperparameters.

This is needed in order to:

  • allow model fine-tuning (i.e. manually setting a very low learn_rate),
  • restart training from a previous checkpoint, before the model starts to overfit,
  • allow a future unsupervised training step.

This requires splitting model initialization from supervised training, while keeping the ability to save and restore the model at each step. See the sketch below.
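A sketch of what the call could look like (hedged: it reuses the tabnet_model=, from_epoch= and checkpoint_epochs= arguments that already appear elsewhere in this repo, and assumes learn_rate is forwarded to tabnet_config(); x and y are the training data):

fit1 <- tabnet_fit(x, y, epochs = 20, checkpoint_epochs = 5)

# Restart from the epoch-10 checkpoint and fine-tune for a few more epochs
# with a manually controlled, very low learning rate.
fit2 <- tabnet_fit(x, y, tabnet_model = fit1, from_epoch = 10,
                   epochs = 5, learn_rate = 1e-5)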

Release tabnet 0.2.0

Prepare for release:

  • Check current CRAN check results
  • Polish NEWS
  • devtools::build_readme()
  • urlchecker::url_check()
  • devtools::check(remote = TRUE, manual = TRUE)
  • devtools::check_win_devel()
  • rhub::check_for_cran()
  • revdepcheck::revdep_check(num_workers = 4)
  • Update cran-comments.md
  • Review pkgdown reference index for, e.g., missing topics
  • Draft blog post

Submit to CRAN:

  • usethis::use_version('minor')
  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • usethis::use_github_release()
  • usethis::use_dev_version()
  • Finish blog post
  • Tweet
  • Add link to blog post in pkgdown news menu

Add code coverage measure and badge

We are the only tabnet repo around with published tests that pass! We should be proud of it!
(Note that I've never done that before.)

tabnet_config and tabnet_fit

It would seem natural for tabnet_config() to be callable outside of the tabnet_fit() function, so that hyperparameters can be declared upfront. At the moment tabnet_config() creates a list, but the do.call() on "..." inside tabnet_fit() turns this into a list of a list. We would still like to preserve the ability to override hyperparameters within the tabnet_fit() call.

This could be achieved by making config a named argument and updating it from "..." using modifyList. For example:

tabnet_pretrain.data.frame <- function(x, y, tabnet_model = NULL,
                                       config = tabnet_config(), ...,
                                       from_epoch = NULL) {
  processed <- hardhat::mold(x, y)
  # Named arguments passed through ... override the upfront config values.
  config <- modifyList(config, list(...))
  tabnet_bridge(processed, config = config, tabnet_model, from_epoch,
                task = "unsupervised")
}
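With that change, hyperparameters could be declared upfront and still overridden at call time; a usage sketch, assuming the patched signature above:

cfg <- tabnet_config(epochs = 50, learn_rate = 2e-2)
# valid_split passed through ... overrides the corresponding entry in cfg.
pretrained <- tabnet_pretrain(x, y, config = cfg, valid_split = 0.2)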

Dataloader single worker and default `batch_size` makes R tabnet 4-15x slower than pytorch tabnet

R code:

library(data.table)
library(ROCR)
library(tabnet)
library(Matrix)


d_train <- fread("https://s3.amazonaws.com/benchm-ml--main/train-0.1m.csv", stringsAsFactors=TRUE)
d_test <- fread("https://s3.amazonaws.com/benchm-ml--main/test.csv")

## align cat. values (factors)
d_train_test <- rbind(d_train, d_test)
n1 <- nrow(d_train)
n2 <- nrow(d_test)
d_train <- d_train_test[1:n1,]
d_test <- d_train_test[(n1+1):(n1+n2),]


system.time({
  md <- tabnet_fit(dep_delayed_15min ~ . ,d_train, epochs = 10, verbose = TRUE)
})


phat <- predict(md, d_test, type = "prob")$.pred_Y
rocr_pred <- prediction(phat, d_test$dep_delayed_15min)
performance(rocr_pred, "auc")@y.values[[1]]

Python code:

from pytorch_tabnet.tab_model import TabNetClassifier
import torch

import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn import metrics


d_train = pd.read_csv("https://s3.amazonaws.com/benchm-ml--main/train-0.1m.csv")
d_test = pd.read_csv("https://s3.amazonaws.com/benchm-ml--main/test.csv")


d_all = pd.concat([d_train,d_test])

vars_cat = ["Month","DayofMonth","DayOfWeek","UniqueCarrier", "Origin", "Dest"]
vars_num = ["DepTime","Distance"]
for col in vars_cat:
  d_all[col] = preprocessing.LabelEncoder().fit_transform(d_all[col])

X_all = d_all[vars_num+vars_cat]
y_all = np.where(d_all["dep_delayed_15min"]=="Y",1,0)

cat_idxs = [ i for i, col in enumerate(X_all.columns) if col in vars_cat]
cat_dims = [ len(np.unique(X_all.iloc[:,i].values)) for i in cat_idxs]

X_train = X_all[0:d_train.shape[0]].to_numpy()
y_train = y_all[0:d_train.shape[0]]
X_test = X_all[d_train.shape[0]:(d_train.shape[0]+d_test.shape[0])].to_numpy()
y_test = y_all[d_train.shape[0]:(d_train.shape[0]+d_test.shape[0])]


md = TabNetClassifier(cat_idxs=cat_idxs,
                       cat_dims=cat_dims,
                       cat_emb_dim=1
)

%%time
md.fit( X_train=X_train, y_train=y_train,
    max_epochs=10, patience=0
)


y_pred = md.predict_proba(X_test)[:,1]
print(metrics.roc_auc_score(y_test, y_pred))

m5.2xlarge (8 cores):

R:

[Epoch 001] Loss: 0.495622
[Epoch 002] Loss: 0.455483
[Epoch 003] Loss: 0.450127
[Epoch 004] Loss: 0.449376
[Epoch 005] Loss: 0.448024
[Epoch 006] Loss: 0.447154
[Epoch 007] Loss: 0.446089
[Epoch 008] Loss: 0.444280
[Epoch 009] Loss: 0.443956
[Epoch 010] Loss: 0.443126
    user   system  elapsed
2927.067    6.196 1502.377
>
>
> phat <- predict(md, d_test, type = "prob")$.pred_Y
> rocr_pred <- prediction(phat, d_test$dep_delayed_15min)
> performance(rocr_pred, "auc")@y.values[[1]]
[1] 0.70621

Python:

No early stopping will be performed, last training weights will be used.
epoch 0  | loss: 0.48224 |  0:00:08s
epoch 1  | loss: 0.45447 |  0:00:16s
epoch 2  | loss: 0.45087 |  0:00:25s
epoch 3  | loss: 0.44885 |  0:00:33s
epoch 4  | loss: 0.44667 |  0:00:42s
epoch 5  | loss: 0.44576 |  0:00:50s
epoch 6  | loss: 0.44538 |  0:00:58s
epoch 7  | loss: 0.44727 |  0:01:07s
epoch 8  | loss: 0.4467  |  0:01:15s
epoch 9  | loss: 0.44514 |  0:01:24s
CPU times: user 5min 14s, sys: 555 ms, total: 5min 15s
Wall time: 1min 26s

In [23]:

In [23]: y_pred = md.predict_proba(X_test)[:,1]

In [24]: print(metrics.roc_auc_score(y_test, y_pred))
0.7031382841315941

Some parameter defaults differ between the R and Python libraries, but the runtime difference is still far too large. More details on my experiments here: szilard/GBM-perf#52
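A hedged mitigation until the dataloader gets multiple workers: raise batch_size, which is forwarded to tabnet_config() and cuts the per-batch R overhead. The value below is illustrative, not tuned:

md <- tabnet_fit(dep_delayed_15min ~ ., d_train, epochs = 10, verbose = TRUE,
                 batch_size = 16384)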

Allow missing values in predictors during pretraining through a static NA_mask

I can't get comfortable with the idea of preventing NAs

stopifnot("Error: found missing values in the predictor data frame" = sum(is.na(x))==0)

Specifically, the ames dataset uses the trick of imputing NA with zero for a lot of numerical predictors, which can only bias the model.
As an example, take the "Masonry veneer area" predictor:

suppressPackageStartupMessages(library(tidymodels))
data("ames", package = "modeldata")
qplot(ames$Mas_Vnr_Area)
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

Created on 2021-10-16 by the reprex package (v2.0.1)

A house recorded with a "Masonry veneer area" of 0 does not have a veneer of zero area; it simply has none. The same goes for "Pool area" and many more predictors.
Knowing that tabnet_pretrain() already applies a random obfuscation mask to values, we could force a binary mask of the predictors' NAs to be part of it, as sketched below.
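A minimal torch sketch of the idea (obfuscation_mask here is a stand-in for the one tabnet_pretrain() already draws; all names are illustrative):

library(torch)

x <- torch_tensor(matrix(c(1, NA, 3, 4, NA, 6), nrow = 2))
obfuscation_mask <- torch_rand_like(x) < 0.15    # stand-in for the random mask

na_mask   <- torch_isnan(x)                      # static mask of the NAs
full_mask <- torch_logical_or(obfuscation_mask, na_mask)

# Zero-fill the NAs so the network sees valid numbers; full_mask tells the
# reconstruction loss to treat those cells as "to be predicted".
x_filled <- torch_where(na_mask, torch_zeros_like(x), x)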

Release tabnet 0.0.1

Prepare for release:

  • Check that description is informative
  • Check licensing of included files
  • devtools::build_readme()
  • usethis::use_cran_comments()
  • devtools::check(remote = TRUE, manual = TRUE)
  • devtools::check_win_devel()
  • rhub::check_for_cran()
  • urlchecker::url_check()
  • Update cran-comments.md

Submit to CRAN:

  • usethis::use_version('patch')
  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • usethis::use_news_md()
  • usethis::use_github_release()
  • usethis::use_dev_version()
  • Update install instructions in README

Functions comparisons in tests

With R 4.1 it looks like all.equal() on functions (via all.equal.default) is deprecated, and we see warnings like:

> testthat::expect_identical(torch::nn_cross_entropy_loss(), torch::nn_cross_entropy_loss())
Error: torch::nn_cross_entropy_loss() not identical to torch::nn_cross_entropy_loss().
Objects equal but not identical
In addition: Warning messages:
1: 'all.equal.default(<function>)' is deprecated.
Use 'all.equal(*)' instead.
See help("Deprecated") 
2: 'all.equal.default(<function>)' is deprecated.
Use 'all.equal(*)' instead.
See help("Deprecated") 
3: 'all.equal.default(<function>)' is deprecated.
Use 'all.equal(*)' instead.
See help("Deprecated") 
4: 'all.equal.default(<function>)' is deprecated.
Use 'all.equal(*)' instead.
See help("Deprecated") 
5: 'all.equal.default(<function>)' is deprecated.
Use 'all.equal(*)' instead.
See help("Deprecated") 
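A hedged workaround for the tests: avoid all.equal() on closures entirely and compare the instantiated modules instead.

a <- torch::nn_cross_entropy_loss()
b <- torch::nn_cross_entropy_loss()
# Identical class vectors imply the same module type, without deparsing closures.
testthat::expect_identical(class(a), class(b))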

Fixing sparsemax and adding entmax

Just jotting down some insights here. The problem with the sparsemax implementation is that d <- input$size(-1) does not behave like it would in pytorch.

The size method defined in torch:

size = function(dim) {
      x <- cpp_tensor_dim(self$ptr)
      
      if (missing(dim))
        return(x)
      
      x[dim]
    }

If dim is set to -1, this gives a vector of the sizes of all dimensions except the first. This is different behaviour from PyTorch, where size(-1) gives a single number: the size of the final dimension.
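A quick illustration of the difference, derived by hand from the size() method quoted above (so treat the printed output as a sketch):

library(torch)
x <- torch_randn(2, 3, 4)
x$size(-1)
#> [1] 3 4    # R: x[-1] drops the *first* element of the shape vector
# PyTorch's x.size(-1) would instead return 4, the size of the last dimension.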

The following work-around can be put in sparsemax:

sparsemax <- torch::nn_module(
  "sparsemax",
  initialize = function(dim = -1) {
    self$dim <- dim
  },
  forward = function(input) {
    # Work-around: resolve dim = -1 to the index of the last dimension.
    # input$dim() returns the number of dimensions, which in R's 1-based
    # indexing is also the index of the last dimension.
    dim <- if (self$dim == -1) input$dim() else self$dim
    sparsemax_function(input, dim)
  }
)

This works completely fine when I test it on my machine, but it fails in CI when I submit a PR. I also have entmax working on my machine.

Failed to install 'tabnet' from GitHub on Windows (also, R session aborts on Mac)

I'm trying to install torch and tabnet. I tried remotes::install_github("mlverse/tabnet") and was able to install both torch and tabnet on my Mac. However, executing code to fit a tabnet model causes the session to abort (whether I run it in RStudio or base R). So I tried to get it working on my Windows computer at work, where instead the installation itself fails (using remotes::install_github). The error says

Error: Failed to install 'tabnet' from GitHub: (converted from warning) cannot remove prior installation of package 'utf8'

This is a somewhat multifaceted issue, since I also ran into a similar error when installing torch on Windows (there the package that couldn't be removed was 'ps' rather than 'utf8'), and since I'm describing one kind of problem on a Mac and another on Windows. Sorry about that. I'm not sure whether you want to address torch installation issues here as well.

Features of type logical yield unclear importance scores

When an input feature is logical, the resulting importance scores stored in .$fit$importances contain two distinct scores for the TRUE and FALSE level, which seems like a bug or at least unexpected behavior.
Given that this is not the case for other categorical or binary numeric features, I assume it might be the former.

Is it possible that common normalisation/preprocessing steps in the tidymodels framework prevent this from occurring in regular applications?

library(tabnet)

set.seed(2)
# Training data with logical feature --------------------------------------
xdat <- tibble::tibble(
  feat_factor = factor(sample(letters, 100, replace = TRUE)),
  feat_numeric = rnorm(100),
  feat_integer = sample(100, replace = TRUE),
  feat_logical = sample(c(TRUE, FALSE), 100, replace = TRUE),
  target = factor(sample(c("yes", "no"), 100, replace = TRUE))
)

model_fit <- tabnet_fit(target ~ ., data = xdat, epochs = 3)

# Distinct importance scores for TRUE and FALSE seem... odd
model_fit$fit$importances
#> # A tibble: 5 × 2
#>   variables         importance
#>   <chr>                  <dbl>
#> 1 feat_numeric           0.132
#> 2 feat_integer           0.154
#> 3 feat_logicalFALSE      0.309
#> 4 feat_logicalTRUE       0.141
#> 5 feat_factor            0.264

# Recoded to integer ------------------------------------------------------
xdat$feat_logical <- as.integer(xdat$feat_logical)

model_fit2 <- tabnet_fit(target ~ ., data = xdat)

# Importance scores as expected, one per input feature
model_fit2$fit$importances
#> # A tibble: 4 × 2
#>   variables    importance
#>   <chr>             <dbl>
#> 1 feat_numeric     0.0451
#> 2 feat_integer     0.736 
#> 3 feat_logical     0.0947
#> 4 feat_factor      0.125

Created on 2021-12-14 by the reprex package (v2.0.1)

finalize_workflow() on fitted workflow incl. tabnet

Hi there,
I am wondering if this is my fault or if the finalize_workflow() wrapper is simply not adapted to tabnet models. In short, I try to run the following code, which results in an error:

final_fit_tabnet <- 
  workflow(rec_tabnet, spec_tabnet) %>% 
    finalize_workflow(select_best(fit_tabnet, metric = "mae")) %>% 
    last_fit(time_split, metrics = performance_metrics)
Error in update.default(object = list(args = list(epochs = ~3, penalty = ~1e-06, : need an object with call component

fit_tabnet is the result of tune_race_anova(), containing all the tuning results.

Best,
Simon

tabnet_fit: error after training process finishes (cudnn_enabled == TRUE)

Error in (function (input, weight, bias, running_mean, running_var, training, : Expected tensor to have CPU Backend, but got tensor with CUDA Backend (while checking arguments for batch_norm_cpu) Exception raised from checkBackend at ../aten/src/ATen/TensorUtils.cpp:202 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f023e97bb89 in /home/key/libtorch/lib/libc10.so) frame #1: <unknown function> + 0xb516f6 (0x7f0229cbe6f6 in /home/key/libtorch/lib/libtorch_cpu.so) frame #2: at::checkBackend(char const*, c10::ArrayRef<at::Tensor>, c10::Backend) + 0x32 (0x7f0229cbe942 in /home/key/libtorch/lib/libtorch_cpu.so) frame #3: at::native::batch_norm_cpu(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, double, double) + 0x102 (0x7f022a023d72 in /home/key/libtorch/lib/libtorch_cpu.so) frame #4: <unknown function> + 0x1368aef (0x7f022a4d5aef in /home/key/libtorch/lib/libtorch_cpu.so) frame #5: <unkno
19.
stop(structure(list(message = "Expected tensor to have CPU Backend, but got tensor with CUDA Backend (while checking arguments for batch_norm_cpu)\nException raised from checkBackend at ../aten/src/ATen/TensorUtils.cpp:202 (most recent call first):\nframe #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f023e97bb89 in /home/key/libtorch/lib/libc10.so)\nframe #1: <unknown function> + 0xb516f6 (0x7f0229cbe6f6 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #2: at::checkBackend(char const*, c10::ArrayRef<at::Tensor>, c10::Backend) + 0x32 (0x7f0229cbe942 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #3: at::native::batch_norm_cpu(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, double, double) + 0x102 (0x7f022a023d72 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #4: <unknown function> + 0x1368aef (0x7f022a4d5aef in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #5: <unknown function> + 0x1360e98 (0x7f022a4cde98 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #6: <unknown function> + 0x1514469 (0x7f022a681469 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #7: at::native_batch_norm(at::Tensor const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, bool, double, double) + 0x118 (0x7f022a590be8 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #8: <unknown function> + 0x2989eae (0x7f022baf6eae in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #9: <unknown function> + 0x1360e98 (0x7f022a4cde98 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #10: <unknown function> + 0x1514469 (0x7f022a681469 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #11: at::native_batch_norm(at::Tensor const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, bool, double, double) + 0x118 (0x7f022a590be8 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #12: at::native::_batch_norm_impl_index(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, double, double, bool) + 0x32d (0x7f022a0220ad in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #13: <unknown function> + 0x1592958 (0x7f022a6ff958 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #14: <unknown function> + 0x15f558f (0x7f022a76258f in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #15: <unknown function> + 0x15e5f7e (0x7f022a752f7e in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #16: <unknown function> + 0x14e1752 (0x7f022a64e752 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #17: at::_batch_norm_impl_index(at::Tensor const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, bool, double, double, bool) + 0x12d (0x7f022a54fedd in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #18: at::native::batch_norm(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, bool, double, double, bool) + 0x119 (0x7f022a020a69 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #19: <unknown function> + 0x1592818 (0x7f022a6ff818 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #20: <unknown function> + 0x15f540f (0x7f022a76240f in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #21: <unknown function> + 0x15e5f1e (0x7f022a752f1e in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #22: 
<unknown function> + 0x14e0b6f (0x7f022a64db6f in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #23: at::batch_norm(at::Tensor const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, c10::optional<at::Tensor> const&, bool, double, double, bool) + 0x110 (0x7f022a54f850 in /home/key/libtorch/lib/libtorch_cpu.so)\nframe #24: _lantern_batch_norm_tensor_tensor_tensor_tensor_tensor_bool_double_double_bool + 0x277 (0x7f023ef54f94 in /home/key/R/x86_64-redhat-linux-gnu-library/4.0/torch/deps/liblantern.so)\nframe #25: cpp_torch_namespace_batch_norm_input_Tensor_weight_Tensor_bias_Tensor_running_mean_Tensor_running_var_Tensor_training_bool_momentum_double_eps_double_cudnn_enabled_bool(Rcpp::XPtr<XPtrTorchTensor, Rcpp::PreserveStorage, &(void Rcpp::standard_delete_finalizer<XPtrTorchTensor>(XPtrTorchTensor*)), false>, Rcpp::XPtr<XPtrTorchTensor, Rcpp::PreserveStorage, &(void Rcpp::standard_delete_finalizer<XPtrTorchTensor>(XPtrTorchTensor*)), false>, Rcpp::XPtr<XPtrTorchTensor, Rcpp::PreserveStorage, &(void Rcpp::standard_delete_finalizer<XPtrTorchTensor>(XPtrTorchTensor*)), false>, Rcpp::XPtr<XPtrTorchTensor, Rcpp::PreserveStorage, &(void Rcpp::standard_delete_finalizer<XPtrTorchTensor>(XPtrTorchTensor*)), false>, Rcpp::XPtr<XPtrTorchTensor, Rcpp::PreserveStorage, &(void Rcpp::standard_delete_finalizer<XPtrTorchTensor>(XPtrTorchTensor*)), false>, bool, double, double, bool) + 0x172 (0x7f023f85dbc2 in /home/key/R/x86_64-redhat-linux-gnu-library/4.0/torch/libs/torchpkg.so)\nframe #26: _torch_cpp_torch_namespace_batch_norm_input_Tensor_weight_Tensor_bias_Tensor_running_mean_Tensor_running_var_Tensor_training_bool_momentum_double_eps_double_cudnn_enabled_bool + 0x182 (0x7f023f7209a2 in /home/key/R/x86_64-redhat-linux-gnu-library/4.0/torch/libs/torchpkg.so)\nframe #27: <unknown function> + 0xef528 (0x7f02b4f4a528 in /usr/lib64/R/lib/libR.so)\nframe #28: <unknown function> + 0xf0065 (0x7f02b4f4b065 in /usr/lib64/R/lib/libR.so)\nframe #29: <unknown function> + 0x12ee35 (0x7f02b4f89e35 in /usr/lib64/R/lib/libR.so)\nframe #30: Rf_eval + 0x80 (0x7f02b4f75850 in /usr/lib64/R/lib/libR.so)\nframe #31: <unknown function> + 0x119666 (0x7f02b4f74666 in /usr/lib64/R/lib/libR.so)\nframe #32: Rf_applyClosure + 0x268 (0x7f02b4f75568 in /usr/lib64/R/lib/libR.so)\nframe #33: Rf_eval + 0x2a2 (0x7f02b4f75a72 in /usr/lib64/R/lib/libR.so)\nframe #34: <unknown function> + 0xc052f (0x7f02b4f1b52f in /usr/lib64/R/lib/libR.so)\nframe #35: <unknown function> + 0x12ee35 (0x7f02b4f89e35 in /usr/lib64/R/lib/libR.so)\nframe #36: Rf_eval + 0x80 (0x7f02b4f75850 in /usr/lib64/R/lib/libR.so)\nframe #37: <unknown function> + 0x119666 (0x7f02b4f74666 in /usr/lib64/R/lib/libR.so)\nframe #38: Rf_applyClosure + 0x268 (0x7f02b4f75568 in /usr/lib64/R/lib/libR.so)\nframe #39: <unknown function> + 0x13714e (0x7f02b4f9214e in /usr/lib64/R/lib/libR.so)\nframe #40: Rf_eval + 0x80 (0x7f02b4f75850 in /usr/lib64/R/lib/libR.so)\nframe #41: <unknown function> + 0x119666 (0x7f02b4f74666 in /usr/lib64/R/lib/libR.so)\nframe #42: Rf_applyClosure + 0x268 (0x7f02b4f75568 in /usr/lib64/R/lib/libR.so)\nframe #43: <unknown function> + 0x13714e (0x7f02b4f9214e in /usr/lib64/R/lib/libR.so)\nframe #44: Rf_eval + 0x80 (0x7f02b4f75850 in /usr/lib64/R/lib/libR.so)\nframe #45: <unknown function> + 0x119666 (0x7f02b4f74666 in /usr/lib64/R/lib/libR.so)\nframe #46: Rf_applyClosure + 0x268 (0x7f02b4f75568 in /usr/lib64/R/lib/libR.so)\nframe #47: <unknown function> + 0x13714e (0x7f02b4f9214e in 
/usr/lib64/R/lib/libR.so)\nframe #48: Rf_eval + 0x80 (0x7f02b4f75850 in /usr/lib64/R/lib/libR.so)\nframe #49: <unknown function> + 0x119666 (0x7f02b4f74666 in /usr/lib64/R/lib/libR.so)\nframe #50: Rf_applyClosure + 0x268 (0x7f02b4f75568 in /usr/lib64/R/lib/libR.so)\nframe #51: <unknown function> + 0x13714e (0x7f02b4f9214e in /usr/lib64/R/lib/libR.so)\nframe #52: Rf_eval + 0x80 (0x7f02b4f75850 in /usr/lib64/R/lib/libR.so)\nframe #53: <unknown function> + 0x119666 (0x7f02b4f74666 in /usr/lib64/R/lib/libR.so)\nframe #54: Rf_applyClosure + 0x268 (0x7f02b4f75568 in /usr/lib64/R/lib/libR.so)\nframe #55: Rf_eval + 0x2a2 (0x7f02b4f75a72 in /usr/lib64/R/lib/libR.so)\nframe #56: <unknown function> + 0x11d368 (0x7f02b4f78368 in /usr/lib64/R/lib/libR.so)\nframe #57: Rf_eval + 0x575 (0x7f02b4f75d45 in /usr/lib64/R/lib/libR.so)\nframe #58: <unknown function> + 0x119666 (0x7f02b4f74666 in /usr/lib64/R/lib/libR.so)\nframe #59: Rf_applyClosure + 0x268 (0x7f02b4f75568 in /usr/lib64/R/lib/libR.so)\nframe #60: Rf_eval + 0x2a2 (0x7f02b4f75a72 in /usr/lib64/R/lib/libR.so)\nframe #61: <unknown function> + 0x120582 (0x7f02b4f7b582 in /usr/lib64/R/lib/libR.so)\nframe #62: Rf_eval + 0x575 (0x7f02b4f75d45 in /usr/lib64/R/lib/libR.so)\nframe #63: <unknown function> + 0x11d368 (0x7f02b4f78368 in /usr/lib64/R/lib/libR.so)\n", call = (function (input, weight, bias, running_mean, running_var, training, momentum, eps, cudnn_enabled) { ... at RcppExports.R#3213
18.
(function (input, weight, bias, running_mean, running_var, training, momentum, eps, cudnn_enabled) { .Call("_torch_cpp_torch_namespace_batch_norm_input_Tensor_weight_Tensor_bias_Tensor_running_mean_Tensor_running_var_Tensor_training_bool_momentum_double_eps_double_cudnn_enabled_bool", ...
17.
do.call(fun, args) at codegen-utils.R#204
16.
do_call(f, args_t[[1]]) at codegen-utils.R#262
15.
call_c_function(fun_name = "batch_norm", args = args, expected_types = expected_types, nd_args = nd_args, return_types = return_types, fun_type = "namespace") at gen-namespace.R#5323
14.
torch_batch_norm(input = input, weight = weight, bias = bias, running_mean = running_mean, running_var = running_var, training = training, momentum = momentum, eps = eps, cudnn_enabled = backends_cudnn_enabled()) at nnf-batchnorm.R#18
13.
nnf_batch_norm(input, running_mean, running_var, self$weight, self$bias, bn_training, exponential_average_factor, self$eps) at nn-batchnorm.R#106
12.
self$initial_bn(x) at tab-network.R#141
11.
self$tabnet$forward_masks(x) at tab-network.R#219
10.
network$forward_masks(x) at explain.R#64
9.
explain_impl(network, x) at explain.R#87
8.
compute_feature_importance(network, data$x)
7.
eval_tidy(xs[[j]], mask)
6.
tibble_quos(xs[!is.null], .rows, .name_repair)
5.
tibble::tibble(variables = colnames(x), importance = compute_feature_importance(network, data$x)) at model.R#373
4.
tabnet_impl(predictors, outcomes, config = config) at hardhat.R#108
3.
tabnet_bridge(processed, config = config) at hardhat.R#82
2.
tabnet_fit.formula(y ~ ., syn2, epochs = 10, verbose = TRUE) at hardhat.R#51
1.
tabnet_fit(y ~ ., syn2, epochs = 10, verbose = TRUE)

Support for CUDA 11.0?

Hi and thanks for this awesome project.

I'm trying it out on Ubuntu 20.04 with CUDA 11.0 and cuDNN 8. I made sure to install the current dev version of torch, as suggested by this torch comment, and confirmed that my torch installation successfully uses my GPU by running torch::cuda_is_available().

I am attempting to run the below:

library(tabnet)
library(tidymodels)

set.seed(1)

data("lending_club", package="modeldata")

split <- initial_split(lending_club, strata = Class)
train <- training(split)
test  <- testing(split)

rec <- recipe(Class ~ ., train) %>%
  step_normalize(all_numeric())


mod <- tabnet(epochs = 1, batch_size = 128) %>%
  set_engine("torch", verbose = TRUE) %>%
  set_mode("classification")


wf <- workflow() %>%
  add_model(mod) %>%
  add_recipe(rec)


mod_fit <- wf %>%
  fit(train)

I end up with the following error:

Error in (function (weight, indices, padding_idx, scale_grad_by_freq,  : 
  Input, output and indices must be on the current device
Exception raised from index_select_out_cuda at /pytorch/aten/src/ATen/native/cuda/Indexing.cu:819 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f00ee682b89 in /home/chris/R/x86_64-pc-linux-gnu-library/4.0/torch/deps/./libc10.so)
frame #1: at::native::index_select_out_cuda(at::Tensor&, at::Tensor const&, long, at::Tensor const&) + 0x94f (0x7f0087621a5f in /home/chris/R/x86_64-pc-linux-gnu-library/4.0/torch/deps/./libtorch_cuda.so)
frame #2: at::native::index_select_cuda(at::Tensor const&, long, at::Tensor const&) + 0x6e (0x7f0087621dfe in /home/chris/R/x86_64-pc-linux-gnu-library/4.0/torch/deps/./libtorch_cuda.so)
frame #3: <unknown function> + 0x4010d65 (0x7f0088822d65 in /home/chris/R/x86_64-pc-linux-gnu-library/4.0/torch/deps/./libtorch_cuda.so)
frame #4: <unknown function> + 0x1361767 (0x7f00dea2076
Timing stopped at: 15.19 0.749 15.21

Curious if this is a CUDA issue. Have you seen this error before? If so, is there a known remedy?

Thanks for your help here.
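One hedged workaround while the GPU path is debugged: pin the model to the CPU. tabnet_config() exposes a device argument, though whether it is reachable through the parsnip set_engine() interface is an assumption; with the direct fitting interface:

# device = "cpu" is forwarded to tabnet_config() (treat as an assumption
# if your installed version differs).
mod_fit <- tabnet_fit(rec, train, epochs = 1, batch_size = 128,
                      device = "cpu")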

Scaling in the example prediction wrong

When replicating the README example, the prediction results are still scaled/centered, and attempting to use tidy() to un-normalize them throws this error:

No tidy method for objects of class tabnet_fit

Is there a simple way to un-scale the predictions?
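If the recipe normalized the outcome together with the predictors (as step_normalize(all_numeric()) does in a regression setting), a hedged fix is to normalize the predictors only, so predictions come back on the original scale. Sale_Price and train below are illustrative names:

library(recipes)

rec <- recipe(Sale_Price ~ ., data = train) %>%
  step_normalize(all_numeric_predictors())   # leaves the outcome untouched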

Supervised training fails to continue unsupervised training when using `from_epoch=`

Reprex

test_that("Supervised training can continue unsupervised training, with from_epoch=", {

  data("attrition", package = "modeldata")

  x <- attrition[-which(names(attrition) == "Attrition")]
  y <- attrition$Attrition
  pretrain <- tabnet_pretrain(x, y, epoch = 2, checkpoint_epochs = 1)

  expect_error(
    fit <- tabnet_fit(x, y, tabnet_model = pretrain, from_epoch = 1, epoch = 1 ),
    regexp = NA
  )

})

Currently this gives:

Error in (function (self, gradient, retain_graph, create_graph) :
grad can be implicitly created only for scalar outputs
Exception raised from _make_grads at ../torch/csrc/autograd/autograd.cpp:47 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >) + 98 (0x113f241a2 in libc10.dylib)
frame #1: torch::autograd::_make_grads(std::__1::vector<at::Tensor, std::__1::allocatorat::Tensor > const&, std::__1::vector<at::Tensor, std::__1::allocatorat::Tensor > const&) + 2107 (0x11cf307bb in libtorch_cpu.dylib)
frame #2: torch::autograd::backward(std::__1::vector<at::Tensor, std::__1::allocatorat::Tensor > const&, std::__1::vector<at::Tensor, std::_1::allocatorat::Tensor > const&, c10::optional, bool) + 39 (0x11cf318e7 in libtorch_cpu.dylib)
frame #3: c10::impl::detail::WrapFunctionIntoFunctor
<c10::CompileTimeFunctionPointer<void (at::Tensor const&, c10::optionalat::Tensor const&, c10::optional, bool), &(torch::autogra

runtime_error("Indices/Index start at 1 and got a 0.") is misleading when there are missing values in the dataset

Hello,

tabnet, like most deep learning packages, doesn't handle missing values. But currently the error message is very hard to trace back to a missing value in the dataset. Could you please either make tabnet robust to missing values, or make the error message caused by missing values explicit?

Current situation

Fitting or predicting with missing values produces a cryptic error message:

 Error in cpp_Function_apply(torch_variable_list(.env$variables)$ptr, .f_,  : 
  Evaluation error: Indices/Index start at 1 and got a 0.. 

In some initialization cases the training even starts, which makes it still harder to identify missing values as the root cause:

[=============================================================>--------------------------------------------------------------------------------------------] loss= 4.95224571228027
 Error: Indices/Index start at 1 and got a 0.
Run `rlang::last_error()` to see where the error occurred. 
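A hedged sketch of what an explicit guard could look like (the function name and wording are illustrative, not the package's actual internals):

check_no_missing <- function(x) {
  n_missing <- sum(is.na(x))
  if (n_missing > 0) {
    stop(sprintf("Found %d missing value(s) in the predictor data frame; ",
                 n_missing),
         "tabnet does not handle missing values. Impute or drop them ",
         "before calling tabnet_fit() / predict().", call. = FALSE)
  }
  invisible(x)
}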

Reprex

The reprex is not minimal here, sorry for that, but I think it is worth covering all four cases:

test_that("Training set with missing value fails with explicit message", {
  
  library(recipes)
  data("attrition", package = "modeldata")
  rec <- recipe(EnvironmentSatisfaction ~ ., data = attrition) %>%
    step_normalize(all_numeric(), -all_outcomes()) 
  fit <- tabnet_fit(rec, attrition, epochs = 1, valid_split = 0.25,
                    verbose = TRUE)
  # numerical missing
  attrition[1,"Age"] <- NA
  rec <- recipe(EnvironmentSatisfaction ~ ., data = attrition) %>%
    step_normalize(all_numeric(), -all_outcomes()) 
  # fit 
  expect_error(
    miss_fit <- tabnet_fit(rec, attrition, epochs = 1, valid_split = 0.25,
                      verbose = TRUE),
    regexp = "missing"
  )
  # predict
  attrition[["EnvironmentSatisfaction"]] <- NA
  expect_error(
    predict(fit, attrition),
    regexp = "missing"
  )
  # categorical missing
  data("attrition", package = "modeldata")
  attrition[1,"BusinessTravel"] <- NA
  
  rec <- recipe(EnvironmentSatisfaction ~ ., data = attrition) %>%
    step_normalize(all_numeric(), -all_outcomes()) 
  # fit
  expect_error(
    miss_fit <- tabnet_fit(rec, attrition, epochs = 1, valid_split = 0.25,
                      verbose = TRUE),
    regexp = "missing"
  )
  # predict
  attrition[["EnvironmentSatisfaction"]] <- NA
  expect_error(
    predict(fit, attrition),
    regexp = "missing"
  )
  
})

Error: The size of tensor a (10) must match the size of tensor b (8)

Hi,

I am trying to fit a tabnet model to a large dataset using unsupervised followed by supervised training, as in the article. Every now and then, after training the unsupervised model and using it in supervised mode, I get the error below. Sometimes just re-running the exact same code helps, but sometimes it pops up every time. I don't really know how to approach this error. Sorry for not having a reproducible example; the data I am using is a bit sensitive.

Error in (function (self, src, non_blocking) :
The size of tensor a (10) must match the size of tensor b (8) at non-singleton dimension 0
Exception raised from infer_size_impl at ....\aten\src\ATen\ExpandUtils.cpp:28 (most recent call first):
00007FF92EF010D200007FF92EF01070 c10.dll!c10::Error::Error [ @ ]
00007FF92EF00BAE00007FF92EF00B60 c10.dll!c10::detail::torchCheckFail [ @ ]
00007FF8BA78201500007FF8BA781DD0 torch_cpu.dll!at::DynamicLibrary::sym [ @ ]
00007FF8BA78344900007FF8BA783420 torch_cpu.dll!at::infer_size_dimvector [ @ ]
00007FF8BA799F5500007FF8BA799E00 torch_cpu.dll!at::TensorIteratorBase::compute_shape [ @ ]
00007FF8BA79843200007FF8BA7983D0 torch_cpu.dll!at::TensorIteratorBase::build [ @ ]
00007FF8BA74CDE200007FF8BA74CDA0 torch_cpu.dll!at::TensorIteratorConfig::build [ @ ]
00007FF8BA8E744A00007FF8BA8E6C60 torch_cpu.dll!at::native::copy_ [ @ ]
00007FF8BA8E6CB700007FF8BA8E6C60 torch_cpu.dll!at::native::copy_ [ @ ]
00007FF8BB040D9E00007FF8BB040CF0 torch_cpu.dll!at::redispatch::copy_ [ @ ]
00007FF8BCD03C3900007FF8BCD029E0 torch_cpu.dll!torch::autograd::VariableType::allCUDATypes [ @ ]
00007FF8BB040D9E00007FF8BB040CF0 torch_cpu.dll!at::redispatch::copy_ [ @ ]
00007FF8BCD0387B00007FF8BCD029E0 torch_cpu.dll!torch::autograd::VariableType::allCUDATypes [ @ ]
00007FF8BB45C18200007FF8BB45C050 torch_cpu.dll!at::Tensor::copy_ [ @ ]
00007FF9279A2EEF00007FF9279A2E00 lantern.dll!lantern_Tensor_copy__tensor_tensor_bool [ @ ]
0000000065FEE0180000000065FEDFF0 torchpkg.dll!Z45cpp_torch_method_copy__self_Tensor_src_Tensor15XPtrTorchTensorS_b [ @ ]
0000000065EEDC7D0000000065EEDBE0 torchpkg.dll!torch_cpp_torch_method_copy__self_Tensor_src_Tensor [ @ ]
000000006C7A7BAE000000006C79F730 R.dll!Rf_NewFrameConfirm [ @ ]
000000006C7A886D000000006C79F730 R.dll!Rf_NewFrameConfirm [ @ ]
000000006C7ED189000000006C7E5EC0 R.dll!R_initAssignSymbols [ @ ]
000000006C7FCBF1000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FE907000000006C7FE460 R.dll!R_cmpfun1 [ @ ]
000000006C7FFB6A000000006C7FF9B0 R.dll!Rf_applyClosure [ @ ]
000000006C7FCD9C000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7740AD000000006C76D110 R.dll!Rf_coerceVector [ @ ]
000000006C7ED189000000006C7E5EC0 R.dll!R_initAssignSymbols [ @ ]
000000006C7FCBF1000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FE907000000006C7FE460 R.dll!R_cmpfun1 [ @ ]
000000006C7FFB6A000000006C7FF9B0 R.dll!Rf_applyClosure [ @ ]
000000006C7F4F54000000006C7E5EC0 R.dll!R_initAssignSymbols [ @ ]
000000006C7FCBF1000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FE907000000006C7FE460 R.dll!R_cmpfun1 [ @ ]
000000006C7FFB6A000000006C7FF9B0 R.dll!Rf_applyClosure [ @ ]
000000006C7F4F54000000006C7E5EC0 R.dll!R_initAssignSymbols [ @ ]
000000006C7FCBF1000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FE907000000006C7FE460 R.dll!R_cmpfun1 [ @ ]
000000006C7FFB6A000000006C7FF9B0 R.dll!Rf_applyClosure [ @ ]
000000006C7FCD9C000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C8008B7000000006C7FFDD0 R.dll!R_execMethod [ @ ]
000000006C7FCFE5000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FE907000000006C7FE460 R.dll!R_cmpfun1 [ @ ]
000000006C7FFB6A000000006C7FF9B0 R.dll!Rf_applyClosure [ @ ]
000000006C7FCD9C000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C8008B7000000006C7FFDD0 R.dll!R_execMethod [ @ ]
000000006C7FCFE5000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FE907000000006C7FE460 R.dll!R_cmpfun1 [ @ ]
000000006C7FFB6A000000006C7FF9B0 R.dll!Rf_applyClosure [ @ ]
000000006C7FCD9C000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C8008B7000000006C7FFDD0 R.dll!R_execMethod [ @ ]
000000006C7FCFE5000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FD4B9000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FD938000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7F2602000000006C7E5EC0 R.dll!R_initAssignSymbols [ @ ]
000000006C7FCBF1000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FD4B9000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FCF04000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FD4B9000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FD938000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7F2602000000006C7E5EC0 R.dll!R_initAssignSymbols [ @ ]
000000006C7FCBF1000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C7FE907000000006C7FE460 R.dll!R_cmpfun1 [ @ ]
000000006C7FFB6A000000006C7FF9B0 R.dll!Rf_applyClosure [ @ ]
000000006C7FCD9C000000006C7FCA80 R.dll!Rf_eval [ @ ]
000000006C8008B7000000006C7FFDD0 R.dll!R_execMethod [ @ ]
