
Comments (29)

MaximilianPi commented on August 22, 2024

Hi,

I tried to reproduce the problem with the rocker/verse Docker image. The install_sjSDM function doesn't seem to work there, so we have to set up the PyTorch installation manually. For this, I recommend the rocker/ml Docker image instead, because it already ships with a virtual Python environment. Here are the installation steps:

RUN R -e "reticulate::virtualenv_install('reticulate', packages = c('torch', 'torchvision', 'torchaudio'))"
RUN R -e "reticulate::virtualenv_install('reticulate', packages = c('pyro-ppl', 'madgrad', 'torch_optimizer'))"
RUN R -e "remotes::install_github('https://github.com/TheoreticalEcology/s-jSDM', subdir = 'sjSDM')"

We urgently need to work on an automatic installation setup.

adrian-v-m commented on August 22, 2024

Hi Maximilian,

Thanks for your reply. Following these steps, torch and all the other dependencies install successfully on AWS. It would certainly be nice to automate this.

As I'm running my own script, which calls 'sjSDM' with both the 'DNN' and 'linear' functions, in a Docker container on SageMaker, I now get the error Error in py_call_impl(callable, dots$args, dots$keywords) : NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. I can't see how I passed an empty list of Tensors, or whether this is actually the issue.

MaximilianPi commented on August 22, 2024

Can you please provide the corresponding code?

adrian-v-m commented on August 22, 2024

Very sorry for repeatedly adding edited comments (I literally forgot about the 'preview' tab), but it looks like triple backquotes do the job of wrapping the code in a code environment.

This is the full code for scaling the input data, model training (with k-fold cross-validation), prediction, and computing the performance metrics. The generation of the X, Y, and spatial input data and the definition of the hyperparameter tuning grid are done in a previous step, which is not shown in this code. I suspect the error is related to the arguments in the 'sjSDM' function.

for(i in 1:k){ 
  
  env_train <- env_vars[fold_id != i, vars]
  env_test <- env_vars[fold_id == i, vars]
  XY_train <- env_vars[fold_id != i, c("UTM_E", "UTM_N")]
  XY_test <- env_vars[fold_id == i, c("UTM_E", "UTM_N")]
  s_otu_train <- as.matrix(Y[fold_id != i,])
  s_otu_test <- as.matrix(Y[fold_id == i,])
  factV <- colnames(env_train)[sapply(env_train, is.factor)]
  scale_env_train_all = dplyr::select(env_train, -any_of(factV)) %>% scale()
  scale_env_train <- data.frame(scale_env_train_all, env_train[,factV, drop = F])
  dd_env_scaler = data.frame(t(data.frame(env_mean = attr(scale_env_train_all, "scaled:center"), env_sd = attr(scale_env_train_all, "scaled:scale"))))
  # scale the test set with the training-set centre/sd; the row names must match those created above
  scale_env_test = as.data.frame(do.call(rbind, apply(dplyr::select(env_test, -any_of(factV)), 1, function(x){(x - dd_env_scaler['env_mean',]) / dd_env_scaler['env_sd',]}))) %>%
    tibble::add_column(env_test[, factV, drop = F])
  XY_train_scale <- scale(XY_train)
  dd_xy_scaler = data.frame(t(data.frame(sp_mean = attr(XY_train_scale, "scaled:center"), 
                                         sp_sd = attr(XY_train_scale, "scaled:scale"))))
  XY_test_scale <- as.data.frame(do.call(rbind, 
                                         apply(XY_test, 1, function(x){(x - dd_xy_scaler['sp_mean',]) / dd_xy_scaler['sp_sd',]})))
  rm(dd_xy_scaler, dd_env_scaler, scale_env_train_all)
  XY_train_scale <- data.frame(XY_train_scale)
  rm(env_test, env_train, XY_test, XY_train)

  for(j in seq_len(noSteps)){

    cat("\nk", i, "tune run", j)
    tr <- subset(tune_results, k == i)[j,,drop = T]
  
    model_train <- sjSDM(
      
      Y = s_otu_train,
      
      env = DNN(data=scale_env_train, formula = ~.,
                hidden=hidden[[tr$hidden_ind]],
                lambda = tr$lambda_env,
                alpha = tr$alpha_env,
                activation = tr$acti_sp,
                dropout=tr$drop,
                bias=T),
      
      biotic = bioticStruct(lambda=tr$lambda_bio,
                            alpha=tr$alpha_bio, 
                            on_diag=F, inverse = FALSE),
      
      spatial = linear(data=XY_train_scale, ~0+UTM_E*UTM_N, 
                       lambda=tr$lambda_sp, 
                       alpha=tr$alpha_sp),
      
      learning_rate = tr$lr, # part of tuning grid
      iter = iter, 
      family = family, 
      sampling = sampling, # 150L, 5000L
      device = device
    )
    
    saveRDS(model_train,
            file.path(output_path,paste0("s-jSDM_tuning_model_", varsName, "_", k, "CV_", 
                                       i, "_tr_", j, "_", abund, ".rds")))
    
    tune_results$loglike_sjSDM[tune_results$k == i][j] <- logLik(model_train)
    tune_results$loss[tune_results$k == i][j] <- model_train$history[length(model_train$history)]
        
    for (pred in 1:2) {
      
      # pred == 1: evaluate on the test set; pred == 2: evaluate on the training set
      newdd = scale_env_test; newsp = XY_test_scale; otudd = s_otu_test
      
      if (pred==2) { newdd = NULL; newsp = NULL; otudd = s_otu_train }
      
      # average three stochastic prediction runs
      pred_dd = apply(abind::abind(lapply(1:3, function(rep_i) {
        predict(model_train, newdata = newdd, SP = newsp)}
        ), along = -1L), 2:3, mean)
      
      attr(pred_dd, 'dimnames') = NULL
      
      otudd_pa = (otudd>0)*1
      
      auc <- sapply(1:dim(otudd)[2], function(i) {
        
        tryCatch({
          as.numeric(pROC::roc(otudd_pa[,i], pred_dd[,i], direction = "<", quiet=T)$auc)},
          error = function(err){ return(NA)}
        )
      })
      
      rsq = data.frame(ll=rep(.1, length.out=ncol(pred_dd)), 
                       nagel=rep(.1, length.out=ncol(pred_dd)), 
                       plr=rep(.1, length.out=ncol(pred_dd)),
                       tjur = rep(NA, length.out=ncol(pred_dd)),
                       cor = rep(NA, length.out=ncol(pred_dd)))
      
      for (m in 1:ncol(pred_dd)) { 
        
        p = pred_dd[,m]; y = otudd_pa[,m]
        
        loglikP = sum( log( p*y + (1-p)*(1-y) ) )
        loglikN = sum( log( mean(p)*y + (1-mean(p))*(1-y) ) )
        
        rsq$nagel[m] = (1-exp(2/length(p)*(loglikN-loglikP))) / (1-exp(2/length(p)*loglikN))
        rsq$ll[m] = loglikP
        
        tppp = sum(p*y)         # expected true presences
        fppp = sum(p*(1-y))     # expected false presences
        fapp = sum((1-p)*y)     # expected false absences ### Weird behaviour if an fa object is here... check in sjSDM code... 
        tapp = sum((1-p)*(1-y)) # expected true absences
        
        rsq$plr[m] = tppp/(tppp+fapp)/fppp*(fppp+tapp) # positive likelihood ratio; NaN if a species is missing at all sites. OK.
        
        tjur <- base::diff(tapply(p, y, mean, na.rm = T))
        rsq$tjur[m] <- ifelse(length(tjur) > 0, tjur, NA)
        rsq$cor[m] = suppressWarnings(cor(p, y)) # warning - with NaN when no presences in species in test set
        
      }
           
      if (pred==2) {
        tune_results$AUC_train[tune_results$k == i][j] = mean(auc, na.rm = TRUE)
        tune_results$ll_train[tune_results$k == i][j] = mean(rsq$ll, na.rm = T)
        tune_results$nagel_train[tune_results$k == i][j] = mean(rsq$nagel, na.rm = T)
        tune_results$plr_train[tune_results$k == i][j]  = mean(rsq$plr, na.rm = T)
        tune_results$tjur_train[tune_results$k == i][j]  = mean(rsq$tjur, na.rm = T)
        tune_results$cor_train[tune_results$k == i][j]  = mean(rsq$cor, na.rm = T)
        tune_results$auc_lt5_train[tune_results$k == i][j]  = sum(auc < 0.5, na.rm = T)
        }
      
      if (pred==1) {
        tune_results$AUC_test[tune_results$k == i][j] = mean(auc, na.rm = TRUE)
        tune_results$ll_test[tune_results$k == i][j] = mean(rsq$ll, na.rm = T)
        tune_results$nagel_test[tune_results$k == i][j] = mean(rsq$nagel, na.rm = T)
        tune_results$plr_test[tune_results$k == i][j] = mean(rsq$plr, na.rm = T)
        tune_results$tjur_test[tune_results$k == i][j]  = mean(rsq$tjur, na.rm = T)
        tune_results$cor_test[tune_results$k == i][j]  = mean(rsq$cor, na.rm = T)
        tune_results$auc_lt5_test[tune_results$k == i][j]  = sum(auc < 0.5, na.rm = T)
        }      
    } 
    rm(model_train)
    
  }
}

MaximilianPi commented on August 22, 2024

I agree with you that the problem is probably that one of the arguments is passed to sjSDM in the wrong format... but I can't reproduce the error.

Can you debug the code locally? Otherwise, I suggest that you:
i) locate the exact position of the error; it could be in sjSDM(...) or in the predict(...) function
ii) catch the specific parameter combination causing the error

You could place your inner loop into a tryCatch and, if an error occurs, write the error plus information about the parameters and the current step in the loop into a log file:

success = tryCatch({
  
  # your inner loop
  
}, error = function(e) e)

if (inherits(success, "error")) { 
  # log the error plus the current parameter combination and loop step
  sink(file = paste0(round(runif(1, 1e5, 1e6)), ".txt")) 
  print(success)   # the error object
  print(tr)        # the current tuning-grid row (parameters)
  cat("fold", i, "tune run", j, "\n")
  sink()
}

adrian-v-m commented on August 22, 2024

Thanks. I'll try that. By the way, when I run the code on my local machine (using R 4.0.3), I get a similar error:

Error in py_call_impl(callable, dots$args, dots$keywords) : RuntimeError: Arguments for call are not valid.

However, when I run the same code on my other PC (using R 4.1.0), I don't get any error.

In these cases, I run the code using device = "cpu". The R in my Docker container is R 4.0.5, so I'm not sure whether the R version has any influence.

MaximilianPi commented on August 22, 2024

Interesting. Do you also get this error with the minimal example from the sjSDM help?

com = simulate_SDM(env = 3L, species = 7L, sites = 100L)
model = sjSDM(Y = com$response, env = com$env_weights, iter = 2L, device = "cpu") # CPU run
model = sjSDM(Y = com$response, env = com$env_weights, iter = 2L, device = 0)     # GPU run

adrian-v-m commented on August 22, 2024

Yes, I get errors for both runs. For device = "cpu" I get the same error as above, and for device = 0 I get the following error message:

Error in py_call_impl(callable, dots$args, dots$keywords) : 
  AssertionError: Torch not compiled with CUDA enabled

I think it's no surprise I get this last error, because I don't have an NVIDIA GPU. However, when I try to run on my own PC (which has an NVIDIA GPU), I don't get this error (and I also don't get the first error), but I do get an error telling me that my PC's NVIDIA driver is too old.
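
As a side note, a quick way to check from R whether the installed PyTorch build can see a GPU is a reticulate sketch like this (assuming torch imports in the active environment):

library(reticulate)
torch <- import("torch")
torch$cuda$is_available()  # returns FALSE for a CPU-only build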

MaximilianPi commented on August 22, 2024

Ok, could you pls post the output of install_diagnostic() from the machine with errors here?

adrian-v-m commented on August 22, 2024

Sure.

> install_diagnostic()


          name                                                                    python
1 r-reticulate C:\\Users\\Adrian Victor-Matei\\.conda\\envs\\r-reticulate\\python.exe



ENV: r-reticulate


torch:
# packages in environment at C:\Users\Adrian Victor-Matei\.conda\envs\r-reticulate:
#
# Name                    Version                   Build  Channel
pytorch                   1.9.0               py3.6_cpu_0  [cpuonly]  pytorch
pytorch-ranger            0.1.1                    pypi_0    pypi
torch-optimizer           0.1.0                    pypi_0    pypi
torchaudio                0.9.0                      py36    pytorch
torchvision               0.10.0                 py36_cpu  [cpuonly]  pytorch


numpy:
# packages in environment at C:\Users\Adrian Victor-Matei\.conda\envs\r-reticulate:
#
# Name                    Version                   Build  Channel
numpy                     1.19.5           py36hd1b969e_1    conda-forge


python:         C:/Users/Adrian Victor-Matei/.conda/envs/r-reticulate/python.exe
libpython:      C:/Users/Adrian Victor-Matei/.conda/envs/r-reticulate/python36.dll
pythonhome:     C:/Users/Adrian Victor-Matei/.conda/envs/r-reticulate
version:        3.6.13 (default, Feb 19 2021, 05:17:09) [MSC v.1916 64 bit (AMD64)]
Architecture:   64bit
numpy:          C:/Users/Adrian Victor-Matei/.conda/envs/r-reticulate/Lib/site-packages/numpy
numpy_version:  1.19.5
torch:          C:\Users\ADRIAN~1\CONDA~1\envs\R-RETI~1\lib\site-packages\torch\__init__.p

python versions found: 
 C:/Users/Adrian Victor-Matei/.conda/envs/r-reticulate/python.exe
 C:/Users/Adrian Victor-Matei/AppData/Local/Programs/Python/Python39/python.exe
 C:/ProgramData/Anaconda3/python.exe




     active environment : None
       user config file : C:\Users\Adrian Victor-Matei\.condarc
 populated config files : C:\Users\Adrian Victor-Matei\.condarc
          conda version : 4.8.3
    conda-build version : 3.18.11
         python version : 3.8.3.final.0
       virtual packages : 
       base environment : C:\PROGRA~3\ANACON~1  (read only)
           channel URLs : https://repo.anaconda.com/pkgs/main/win-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/win-64
                          https://repo.anaconda.com/pkgs/r/noarch
                          https://repo.anaconda.com/pkgs/msys2/win-64
                          https://repo.anaconda.com/pkgs/msys2/noarch
          package cache : C:\PROGRA~3\ANACON~1\pkgs
                          C:\Users\Adrian Victor-Matei\.conda\pkgs
                          C:\Users\Adrian Victor-Matei\AppData\Local\conda\conda\pkgs
       envs directories : C:\Users\Adrian Victor-Matei\.conda\envs
                          C:\PROGRA~3\ANACON~1\envs
                          C:\Users\Adrian Victor-Matei\AppData\Local\conda\conda\envs
               platform : win-64
             user-agent : conda/4.8.3 requests/2.24.0 CPython/3.8.3 Windows/10 Windows/10.0.19041
          administrator : False
             netrc file : None
           offline mode : False

MaximilianPi commented on August 22, 2024

I tested it on a Windows machine with R 4.0.3 and it works; here is the install diagnostic:

          name
1 r-reticulate
                                                                                 python
1 C:\\Users\\Administrator\\AppData\\Local\\r-miniconda\\envs\\r-reticulate\\python.exe



ENV: r-reticulate


torch:
#
# Name                    Version                   Build  Channel
pytorch                   1.9.0               py3.9_cpu_0  [cpuonly]  pytorch
pytorch-ranger            0.1.1                    pypi_0    pypi
torch-optimizer           0.1.0                    pypi_0    pypi
torchaudio                0.9.0                      py39    pytorch
torchvision               0.10.0                 py39_cpu  [cpuonly]  pytorch


numpy:
# packages in environment at C:\Users\ADMINI~1\AppData\Local\R-MINI~1\envs\r-reticulate:
#
# Name                    Version                   Build  Channel
numpy                     1.20.2           py39ha4e8547_0  
numpy-base                1.20.2           py39hc2deb75_0  




     active environment : None
 populated config files : 
          conda version : 4.9.2
    conda-build version : not installed
         python version : 3.8.5.final.0
       virtual packages : __win=0=0
                          __archspec=1=x86_64
       base environment : C:\Users\ADMINI~1\AppData\Local\R-MINI~1  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/win-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/win-64
                          https://repo.anaconda.com/pkgs/r/noarch
                          https://repo.anaconda.com/pkgs/msys2/win-64
                          https://repo.anaconda.com/pkgs/msys2/noarch
 
               platform : win-64
             user-agent : conda/4.9.2 requests/2.25.1 CPython/3.8.5 Windows/10 Windows/10.0.14393
          administrator : True
             netrc file : None
           offline mode : False

The only suggestion I have at this point is to update your conda to 4.9.2, remove the r-reticulate environment via reticulate::conda_remove('r-reticulate'), and rerun the install_sjSDM() function.

But then we still have the error from your Docker; did you try to run the minimal example there?

adrian-v-m commented on August 22, 2024

I updated conda to the latest version (4.10.3), removed the r-reticulate environment, restarted R, and reran install_sjSDM, but I still get the same error. Also, I have conda 4.10.1 on my other machine, where I don't get the error.

Regarding Docker on AWS, I have good news: the minimal example works using "device = 0" (i.e. on the GPU). So, for the script I'm trying to run, the problem is probably the way the inputs for the sjSDM function are formatted. But I'm not sure in what way the formatting is wrong.

MaximilianPi commented on August 22, 2024

Hm, if the data isn't too sensitive, you could send me your project folder with data and code via email so that I can take a look at it.

adrian-v-m commented on August 22, 2024

On reflection, though, I get the same error even with the minimal example on my work machine, so it may not really be related to how the inputs are formatted. And, in general, the script works in some other environments, so sjSDM may not be stable enough across environments and package versions. What do you think?

Regarding sharing the data and code, this is not my script. I can ask the person who wrote the code and sent me the data.

MaximilianPi commented on August 22, 2024

Well, I am not sure; I am running sjSDM on three different machines with three different R versions (1x macOS, 2x Linux), with 2x CPU and 1x GPU support. I think we have to differentiate here between your workstation and Docker:

  1. Windows-Workstation:
    Error: Error in py_call_impl(callable, dots$args, dots$keywords) : RuntimeError: Arguments for call are not valid.
    I have no idea what is causing this error... I will explore this on the Windows machine.

  2. Docker:
    Error: Error in py_call_impl(callable, dots$args, dots$keywords) : NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat.
    This error is thrown by PyTorch, so here I am relatively sure that it is related to the arguments, but I will now run the sjSDM tests in the rocker/ml Docker to check this.

Anyway, I have to implement proper type checking for each function.
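
As a sketch of what such checks could look like (the checkmate-style assertion errors quoted later in this thread suggest the package uses checkmate, but these exact calls are my assumption; Y and env stand for the model inputs):

library(checkmate)
# validate the main inputs before they are handed over to Python
assertMatrix(Y, mode = "numeric", any.missing = FALSE)
assert(checkDataFrame(env), checkMatrix(env))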

adrian-v-m commented on August 22, 2024

I'm curious to see the outcome of your Docker tests.

Also, AWS SageMaker has its own built-in PyTorch estimator for model training, which also offers model parallelisation, so the instance's GPU resources would be used far more efficiently. I would be interested to know whether I could use the SageMaker PyTorch estimator for sjSDM without any conflicts, since sjSDM uses PyTorch in the background.

MaximilianPi commented on August 22, 2024

I added argument checks to the package, so maybe try it again with the latest version to see if we get informative error messages now (the tests are running successfully on the Docker).

adrian-v-m commented on August 22, 2024

I've been on holiday for the last 2 weeks, so I've only now retried sjSDM on AWS. As I mentioned before, it works with a minimal example:

library(sjSDM)

com = simulate_SDM(env = 3L, species = 7L, sites = 100L)
model = sjSDM(Y = com$response, env = com$env_weights, iter = 2L, device = "cpu")
print("Model done")

However, when I add some custom code that performs some pre-modelling data processing, I get an algorithm error:

Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : 
  contrasts can be applied only to factors with 2 or more levels

Here's the custom code:

library(sjSDM)

library(dplyr)
library(readr)
library(readxl)
library(tidyr)

# Setup parameters
# Container directories
prefix <- '/opt/ml'
input_path <- paste(prefix, 'processing/input/', sep='/')

## Updated to new vars, also changes to elevation_m, canopy_height_m  to _f. 

# # model settings:
abund <- "pa"
device <- "gpu"


### 1. Get data from github #####

samtoolsfilter <- "F2308" # F2308 filter only
samtoolsqual <- "q48"
minimaprundate <- 20200929
kelpierundate <- 20200927
primer <- "BF3BR2"

gitHub <- "https://raw.githubusercontent.com/dougwyu/HJA_analyses_Kelpie/master/Kelpie_maps"

outputidxstatstabulatefolder <- paste0("outputs_minimap2_",
                                       minimaprundate,"_",
                                       samtoolsfilter,"_", 
                                       samtoolsqual, 
                                       "_kelpie", 
                                       kelpierundate,
                                       "_",
                                       primer,
                                       "_vsearch97")

datFile <- paste0("sample_by_species_table_", 
                  samtoolsfilter, 
                  "_minimap2_",
                  minimaprundate,
                  "_kelpie",
                  kelpierundate,
                  "_FSL_qp.csv")

# file path:
fn <- file.path(gitHub, outputidxstatstabulatefolder, datFile)

# read complete data set
otuenv <- read.csv(fn, stringsAsFactors = FALSE, na.strings = "NA")

# keep OTUs with >= minocc incidences AND with presence at M1 (or at both M1 and M2, see below)
minocc <- 6 # set to high number (e.g. 20) for testing

## get Species columns by M1 and M2, with minocc calculated per trap
## can choose below whether to include just species shared between M1 and M2
spM <- otuenv %>% 
  dplyr::filter(period == "S1") %>%
  dplyr::select(SiteName, trap, contains("__")) %>%
  tidyr::pivot_longer(cols = contains("__"), names_to = "OTU", values_drop_na = FALSE) %>%
  mutate(value = value>0) %>% # change to PA
  group_by(OTU, trap) %>%
  summarise(nSites = sum(value, na.rm = T)) %>% # Number of sites at which present
  filter(nSites >= minocc) %>% # filter by minocc
  ungroup() %>%
  tidyr::pivot_wider(names_from = trap, values_from = nSites, values_fn = function(x) sum(x)>0) %>%
  filter(M1) %>% # CHOOSE HERE FOR SINGLE- OR SHARED-TRAP SPECIES GROUP: filter(M1 & M2)
  tidyr::separate(col = OTU, into = c("ID", "empty", "class", "order", "family",
                                          "genus", "epithet", "BOLD", "BOLDID", "size"),
                  remove = FALSE, sep = "_") %>%
  dplyr::select(OTU)


# filter species here to those in spM$OTU - already filtered for minocc
otu_qp_csv <- otuenv %>% 
  dplyr::filter(period == "S1" & trap == "M1") %>%
  dplyr::select(spM$OTU) ## 

# convert to presence/absence data
otu_pa_csv <- otu_qp_csv
otu_pa_csv[otu_pa_csv > 0] <- 1
min(colSums(otu_pa_csv)) >= minocc # should be TRUE

# clean up
rm(datFile, gitHub, kelpierundate, minimaprundate, outputidxstatstabulatefolder, primer, samtoolsfilter, samtoolsqual, fn, spM)

# remove OTUs, XY, and normalised NDVI and EVI
# average, optionally log, select, and scale env covariates

## Load new version of vars
load(paste(input_path, "envVars.rdata", sep="/"))


env_vars <- otuenv %>% 
  dplyr::filter(period == "S1" & trap == "M1") %>%
  dplyr::select(trap, period, UTM_E, UTM_N, SiteName) %>%
  mutate(uniqueID = paste(SiteName, trap, period, sep = "_")) %>%
  left_join(y = allVars, by = "SiteName") %>%
  mutate(lg_DistStream = log(DistStream + 0.001),
         lg_DistRoad = log(DistRoad + 0.001),
         lg_cover2m_max = log(l_Cover_2m_max + 0.001),
         lg_cover2m_4m = log(l_Cover_2m_4m + 0.001),
         lg_cover4m_16m = log(l_Cover_4m_16m + 0.001))


varsName <- "vars8" # be30 replaces minT
vars <- c("Ess30", "DistRoad", "Nss30", "DistStream", "tri30", "LC08_045029_20180726_B5", "l_rumple", "cut_40msk", "insideHJA", "cut_msk", "cut40_r1k", "l_Cover_2m_4m", "l_Cover_4m_16m", "cut_r1k", "be30", "ndmi_p95_r100", "ndvi_p50_r500", "cut40_r250", "nbr_stdDev_r250", "precipitation_mm", "gt4_250", "tpi250", "cut_r250", "LC08_045029_20180726_B1", "ndvi_p50_r100", "gt4_r30", "twi30", "ndvi_p5_r100", "tpi1k", "ndvi_p5_r500", "l_p25", "LC08_045029_20180726_B10")

if(abund == "pa") {
  Y <- otu_pa_csv
  family <- stats::binomial('probit') } else {
    if(abund ==  "qp") {
      Y <- otu_qp_csv
      family <- stats::poisson('log') # check other family?
    } else stop("check abund")
  } 
                    
model = sjSDM(Y = Y, env = env_vars, iter = 2L, device = device)

print("Model done")  

This error presumably emerges from the format of the input data passed to the sjSDM function, but I'm not sure why it doesn't like the format. You will need the "envVars.rdata" data to replicate the error, so I can send it to you.

MaximilianPi commented on August 22, 2024

Ok, this error is related to the number of levels in a factor variable; it means that you have a categorical variable with only one level (see also https://stackoverflow.com/questions/44200195/how-to-debug-contrasts-can-be-applied-only-to-factors-with-2-or-more-levels-er). Can you post the output of:

summary(env_vars)

Otherwise, send me the data via email and I will personally look into it ([email protected])
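
A quick way to spot (and drop) such constant columns beforehand, as a sketch (assuming your env_vars from above):

# count distinct non-NA values per column; columns with 1 are the culprits
n_distinct <- sapply(env_vars, function(x) length(unique(na.omit(x))))
names(n_distinct)[n_distinct < 2]
env_vars <- env_vars[, n_distinct > 1, drop = FALSE]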

adrian-v-m commented on August 22, 2024

Yes, it's clearer now. Thanks for pointing me to that post. It looks like the issue is with two of the variables from env_vars, which, when converted into factors, have only a single level each. This can be checked with env_vars %>% mutate_all(as.factor) %>% str, giving:

'data.frame':	88 obs. of  77 variables:
 $ trap                    : Factor w/ 1 level "M1": 1 1 1 1 1 1 1 1 1 1 ...
 $ period                  : Factor w/ 1 level "S1": 1 1 1 1 1 1 1 1 1 1 ...
 $ UTM_E                   : Factor w/ 88 levels "555122","555665",..: 16 77 15 84 36 68 34 31 24 20 ...
 $ UTM_N                   : Factor w/ 87 levels "4891311","4891351",..: 87 79 86 84 42 82 52 39 75 31 ...
 $ SiteName                : Factor w/ 88 levels "076361","081090",..: 1 2 3 4 5 6 7 8 9 10 ...
 $ uniqueID                : Factor w/ 88 levels "076361_M1_S1",..: 1 2 3 4 5 6 7 8 9 10 ...
 $ ht30                    : Factor w/ 88 levels "2.72500157356262",..: 2 18 8 82 22 45 84 68 5 46 ...
 $ gt4_r30                 : Factor w/ 61 levels "0.0922222211956978",..: 3 47 4 28 55 54 45 50 11 61 ...
 $ gt4_250                 : Factor w/ 88 levels "0.58316308259964",..: 1 58 12 2 25 23 61 77 13 57 ...
 $ gt4_500                 : Factor w/ 88 levels "0.647498786449432",..: 1 69 7 3 33 40 53 72 30 42 ...
 $ cut_r                   : Factor w/ 26 levels "1940","1945",..: 15 22 NA NA 16 12 NA NA 21 9 ...
 $ cut_msk                 : Factor w/ 2 levels "0","1": 2 2 1 1 2 2 1 1 2 2 ...
 $ cut_40msk               : Factor w/ 2 levels "0","1": 1 2 1 1 1 1 1 1 2 1 ...
 $ cut_r1k                 : Factor w/ 86 levels "0.0462196879088879",..: 5 37 9 80 72 64 67 52 19 74 ...
 $ cut_r500                : Factor w/ 82 levels "0","0.0216647665947676",..: 8 59 2 50 65 68 31 45 14 69 ...
 $ cut_r250                : Factor w/ 64 levels "0","0.00452488707378507",..: 8 55 1 12 40 55 27 26 42 54 ...
 $ cut40_r1k               : Factor w/ 74 levels "0","0.00171184027567506",..: 22 33 40 48 52 41 43 64 46 10 ...
 $ cut40_r500              : Factor w/ 56 levels "0","0.00114025082439184",..: 15 40 6 22 45 38 21 25 29 1 ...
 $ cut40_r250              : Factor w/ 34 levels "0","0.00904977414757013",..: 1 25 1 5 13 27 1 1 26 1 ...
 $ be30                    : Factor w/ 88 levels "401.118469238281",..: 84 64 80 67 41 49 33 36 65 21 ...
 $ tri30                   : Factor w/ 88 levels "1.98985290527344",..: 30 66 8 19 63 28 74 54 60 82 ...
 $ slope30                 : Factor w/ 88 levels "2.40012121200562",..: 35 68 8 20 62 28 71 56 60 85 ...
 $ aspect30                : Factor w/ 88 levels "0.185087159276009",..: 40 83 26 37 72 61 65 25 7 74 ...
 $ Nss30                   : Factor w/ 88 levels "-0.999938488006592",..: 1 78 25 6 65 45 50 30 67 69 ...
 $ Ess30                   : Factor w/ 88 levels "-0.999811947345734",..: 50 34 68 53 14 6 3 69 73 19 ...
 $ twi30                   : Factor w/ 88 levels "3.67923450469971",..: 76 54 77 86 10 79 2 28 5 7 ...
 $ tpi250                  : Factor w/ 88 levels "-38.688777923584",..: 8 38 30 18 70 35 73 69 60 83 ...
 $ tpi500                  : Factor w/ 88 levels "-78.395378112793",..: 13 40 22 10 78 30 84 81 69 87 ...
 $ tpi1k                   : Factor w/ 88 levels "-151.955001831055",..: 21 14 47 9 79 23 77 84 61 80 ...
 $ l_Cover_2m_4m           : Factor w/ 88 levels "0.001917066401802",..: 80 65 84 52 60 22 40 8 78 31 ...
 $ l_Cover_2m_max          : Factor w/ 88 levels "0.164202883839607",..: 3 27 6 31 60 79 34 77 5 84 ...
 $ l_Cover_4m_16m          : Factor w/ 87 levels "0","0.010884465649724",..: 65 86 43 37 81 20 42 24 61 40 ...
 $ l_p25                   : Factor w/ 88 levels "2.65000009536743",..: 6 15 11 59 22 66 72 84 5 49 ...
 $ l_p95                   : Factor w/ 88 levels "9.43149948120117",..: 4 5 41 83 12 35 64 69 3 28 ...
 $ l_rumple                : Factor w/ 88 levels "2.32601499557495",..: 7 14 42 87 12 23 61 53 25 13 ...
 $ DistStream              : Factor w/ 88 levels "2.31222009658813",..: 69 37 64 33 79 26 53 67 52 42 ...
 $ DistRoad                : Factor w/ 88 levels "3.33125972747803",..: 23 51 22 26 76 63 33 34 11 59 ...
 $ insideHJA               : Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 2 1 1 ...
 $ ndmi_stdDev_r100        : Factor w/ 88 levels "0.0259699020534754",..: 81 57 82 64 78 84 31 18 72 86 ...
 $ ndmi_stdDev_r250        : Factor w/ 88 levels "0.0288740117102861",..: 82 41 83 79 70 72 51 64 71 84 ...
 $ ndmi_stdDev_r500        : Factor w/ 88 levels "0.0314815156161785",..: 82 40 77 83 58 74 69 62 48 78 ...
 $ nbr_stdDev_r100         : Factor w/ 88 levels "0.0172927919775248",..: 80 35 79 55 87 74 15 28 70 83 ...
 $ nbr_stdDev_r250         : Factor w/ 88 levels "0.020262848585844",..: 82 15 79 83 80 63 36 67 72 86 ...
 $ nbr_stdDev_r500         : Factor w/ 88 levels "0.0231596957892179",..: 77 11 75 85 68 63 58 70 29 87 ...
 $ ndvi_p5_r100            : Factor w/ 88 levels "0.367758303880692",..: 2 85 20 38 5 25 76 74 34 13 ...
 $ ndvi_p5_r250            : Factor w/ 88 levels "0.545559287071228",..: 7 88 22 8 17 52 64 37 44 13 ...
 $ ndvi_p5_r500            : Factor w/ 88 levels "0.569515287876129",..: 5 87 34 15 40 61 49 29 76 8 ...
 $ ndvi_p50_r100           : Factor w/ 88 levels "0.6645827293396",..: 20 87 80 27 72 38 83 66 16 3 ...
 $ ndvi_p50_r250           : Factor w/ 88 levels "0.780440330505371",..: 8 88 78 4 48 31 61 50 32 3 ...
 $ ndvi_p50_r500           : Factor w/ 88 levels "0.792116701602936",..: 5 85 71 6 54 29 18 64 63 8 ...
 $ ndvi_p95_r100           : Factor w/ 88 levels "0.735673010349274",..: 20 80 70 16 72 32 78 43 12 3 ...
 $ ndvi_p95_r250           : Factor w/ 88 levels "0.845906734466553",..: 6 88 65 4 60 23 73 38 25 5 ...
 $ ndvi_p95_r500           : Factor w/ 88 levels "0.842167496681213",..: 2 81 42 4 54 28 50 51 31 10 ...
 $ ndmi_p5_r100            : Factor w/ 88 levels "0.0934058278799057",..: 5 88 15 29 11 22 82 69 19 3 ...
 $ ndmi_p5_r250            : Factor w/ 88 levels "0.216926902532578",..: 2 88 12 7 16 29 69 41 20 5 ...
 $ ndmi_p5_r500            : Factor w/ 88 levels "0.25164532661438",..: 2 88 27 3 46 30 32 49 65 5 ...
 $ ndmi_p50_r100           : Factor w/ 88 levels "0.195089891552925",..: 12 88 36 34 71 46 81 62 22 2 ...
 $ ndmi_p50_r250           : Factor w/ 88 levels "0.391383528709412",..: 4 88 44 8 60 43 71 57 33 5 ...
 $ ndmi_p50_r500           : Factor w/ 88 levels "0.420779198408127",..: 2 86 57 7 59 38 28 64 58 9 ...
 $ ndmi_p95_r100           : Factor w/ 88 levels "0.29041263461113",..: 11 86 46 41 62 85 75 43 53 14 ...
 $ ndmi_p95_r250           : Factor w/ 88 levels "0.457065671682358",..: 4 88 59 22 58 71 77 55 54 30 ...
 $ ndmi_p95_r500           : Factor w/ 88 levels "0.513863265514374",..: 15 85 73 29 63 66 53 62 56 22 ...
 $ LC08_045029_20180726_B1 : Factor w/ 66 levels "75","88","90",..: 56 7 22 13 36 25 33 39 41 63 ...
 $ LC08_045029_20180726_B3 : Factor w/ 68 levels "193","235","241",..: 60 10 36 31 35 41 27 38 50 61 ...
 $ LC08_045029_20180726_B4 : Factor w/ 67 levels "118","128","134",..: 58 4 40 43 27 52 17 30 53 63 ...
 $ LC08_045029_20180726_B5 : Factor w/ 85 levels "1479","1659",..: 80 33 51 29 59 30 27 43 67 13 ...
 $ LC08_045029_20180726_B7 : Factor w/ 77 levels "228","234","258",..: 65 1 59 47 33 61 9 31 48 76 ...
 $ LC08_045029_20180726_B10: Factor w/ 41 levels "2978","2979",..: 15 10 6 21 21 31 22 26 36 37 ...
 $ maxT_annual             : Factor w/ 88 levels "12.2527589797974",..: 2 16 7 13 61 29 67 71 25 78 ...
 $ meanT_annual            : Factor w/ 88 levels "7.13777589797974",..: 2 16 7 9 64 27 55 67 26 77 ...
 $ minT_annual             : Factor w/ 88 levels "2.01560258865356",..: 2 16 10 5 64 28 33 45 36 62 ...
 $ precipitation_mm        : Factor w/ 88 levels "1818.08447265625",..: 83 59 77 74 25 58 17 22 56 26 ...
 $ lg_DistStream           : Factor w/ 88 levels "0.838640535050862",..: 69 37 64 33 79 26 53 67 52 42 ...
 $ lg_DistRoad             : Factor w/ 88 levels "1.20365067068988",..: 23 51 22 26 76 63 33 34 11 59 ...
 $ lg_cover2m_max          : Factor w/ 88 levels "-1.80058096139141",..: 3 27 6 31 60 79 34 77 5 84 ...
 $ lg_cover2m_4m           : Factor w/ 88 levels "-5.8371768246251",..: 80 65 84 52 60 22 40 8 78 31 ...
 $ lg_cover4m_16m          : Factor w/ 87 levels "-6.90775527898214",..: 65 86 43 37 81 20 42 24 61 40 ...

FYI, summary(env_vars) gives:

     trap              period              UTM_E            UTM_N           SiteName        
 Length:88          Length:88          Min.   :555122   Min.   :4891311   Length:88         
 Class :character   Class :character   1st Qu.:560464   1st Qu.:4895960   Class :character  
 Mode  :character   Mode  :character   Median :562765   Median :4898970   Mode  :character  
                                       Mean   :563148   Mean   :4899062                     
                                       3rd Qu.:566056   3rd Qu.:4902208                     
                                       Max.   :570775   Max.   :4908110                     
                                                                                            
   uniqueID              ht30           gt4_r30           gt4_250          gt4_500      
 Length:88          Min.   : 2.725   Min.   :0.09222   Min.   :0.5832   Min.   :0.6475  
 Class :character   1st Qu.:14.633   1st Qu.:0.88250   1st Qu.:0.8515   1st Qu.:0.8686  
 Mode  :character   Median :23.682   Median :0.96944   Median :0.9227   Median :0.9281  
                    Mean   :23.256   Mean   :0.88415   Mean   :0.8898   Mean   :0.8987  
                    3rd Qu.:31.548   3rd Qu.:0.99472   3rd Qu.:0.9545   3rd Qu.:0.9463  
                    Max.   :44.369   Max.   :1.00000   Max.   :0.9869   Max.   :0.9726  
                                                                                        
     cut_r         cut_msk         cut_40msk        cut_r1k           cut_r500         cut_r250     
 Min.   :1940   Min.   :0.0000   Min.   :0.000   Min.   :0.04622   Min.   :0.0000   Min.   :0.0000  
 1st Qu.:1960   1st Qu.:0.0000   1st Qu.:0.000   1st Qu.:0.24130   1st Qu.:0.2571   1st Qu.:0.2127  
 Median :1964   Median :0.0000   Median :0.000   Median :0.34223   Median :0.3677   Median :0.4231  
 Mean   :1970   Mean   :0.4773   Mean   :0.125   Mean   :0.34827   Mean   :0.3877   Mean   :0.4341  
 3rd Qu.:1982   3rd Qu.:1.0000   3rd Qu.:0.000   3rd Qu.:0.43787   3rd Qu.:0.5134   3rd Qu.:0.6391  
 Max.   :2006   Max.   :1.0000   Max.   :1.000   Max.   :0.77404   Max.   :0.9133   Max.   :0.9864  
 NA's   :46                                                                                         
   cut40_r1k         cut40_r500        cut40_r250          be30            tri30       
 Min.   :0.00000   Min.   :0.00000   Min.   :0.0000   Min.   : 401.1   Min.   : 1.990  
 1st Qu.:0.01427   1st Qu.:0.00000   1st Qu.:0.0000   1st Qu.: 683.3   1st Qu.: 4.853  
 Median :0.05934   Median :0.04105   Median :0.0000   Median : 836.7   Median : 7.700  
 Mean   :0.07257   Mean   :0.07910   Mean   :0.1021   Mean   : 858.5   Mean   : 7.939  
 3rd Qu.:0.10571   3rd Qu.:0.13312   3rd Qu.:0.1663   3rd Qu.:1001.4   3rd Qu.:11.145  
 Max.   :0.32126   Max.   :0.47548   Max.   :0.8688   Max.   :1549.5   Max.   :15.612  
                                                                                       
    slope30         aspect30            Nss30             Ess30              twi30       
 Min.   : 2.40   Min.   :  0.1851   Min.   :-0.9999   Min.   :-0.99981   Min.   : 3.679  
 1st Qu.:11.08   1st Qu.:136.1890   1st Qu.:-0.8304   1st Qu.:-0.64294   1st Qu.: 4.995  
 Median :16.94   Median :189.3146   Median :-0.2524   Median :-0.13069   Median : 6.402  
 Mean   :17.80   Mean   :195.6734   Mean   :-0.1368   Mean   :-0.08059   Mean   : 6.418  
 3rd Qu.:23.79   3rd Qu.:274.7807   3rd Qu.: 0.6197   3rd Qu.: 0.52357   3rd Qu.: 7.733  
 Max.   :32.39   Max.   :352.3589   Max.   : 1.0000   Max.   : 0.99597   Max.   :11.405  
                                                                                         
     tpi250            tpi500              tpi1k          l_Cover_2m_4m      l_Cover_2m_max  
 Min.   :-38.689   Min.   :-78.39538   Min.   :-151.955   Min.   :0.001917   Min.   :0.1642  
 1st Qu.: -9.983   1st Qu.:-19.66711   1st Qu.: -40.212   1st Qu.:0.008706   1st Qu.:0.8652  
 Median :  2.285   Median :  0.09749   Median : -12.012   Median :0.020884   Median :0.9547  
 Mean   :  4.437   Mean   :  4.51255   Mean   :   1.365   Mean   :0.042823   Mean   :0.8864  
 3rd Qu.: 17.064   3rd Qu.: 17.03330   3rd Qu.:  36.597   3rd Qu.:0.054528   3rd Qu.:0.9831  
 Max.   : 64.136   Max.   :115.74850   Max.   : 188.053   Max.   :0.265305   Max.   :0.9982  
                                                                                             
 l_Cover_4m_16m        l_p25            l_p95           l_rumple        DistStream      
 Min.   :0.00000   Min.   : 2.650   Min.   : 9.431   Min.   : 2.326   Min.   :   2.312  
 1st Qu.:0.08073   1st Qu.: 9.684   1st Qu.:25.539   1st Qu.: 3.902   1st Qu.: 224.706  
 Median :0.17857   Median :16.847   Median :35.132   Median : 4.825   Median : 421.478  
 Mean   :0.23589   Mean   :16.794   Mean   :37.429   Mean   : 5.425   Mean   : 500.608  
 3rd Qu.:0.32729   3rd Qu.:22.359   3rd Qu.:48.512   3rd Qu.: 6.599   3rd Qu.: 770.294  
 Max.   :0.90916   Max.   :31.769   Max.   :71.091   Max.   :11.320   Max.   :1214.661  
                                                                                        
    DistRoad       insideHJA ndmi_stdDev_r100  ndmi_stdDev_r250  ndmi_stdDev_r500  nbr_stdDev_r100  
 Min.   :  3.331   no :52    Min.   :0.02597   Min.   :0.02887   Min.   :0.03148   Min.   :0.01729  
 1st Qu.: 25.312   yes:36    1st Qu.:0.04042   1st Qu.:0.04079   1st Qu.:0.04337   1st Qu.:0.03106  
 Median : 43.716             Median :0.04677   Median :0.04855   Median :0.04742   Median :0.03830  
 Mean   : 69.529             Mean   :0.05338   Mean   :0.05144   Mean   :0.05043   Mean   :0.04539  
 3rd Qu.: 61.822             3rd Qu.:0.06329   3rd Qu.:0.05782   3rd Qu.:0.05567   3rd Qu.:0.05497  
 Max.   :797.951             Max.   :0.14462   Max.   :0.09104   Max.   :0.08925   Max.   :0.12234  
                                                                                                    
 nbr_stdDev_r250   nbr_stdDev_r500    ndvi_p5_r100     ndvi_p5_r250     ndvi_p5_r500   
 Min.   :0.02026   Min.   :0.02316   Min.   :0.3678   Min.   :0.5456   Min.   :0.5695  
 1st Qu.:0.03375   1st Qu.:0.03715   1st Qu.:0.6775   1st Qu.:0.6841   1st Qu.:0.6903  
 Median :0.04204   Median :0.04302   Median :0.7443   Median :0.7345   Median :0.7263  
 Mean   :0.04360   Mean   :0.04311   Mean   :0.7178   Mean   :0.7222   Mean   :0.7220  
 3rd Qu.:0.05121   3rd Qu.:0.04835   3rd Qu.:0.7860   3rd Qu.:0.7651   3rd Qu.:0.7598  
 Max.   :0.08614   Max.   :0.08061   Max.   :0.8494   Max.   :0.8331   Max.   :0.8152  
                                                                                       
 ndvi_p50_r100    ndvi_p50_r250    ndvi_p50_r500    ndvi_p95_r100    ndvi_p95_r250   
 Min.   :0.6646   Min.   :0.7804   Min.   :0.7921   Min.   :0.7357   Min.   :0.8459  
 1st Qu.:0.8400   1st Qu.:0.8429   1st Qu.:0.8453   1st Qu.:0.8972   1st Qu.:0.9002  
 Median :0.8591   Median :0.8575   Median :0.8550   Median :0.9134   Median :0.9158  
 Mean   :0.8511   Mean   :0.8528   Mean   :0.8524   Mean   :0.9091   Mean   :0.9118  
 3rd Qu.:0.8738   3rd Qu.:0.8678   3rd Qu.:0.8636   3rd Qu.:0.9265   3rd Qu.:0.9255  
 Max.   :0.8956   Max.   :0.8920   Max.   :0.8889   Max.   :0.9551   Max.   :0.9466  
                                                                                     
 ndvi_p95_r500     ndmi_p5_r100      ndmi_p5_r250     ndmi_p5_r500    ndmi_p50_r100   
 Min.   :0.8422   Min.   :0.09341   Min.   :0.2169   Min.   :0.2516   Min.   :0.1951  
 1st Qu.:0.9026   1st Qu.:0.38666   1st Qu.:0.3866   1st Qu.:0.3895   1st Qu.:0.4919  
 Median :0.9150   Median :0.42377   Median :0.4164   Median :0.4197   Median :0.5174  
 Mean   :0.9119   Mean   :0.40971   Mean   :0.4126   Mean   :0.4140   Mean   :0.5124  
 3rd Qu.:0.9244   3rd Qu.:0.46692   3rd Qu.:0.4507   3rd Qu.:0.4418   3rd Qu.:0.5569  
 Max.   :0.9412   Max.   :0.52106   Max.   :0.5189   Max.   :0.4839   Max.   :0.6218  
                                                                                      
 ndmi_p50_r250    ndmi_p50_r500    ndmi_p95_r100    ndmi_p95_r250    ndmi_p95_r500   
 Min.   :0.3914   Min.   :0.4208   Min.   :0.2904   Min.   :0.4571   Min.   :0.5139  
 1st Qu.:0.4904   1st Qu.:0.4954   1st Qu.:0.5652   1st Qu.:0.5740   1st Qu.:0.5765  
 Median :0.5161   Median :0.5182   Median :0.6180   Median :0.6193   Median :0.6102  
 Mean   :0.5128   Mean   :0.5123   Mean   :0.6104   Mean   :0.6079   Mean   :0.6057  
 3rd Qu.:0.5421   3rd Qu.:0.5342   3rd Qu.:0.6495   3rd Qu.:0.6426   3rd Qu.:0.6357  
 Max.   :0.6067   Max.   :0.5723   Max.   :0.7856   Max.   :0.7079   Max.   :0.6898  
                                                                                     
 LC08_045029_20180726_B1 LC08_045029_20180726_B3 LC08_045029_20180726_B4 LC08_045029_20180726_B5
 Min.   : 75.0           Min.   :193.0           Min.   :118.0           Min.   :1479           
 1st Qu.:133.0           1st Qu.:298.8           1st Qu.:167.0           1st Qu.:2485           
 Median :155.0           Median :332.5           Median :182.0           Median :2788           
 Mean   :164.0           Mean   :354.7           Mean   :223.0           Mean   :2802           
 3rd Qu.:179.5           3rd Qu.:374.0           3rd Qu.:220.2           3rd Qu.:3253           
 Max.   :382.0           Max.   :763.0           Max.   :833.0           Max.   :3706           
                                                                                                
 LC08_045029_20180726_B7 LC08_045029_20180726_B10  maxT_annual     meanT_annual     minT_annual   
 Min.   : 228.0          Min.   :2978             Min.   :12.25   Min.   : 7.138   Min.   :2.016  
 1st Qu.: 302.5          1st Qu.:3002             1st Qu.:13.99   1st Qu.: 8.681   1st Qu.:3.306  
 Median : 347.5          Median :3011             Median :14.54   Median : 9.151   Median :3.672  
 Mean   : 414.7          Mean   :3011             Mean   :14.54   Mean   : 9.045   Mean   :3.544  
 3rd Qu.: 411.5          3rd Qu.:3020             3rd Qu.:15.14   3rd Qu.: 9.478   3rd Qu.:3.935  
 Max.   :1572.0          Max.   :3072             Max.   :16.65   Max.   :10.576   Max.   :4.496  
                                                                                                  
 precipitation_mm lg_DistStream     lg_DistRoad    lg_cover2m_max       lg_cover2m_4m   
 Min.   :1818     Min.   :0.8386   Min.   :1.204   Min.   :-1.8005810   Min.   :-5.837  
 1st Qu.:2040     1st Qu.:5.4148   1st Qu.:3.231   1st Qu.:-0.1435955   1st Qu.:-4.635  
 Median :2154     Median :6.0433   Median :3.778   Median :-0.0453234   Median :-3.822  
 Mean   :2171     Mean   :5.8420   Mean   :3.739   Mean   :-0.1440508   Mean   :-3.740  
 3rd Qu.:2260     3rd Qu.:6.6468   3rd Qu.:4.124   3rd Qu.:-0.0160171   3rd Qu.:-2.891  
 Max.   :2660     Max.   :7.1022   Max.   :6.682   Max.   :-0.0008499   Max.   :-1.323  
                                                                                        
 lg_cover4m_16m    
 Min.   :-6.90775  
 1st Qu.:-2.50436  
 Median :-1.71718  
 Mean   :-1.89475  
 3rd Qu.:-1.11390  
 Max.   :-0.09414  

MaximilianPi commented on August 22, 2024

Ah, cool! Then it should work now after removing the one-level factors.

adrian-v-m commented on August 22, 2024

Well, yes, I jumped that fence by removing the first two variables, which each had only a single level, but then another fence came. The error reads Error in sjSDM(Y = Y, env = env_vars[, -1:-2], iter = 2L, device = device) : Assertion on 'Y' failed: Must be of type 'matrix', not 'data.frame'. This is easy to fix by just converting Y into a matrix, but it's weird, because when I ran this code on my local machine, I didn't get this error.
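
For reference, the corresponding fix is a one-liner:

model = sjSDM(Y = as.matrix(Y), env = env_vars[, -1:-2], iter = 2L, device = device)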

Anyway, even though I no longer get the above error after converting Y into a matrix, I still get the following error (which is the type of error I also got on my local machine):

Error in py_call_impl(callable, dots$args, dots$keywords) : 
  AssertionError: Size mismatch between tensors

Detailed traceback:
  File "/usr/local/lib/R/site-library/sjSDM/python/sjSDM_py/model_sjSDM.py", line 336, in fit
    dataLoader = self._get_DataLoader(X, Y, SP, batch_size, True, parallel)
  File "/usr/local/lib/R/site-library/sjSDM/python/sjSDM_py/model_sjSDM.py", line 228, in _get_DataLoader
    data = torch.utils.data.TensorDataset(torch.tensor(X, dtype=self.dtype, device=torch.device('cpu')),
  File "/opt/venv/reticulate/lib/python3.8/site-packages/torch/utils/data/dataset.py", line 205, in __init__
    assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors), "Size mismatch between tensors"

It points me to the model_sjSDM.py code that the sjSDM class is based on. I don't think it's ideal to have to go from the R code into these Python methods to debug. Apparently, sjSDM still doesn't like my inputs.
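
The failing assertion compares the first dimension (rows) of the input tensors, so a size mismatch usually means Y and the env data end up with different numbers of rows. A quick sanity check before the call, as a sketch:

# the inputs must have the same number of rows (sites)
nrow(as.matrix(Y)) == nrow(env_vars)
colSums(is.na(env_vars))  # NAs in env are a common cause of dropped rows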

MaximilianPi commented on August 22, 2024

To do: how to handle NAs in the env matrix.

adrian-v-m commented on August 22, 2024

I'm now trying to run the entire original code with Docker and it goes through several iterations, but then it stops and I get the following algorithm error:

Error in py_call_impl(callable, dots$args, dots$keywords) : 
  NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat.  This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function.  Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

I checked that the env data frame input does not contain any NAs, since it has now been scaled, so I don't know where this error comes from. I'll send you the original code if you'd like to reproduce the error.
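
Since the error only appears after several iterations, it is probably specific to one fold or tuning step. A sketch of per-iteration checks that might narrow it down (using the variable names from the loop above; an assumption, not a confirmed fix):

# place at the top of the inner loop: fail fast with an informative message
stopifnot(
  nrow(scale_env_train) == nrow(s_otu_train),
  nrow(scale_env_train) > 0,
  !anyNA(scale_env_train),
  !anyNA(XY_train_scale)
)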

MaximilianPi commented on August 22, 2024

Yes, pls send me the code

adrian-v-m commented on August 22, 2024

I'm now trying to run another script with Docker on AWS; this script chooses the best model out of several tuned models. However, I get the following error when running on AWS:

Error in hidden[[res.best$hidden.ind]] : 
  attempt to select less than one element in get1index
Calls: sjSDM ... assert -> eval -> eval -> checkMatrix -> DNN -> qassert

But I don't get this (or any) error when I run this script on my personal machine. The error seems to emerge when DNN is used. Have you seen this error before? If you want to reproduce it, I can send you the code and data (models and input data). Thanks.

MaximilianPi commented on August 22, 2024

The error comes from your "hidden" variable; it usually occurs when you try to index a list with a zero index:

l = list(1, 2)
l[[0]] # Error: attempt to select less than one element in get1index

adrian-v-m commented on August 22, 2024

Yes, thanks. Soon after posting the error message, I realised there was again a mix of variable notation: the loaded data had variable names containing "_", while the script expected variable names containing ".".

The variables in the script have been renamed, but clearly in an inconsistent way.
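
For anyone hitting the same thing, a defensive sketch (object names taken from the traceback above; this is an assumption about the loading code, with hidden coming from the tuning setup):

# normalise the loaded results to one naming convention before indexing
names(res.best) <- gsub(".", "_", names(res.best), fixed = TRUE)
stopifnot(!is.null(res.best$hidden_ind), res.best$hidden_ind >= 1)
hidden[[res.best$hidden_ind]]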

MaximilianPi commented on August 22, 2024

I guess we can close this issue
