

mixedup

a package for extracting clean results from mixed models




This package provides extended functionality for mixed models. The goal of mixedup is to solve the little problems that slip through the cracks of the various modeling packages when trying to get presentable output. The basic idea is to create (tidy) objects that are easy to use, essentially ready for presentation, and consistent across packages and across functions. Such objects include variance components and random effects. I use several of these packages (including mgcv) for mixed models, and typically have to do notable post-processing to get viable output even with broom::tidy, and that effort often doesn't carry over when I switch to another package for the same type of model. These functions attempt to address this issue.

For more details and examples see https://m-clark.github.io/mixedup/.

Installation

You can install mixedup from GitHub with remotes. Use the second approach below if you don't already have rstanarm or brms installed (they aren't required for general use).

remotes::install_github('m-clark/mixedup')

# if you don't already have rstanarm and/or brms

withr::with_envvar(c(R_REMOTES_NO_ERRORS_FROM_WARNINGS = "true"), 
  remotes::install_github('m-clark/mixedup')
)

Supported models

  • lme4
  • glmmTMB
  • nlme
  • mgcv
  • rstanarm
  • brms

Feature list

  • Extract Variance Components
  • Extract Random Effects
  • Extract Fixed Effects
  • Extract Random Coefficients
  • Extract Heterogeneous Variances
  • Extract Correlation Structure
  • Extract Model Data
  • Summarize Model
  • Find Typical

Not all features are available for every modeling package (e.g. autocorrelation for lme4), and some functionality may simply not be supported by this package, but most functions are applicable to the packages listed.

Examples

Setup

In the following, I suppress package startup messages and other information that isn't necessary for the demo.

library(lme4)

lmer_model <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

library(glmmTMB)

tmb_model <- glmmTMB(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

library(nlme)

nlme_model <-  nlme(
  height ~ SSasymp(age, Asym, R0, lrc),
  data = Loblolly,
  fixed = Asym + R0 + lrc ~ 1,
  random = Asym ~ 1,
  start = c(Asym = 103, R0 = -8.5, lrc = -3.3)
)

library(brms)

# brm_model = brm(
#   Reaction ~ Days + (1 + Days | Subject), 
#   data = sleepstudy, 
#   refresh = -1,
#   verbose = FALSE,
#   open_progress = FALSE,
#   cores = 4,
#   iter = 1000
# )

library(rstanarm)

# rstanarm_model = stan_glmer(
#   Reaction ~ Days + (1 + Days | Subject), 
#   data = sleepstudy, 
#   refresh = -1,
#   verbose = FALSE,
#   show_messages = FALSE,
#   open_progress = FALSE,
#   cores = 4,
#   iter = 1000
# )

library(mgcv)

gam_model = gam(
  Reaction ~  Days +
    s(Subject, bs = 're') +
    s(Days, Subject, bs = 're'),
  data = lme4::sleepstudy,
  method = 'REML'
)

Extract Output from a Mixed Model

library(mixedup)

extract_random_effects(tmb_model)
# A tibble: 36 × 7
   group_var effect    group  value    se lower_2.5 upper_97.5
   <chr>     <chr>     <fct>  <dbl> <dbl>     <dbl>      <dbl>
 1 Subject   Intercept 308     2.82  13.7    -23.9        29.6
 2 Subject   Intercept 309   -40.0   13.8    -67.2       -12.9
 3 Subject   Intercept 310   -38.4   13.7    -65.4       -11.5
 4 Subject   Intercept 330    22.8   13.9     -4.51       50.2
 5 Subject   Intercept 331    21.6   13.6     -5.11       48.2
 6 Subject   Intercept 332     8.82  12.9    -16.5        34.1
 7 Subject   Intercept 333    16.4   13.1     -9.23       42.1
 8 Subject   Intercept 334    -7.00  12.9    -32.3        18.3
 9 Subject   Intercept 335    -1.04  14.0    -28.5        26.4
10 Subject   Intercept 337    34.7   13.6      7.94       61.4
# … with 26 more rows

extract_fixed_effects(nlme_model)
# A tibble: 3 × 7
  term   value    se     z p_value lower_2.5 upper_97.5
  <chr>  <dbl> <dbl> <dbl>   <dbl>     <dbl>      <dbl>
1 Asym  101.   2.46   41.2       0     96.5      106.  
2 R0     -8.63 0.318 -27.1       0     -9.26      -7.99
3 lrc    -3.23 0.034 -94.4       0     -3.30      -3.16

extract_random_coefs(lmer_model)
# A tibble: 36 × 7
   group_var effect    group value    se lower_2.5 upper_97.5
   <chr>     <chr>     <fct> <dbl> <dbl>     <dbl>      <dbl>
 1 Subject   Intercept 308    254.  13.9      226.       281.
 2 Subject   Intercept 309    211.  13.9      184.       238.
 3 Subject   Intercept 310    212.  13.9      185.       240.
 4 Subject   Intercept 330    275.  13.9      248.       302.
 5 Subject   Intercept 331    274.  13.9      246.       301.
 6 Subject   Intercept 332    260.  13.9      233.       288.
 7 Subject   Intercept 333    268.  13.9      241.       295.
 8 Subject   Intercept 334    244.  13.9      217.       271.
 9 Subject   Intercept 335    251.  13.9      224.       278.
10 Subject   Intercept 337    286.  13.9      259.       313.
# … with 26 more rows

extract_vc(brm_model, ci_level = .8)
# A tibble: 3 × 7
  group    effect    variance    sd sd_10 sd_90 var_prop
  <chr>    <chr>        <dbl> <dbl> <dbl> <dbl>    <dbl>
1 Subject  Intercept    793.  28.2  18.7   38.3    0.527
2 Subject  Days          42.2  6.50  4.73   8.1    0.028
3 Residual <NA>         669.  25.9  23.6   28.0    0.445

summarize_model(lmer_model, cor_re = TRUE, digits = 1)
Computing profile confidence intervals ...

Variance Components:
    Group    Effect Variance   SD SD_2.5 SD_97.5 Var_prop
  Subject Intercept    612.1 24.7   14.4    37.7      0.5
  Subject      Days     35.1  5.9    3.8     8.8      0.0
 Residual              654.9 25.6   22.9    28.9      0.5

Fixed Effects:
      Term Value  SE    t P_value Lower_2.5 Upper_97.5
 Intercept 251.4 6.8 36.8     0.0     238.0      264.8
      Days  10.5 1.5  6.8     0.0       7.4       13.5

find_typical(gam_model, probs = c(.25, .50, .75))
# A tibble: 6 × 8
  group_var effect    group   value    se lower_2.5 upper_97.5 probs
  <chr>     <chr>     <chr>   <dbl> <dbl>     <dbl>      <dbl> <chr>
1 Subject   Days      331    -3.19   2.67     -8.43       2.04 25%  
2 Subject   Days      369     0.873  2.67     -4.36       6.11 50%  
3 Subject   Days      352     3.51   2.67     -1.73       8.75 75%  
4 Subject   Intercept 350   -13.9   13.3     -39.9       12.2  25%  
5 Subject   Intercept 369     3.26  13.3     -22.8       29.3  50%  
6 Subject   Intercept 333    17.2   13.3      -8.87      43.2  75%  

Consistent output

mods = list(
  tmb  = tmb_model,
  lmer = lmer_model, 
  brm  = brm_model,
  stan = rstanarm_model,
  gam  = gam_model
)

purrr::map_df(mods, extract_vc, .id = 'model') 
Computing profile confidence intervals ...
# A tibble: 15 × 8
   model group    effect      variance    sd sd_2.5 sd_97.5 var_prop
 * <chr> <chr>    <chr>          <dbl> <dbl>  <dbl>   <dbl>    <dbl>
 1 tmb   Subject  "Intercept"    566.  23.8   15.0    37.7     0.451
 2 tmb   Subject  "Days"          32.7  5.72   3.80    8.59    0.026
 3 tmb   Residual  <NA>          655.  25.6   NA      NA       0.523
 4 lmer  Subject  "Intercept"    612.  24.7   14.4    37.7     0.47 
 5 lmer  Subject  "Days"          35.1  5.92   3.80    8.75    0.027
 6 lmer  Residual ""             655.  25.6   22.9    28.9     0.503
 7 brm   Subject  "Intercept"    793.  28.2   15.8    46.3     0.527
 8 brm   Subject  "Days"          42.2  6.50   4.32    9.28    0.028
 9 brm   Residual  <NA>          669.  25.9   22.5    29.6     0.445
10 stan  Subject  "Intercept"    585.  24.2   12.3    36.3     0.447
11 stan  Subject  "Days"          44.0  6.64   4.00    9.98    0.034
12 stan  Residual  <NA>          680.  26.1   NA      NA       0.519
13 gam   Subject  "Intercept"    628.  25.1   16.1    39.0     0.477
14 gam   Subject  "Days"          35.9  5.99   4.03    8.91    0.027
15 gam   Residual  <NA>          654.  25.6   22.8    28.7     0.496

Code of Conduct

Please note that the ‘mixedup’ project is released with a Contributor Code of Conduct.

By contributing to this project, you agree to abide by its terms.


mixedup's Issues

issue with new mixedup or new mgcv

Hi, first thank you for that great package!
Unfortunately, it does not work for my use case anymore :(
Hope you can help :D
Here's a reprex:

packageVersion("mixedup")
# [1] ‘0.4.0’
packageVersion("mgcv")
# [1] ‘1.8.41’
mtcarsf <- mtcars
mtcarsf$cyl <- factor(mtcarsf$cyl)
mod <- mgcv::bam(mpg ~ s(cyl, bs = "re", by = I(log(wt))), data = mtcarsf)
coefs <- mixedup::extract_random_effects(mod)
# Error in `[.data.frame`(model$model, , re_names[i]) : 
# undefined columns selected

Digging into the code, vn no longer appears to contain a valid column name when grabbed here:

$vn
[1] "eraI(log(expected_dog_goals))"

if no by term in the spline, it works fine

> mod <- mgcv::bam(mpg ~ s(cyl, bs = "re"), data = mtcarsf)
> coefs <- mixedup::extract_random_effects(mod)
> coefs
# A tibble: 3 × 7
  group_var effect    group value    se lower_2.5 upper_97.5
  <chr>     <chr>     <chr> <dbl> <dbl>     <dbl>      <dbl>
1 cyl       Intercept 4      6.00  3.42     -0.71      12.7 
2 cyl       Intercept 6     -0.72  3.45     -7.48       6.04
3 cyl       Intercept 8     -5.28  3.41    -12.0        1.41

extract_vc for glmmTMB with ci has numerous issues

This appears to be a result of the 1.0 release. There are problems with the residual variance, component names are not included for some tmb models, there are continued issues with ar() and similar models, and so forth.

Main thing for now is to get ci with extract_vc (and thus summarize_models) for standard glmm models. To do this, one needs to overcome the inconsistent confint returned result among other things.

`extract_vc()` with `weights = varIdent()` in `lme()`

Hi, thanks for my probably favourite non-CRAN-package.

I guess you may not be planning to make this possible, since extract_vc() "has functionality for simpler models", but I noticed that when fitting a heterogeneous error variance (i.e. a diagonal variance structure on the R-side of a model) with lme(), I do not receive separate estimates for the error variance when using extract_vc(). Moreover, it seems like I get only one (the first?) of the multiple error variances.

Below is a reprex where I tried to extract the relevant info myself.

library(agridat)
library(glmmTMB)
library(mixedup)
library(nlme)
library(tidyverse)


# data --------------------------------------------------------------------
dat <- agridat::mcconway.turnip %>%
  mutate(unit = 1:n()) %>%
  mutate_at(vars(density, unit), as.factor)

# mod:lme -----------------------------------------------------------------
diag_lme <- lme(
  yield ~
    gen * date * density,
  random  = ~ 1 | block,
  weights = varIdent(form =  ~ 1 | date),
  data    = dat
)

# extract VC --------------------------------------------------------------
extract_vc(diag_lme)
#>          group    effect variance    sd sd_2.5 sd_97.5 var_prop
#> block    block Intercept    1.597 1.264  0.455   3.511     0.27
#> 1     Residual              4.306 2.075  1.548   2.782     0.73

# it should do something like this:
lme_Gside <- extract_vc(diag_lme) %>%
  filter(group != "Residual") %>% 
  as_tibble()

lme_Rside <- diag_lme$modelStruct$varStruct %>%
    coef(unconstrained = FALSE, allCoef = TRUE) %>%
    enframe(name = "group", value = "varStruct") %>%
    mutate(sigma         = diag_lme$sigma) %>%
    mutate(StandardError = sigma * varStruct) %>%
    mutate(variance      = StandardError ^ 2) %>%
    mutate(effect = "Residual") %>%
    select(effect, group, variance)

bind_rows(lme_Gside, lme_Rside)
#> # A tibble: 3 x 7
#>   group     effect    variance    sd sd_2.5 sd_97.5 var_prop
#>   <chr>     <chr>        <dbl> <dbl>  <dbl>   <dbl>    <dbl>
#> 1 block     Intercept     1.60  1.26  0.455    3.51     0.27
#> 2 21Aug1990 Residual      4.31 NA    NA       NA       NA   
#> 3 28Aug1990 Residual     15.5  NA    NA       NA       NA

Created on 2022-01-09 by the reprex package (v2.0.1)

`posterior_samples` deprecation

brms now uses the posterior package approach for posterior draws, which seems to bear little resemblance to the old posterior_samples function by default. The documentation for as_draws and related functions doesn't really describe what they return or what to do with them, but at first blush as_draws_matrix should work similarly to how posterior_samples was used for extract_random_effects. Ideally, though, we could use the new approach to save some other processing, e.g. going directly to a df/tibble via summarize_draws, but it'd be nice to avoid another dependency.

Note that this only applies to extract_random_effects/coefs.
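A minimal sketch of the replacement workflow, assuming the posterior package is available. Here example_draws() is just a built-in demo object standing in for draws from a fitted brms model:

```r
library(posterior)

# demo draws object standing in for as_draws(brms_model)
draws <- example_draws()

# drop-in analogue of posterior_samples(): a draws x variables matrix
mat <- as_draws_matrix(draws)

# or go straight to a summary data frame, one row per parameter
summ <- summarise_draws(draws, "mean", "sd")
head(summ)
```

This avoids posterior_samples entirely, though it does add posterior as a dependency.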

Feature tracking

Extract Variance Components
  • lme4
  • glmmTMB
  • nlme
  • brms
  • rstanarm
  • mgcv
  • gpboost
Extract Random Effects
  • lme4
  • glmmTMB
  • nlme
  • brms
  • rstanarm
  • mgcv
Extract Fixed Effects
  • lme4
  • glmmTMB
  • nlme
  • brms
  • rstanarm
  • mgcv
  • gpboost
Extract Random Coefficients
  • lme4
  • glmmTMB
  • nlme
  • brms
  • rstanarm
  • mgcv
Summarize Models
  • lme4
  • glmmTMB
  • nlme
  • brms
  • rstanarm
  • mgcv
  • gpboost
Extract Heterogeneous Variances
  • glmmTMB
  • nlme
Extract Correlation Structures
  • glmmTMB
  • nlme
  • brms
Find Typical
  • lme4
  • glmmTMB
  • nlme
  • brms
  • rstanarm
  • mgcv

R version

I'm unable to install this package and get this error message: "package ‘mixedup’ is not available for this version of R." The package may need updates.

summarise_model returns 'Error: Can't transform a data frame with duplicate names.'

I am running into an issue when I call 'summarise_model' on a mixed model object (class = "lmerModLmerTest"). The error returned is: "Error: Can't transform a data frame with duplicate names."

FWIW, @m-clark has worked with a dummy dataset I sent to him and found NO errors (suggesting, perhaps some versioning issue). That said, we are both running the same versions of 'dplyr', 'lme4', and 'mixedup' (1.0.2, 1.1.26, 0.3.8 respectively). My R version is 3.6.3 (and his is 4.0.3). Will update my R and see what happens...

head(dfs2)
  Growth      Days StartingValue      V4
1    0.0 1.2133686     0.4694595 XZ92877
2    1.2 0.5078494     0.4694595 XZ92877
3    0.5 0.7492574     0.4694595 XZ92877
4    1.5 0.5078494     0.3097708 TR59858
5    1.7 0.7492574     0.3097708 TR59858
6    0.0 1.2133686     0.3097708 TR59858

lmer_fit2 <- lmer(Growth ~ Days*StartingValue + (1|V4), data = dfs2)
summarise_model(lmer_fit2, ci = FALSE)
Error: Can't transform a data frame with duplicate names.
Run rlang::last_error() to see where the error occurred.

empty zi for glmmTMB with extract_vc

If a tmb model component doesn't have a random effect, extract_vc will fail, and subsequently so will summarize_model(..., component = 'zi'). It should instead return an empty result with a message, allowing the fixed effects to still be displayed.

library(glmmTMB)
library(mixedup)

(m1 <- glmmTMB(
  count ~ mined + (1 | site),
  zi =  ~ mined,
  family = poisson,
  data = Salamanders
))
#> Warning in Matrix::sparseMatrix(dims = c(0, 0), i = integer(0), j =
#> integer(0), : 'giveCsparse' has been deprecated; setting 'repr = "T"' for you

#> Warning in Matrix::sparseMatrix(dims = c(0, 0), i = integer(0), j =
#> integer(0), : 'giveCsparse' has been deprecated; setting 'repr = "T"' for you

#> Warning in Matrix::sparseMatrix(dims = c(0, 0), i = integer(0), j =
#> integer(0), : 'giveCsparse' has been deprecated; setting 'repr = "T"' for you
#> Formula:          count ~ mined + (1 | site)
#> Zero inflation:         ~mined
#> Data: Salamanders
#>       AIC       BIC    logLik  df.resid 
#> 1908.4695 1930.8080 -949.2348       639 
#> Random-effects (co)variances:
#> 
#> Conditional model:
#>  Groups Name        Std.Dev.
#>  site   (Intercept) 0.28    
#> 
#> Number of obs: 644 / Conditional model: site, 23
#> 
#> Fixed Effects:
#> 
#> Conditional model:
#> (Intercept)      minedno  
#>      0.0879       1.1419  
#> 
#> Zero-inflation model:
#> (Intercept)      minedno  
#>       1.139       -1.736

summarize_model(m1, component = 'zi')
#> Error in sqrt(variance$variance): non-numeric argument to mathematical function

Created on 2021-04-14 by the reprex package (v2.0.0)
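A self-contained sketch of the desired behavior, using a toy extractor in place of extract_vc (extract_vc_safely and toy_extract are hypothetical names for illustration, not mixedup API):

```r
# Instead of erroring when a component has no random effects,
# return an empty data frame with a message.
extract_vc_safely <- function(extractor, model, component = "zi") {
  tryCatch(
    extractor(model, component = component),
    error = function(e) {
      message("No variance components for component '", component, "'")
      data.frame(group = character(), effect = character(), variance = numeric())
    }
  )
}

# toy extractor that fails the way extract_vc currently does
toy_extract <- function(model, component) {
  stop("non-numeric argument to mathematical function")
}

res <- extract_vc_safely(toy_extract, model = NULL)
nrow(res)  # 0
```

With this pattern, summarize_model could still print the fixed effects for the component.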

Installation false warning

In a recent fresh install, I get the following on a machine that does not have all the packages that might be used:

Error : (converted from warning) namespace ‘rstanarm’ is not available and has been replaced
by .GlobalEnv when processing object ‘brms_model’
ERROR: unable to build sysdata DB for package ‘mixedup’

There is no issue with the build or hundreds of tests, and I have not otherwise found any error. brms_model (along with rstanarm_model) is an internal data object used for the vignettes so that it doesn't have to compile/run during their build. I found that deleting rstanarm or lme4 packages causes this error, but not having other packages installed (like glmmTMB) is not a problem.

There is a closed issue on devtools/remotes that speaks to the problem, but actually wasn't resolved: r-lib/remotes#374

I will update the ReadMe with the workaround noted in the remotes issue and post it here, but otherwise have no idea why it's a problem with 4.0, as nothing changed on my end regarding these objects.

Workaround:

withr::with_envvar(c(R_REMOTES_NO_ERRORS_FROM_WARNINGS="true"), 
  devtools::install_github('m-clark/mixedup')
)

Some options drop term names in extract fixed effects.

While this was recently discovered and fixed for merMod when a CI is not wanted, it appears to happen with exponentiate and for other classes, due to a dplyr bug/philosophy where rownames are dropped even if the class of the object is not changed (i.e. it is still a data.frame and not a tibble). This is because mutate, filter, etc. will call as.data.frame(tbl_df(...)) even when this is not required or requested for the operation, and there is no argument to change the behavior.

tmb_2 <- glmmTMB::glmmTMB(Reaction ~ Days + (Days | Subject), data = lme4::sleepstudy)
extract_fixed_effects(tmb_2, exponentiate = TRUE)

# A tibble: 2 x 7
#  term     value       se     z p_value lower_2.5 upper_97.5
#  <chr>    <dbl>    <dbl> <dbl>   <dbl>     <dbl>      <dbl>
# 1 1     1.53e109 1.01e110 37.9        0  3.46e103   6.75e114
# 2 2     3.51e  4 5.28e  4  6.97       0  1.85e  3   6.68e  5
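A base-R sketch of the underlying fix: capture the rownames as an explicit column before any dplyr verbs can drop them (the data here are made up):

```r
# made-up coefficient table whose terms live in the rownames,
# as returned by many model summary methods
d <- data.frame(value = c(251.4, 10.5), row.names = c("(Intercept)", "Days"))

# make the terms an explicit column so later mutate()/filter() calls can't lose them
d$term <- rownames(d)
rownames(d) <- NULL
d <- d[, c("term", "value")]
d$term  # "(Intercept)" "Days"
```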

Explore basic methods for gpboost

gpboost seems like a viable package for fast mixed models on large data, possibly with nonlinear effects. For standard mixed models, it'd be nice to have the usual summaries for at least the fixed and random effects. The rest of its functionality might be too much to sort out, but if one is only doing what amounts to a GLMM, hopefully we could extract some presentable results.

Update tests to check for tibbled output

This has mostly been implemented, but the tests check only for data.frame, and I've been bitten by that at least once.

  • For basic functionality tests, check for tibble rather than data.frame
    • Note that sometimes we actually want a matrix e.g. corr mat, and though this could be tibbled also, I don't really feel the need
  • Note this should get rid of rownames also

Simplify extract_ranef.merMod

extract_ranef.merMod needlessly calls ranef 2x, once for names and another for values. For models with many levels this can be prohibitively slow.
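A sketch of the intended simplification, assuming lme4 is available: call ranef() once and reuse the result for both the names and the values:

```r
library(lme4)

m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

# single ranef() call; reuse it rather than calling again just for names
re <- ranef(m, condVar = TRUE)
group_vars <- names(re)   # grouping factors, e.g. "Subject"
values     <- re[[1]]     # the effects themselves, no second call needed
```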

0.4.0 Roadmap

Package needs some love.

  • General update for R 4.2 and related package updates
    • Update test objects with results from current packages
    • Update tests given current test objects
    • Improve code throughout
      • Change stops to {assertthat}
      • Use {{}} where applicable
  • #8
  • #31
  • #10
    • As long as this only warns but still allows install I think it's okay to close
  • Misc issues

Add aliases to some functions

For consistency and ease of relating to typical lme4 style

  • extract_ranef - extract_random_effects

  • extract_fixef - extract_fixed_effects

  • extract_coef - extract_coefficients

  • extract_variance_components, extract_VarCorr - extract_vc
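In R, such an alias is simply another binding to the same function object. A minimal sketch, with a stand-in for the real function so it is self-contained:

```r
# stand-in for the real mixedup function, purely for illustration
extract_random_effects <- function(model, ...) "random effects output"

# the alias: another name bound to the same function
extract_ranef <- extract_random_effects

identical(extract_ranef("m"), extract_random_effects("m"))  # TRUE
```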

In extract_random_effects(), if the effect is a factor, no label is passed

When I try to get the random effects of a model fit with bam(), the effect is not fully labelled; the necessary factor levels are not passed through.

For instance, using the example in https://m-clark.github.io/posts/2019-10-20-big-mixed-models/:

library(lme4)
library(mgcv)
library(dplyr)
library(mixedup)

ss <- sleepstudy %>%
  mutate(Period = as.factor(ifelse(Days < 4, "Before", "After")))

ga_model = gam(
    Reaction ~  Period + s(Subject, bs = 're') + s(Period, Subject, bs = 're'),
    data = ss,
    method = 'REML'
)

extract_random_effects(ga_model) %>%
  data.frame()

I would expect the effect column to read "PeriodBefore" and "PeriodAfter" instead of only "Period", as the current output does not allow one to recognize which level is which.

Thank you very much for the package.

extract_random_effects does not yield confidence intervals for nlme object

mixedup::extract_random_effects does not report confidence intervals for an nlme object. It is not working for nonlinear models, even though the documentation states the function can handle these objects.

mod <- nlme(
  y ~ Asym / exp((xmid - x) / scal),
  data = data,
  fixed = Asym + xmid + scal ~ 1,
  random = Asym + xmid + scal ~ 1 | country
)

extract_random_effects(mod)
   group_var effect group        value
 1 country   Asym   Afghanistan -13353.
 2 country   Asym   Algeria     -11859.
 3 country   Asym   Andorra      -5100.
 4 country   Asym   Argentina      337.
 5 country   Asym   Armenia      -4778.
 6 country   Asym   Australia    -7036.
 7 country   Asym   Austria       1543.
 8 country   Asym   Azerbaijan  -11756.
 9 country   Asym   Bahrain     -13132.
10 country   Asym   Belarus     -12661.
# … with 275 more rows

Any alternative or solution?

Pass dots from extract_random_coefficients

Right now extract_random_coefficients ignores dots (this is documented), but it should probably be fine to pass them, e.g. add_group_N or exponentiate, to the underlying functions.
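A minimal sketch of the proposed change, using a toy underlying extractor (underlying_extract is a hypothetical name, not mixedup API):

```r
# toy stand-in for an underlying extractor that accepts extra options
underlying_extract <- function(model, exponentiate = FALSE, add_group_N = FALSE) {
  list(exponentiate = exponentiate, add_group_N = add_group_N)
}

# currently the dots are ignored; the fix is simply to forward them
extract_random_coefficients <- function(model, ...) {
  underlying_extract(model, ...)
}

extract_random_coefficients(NULL, exponentiate = TRUE)$exponentiate  # TRUE
```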

extract_ranef() fails for random slopes on factor predictors with gam

This was found via gammit when doing random categorical effects. It can be overcome by simply creating another grouping variable that is the interaction of the categorical predictor and the grouping variable for the random effect (specifically model_w_wm at that link). This would also make clear which coefficients go with which effects. However, it'd be nice not to have to worry about it if specified as follows, but the level names are not retained in the smooth elements/attributes if you do it this way.

library(mgcv)
library(mixedup)
library(lme4)
library(dplyr)

data(sleepstudy)

ga_model <- gam(
  Reaction ~ Days + s(Subject, bs = "re") + s(Days, Subject, bs = "re"),
  data = sleepstudy %>%
    mutate(Days = factor(case_when(
      Days < 2 ~ "x",
      Days < 5 ~ "y",
      TRUE ~ "z"
    )))
)

extract_random_coefs(ga_model) # error
