
Comments (7)

simonpcouch commented on September 10, 2024

Mm, yes and no. tune contends that the safest, most reasonable default for importance weights is to apply them when fitting but not when calculating metrics, so it doesn't allow overriding that behavior when resampling with tune. At the same time, tune does respect the output of .use_case_weights_with_yardstick() for the given case weights class. So, if you'd like to inherit all of the behavior of importance weights except that you'd also like to apply them at metric calculation time, you can make a subclass of importance_weights and define a .use_case_weights_with_yardstick() method for it. This example adapts @lilykoff's code to do so:

library(tidyverse)
library(tidymodels)
library(censored)
#> Loading required package: survival

nhanes_mortality = readr::read_csv("https://github.com/tidymodels/tune/files/15015680/nhanes_mortality.csv.gz")
#> Rows: 3728 Columns: 4
#> ── Column specification ────────────────────────────────────────────────────────
#> Delimiter: ","
#> dbl (4): age, mortstat, event_time, weight_norm
#> 
#> ℹ Use `spec()` to retrieve the full column specification for this data.
#> ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.

# create a survival object
all_mort_surv =
  nhanes_mortality %>%
  mutate(mort_surv = Surv(event_time, mortstat)) %>%
  mutate(case_weights_imp = hardhat::importance_weights(weight_norm))

# start: simon's edits ------------------------------------------------------------------------
# subclass the case weights and introduce needed methods for them
class(all_mort_surv$case_weights_imp)
#> [1] "hardhat_importance_weights" "hardhat_case_weights"      
#> [3] "vctrs_vctr"

all_mort_surv$case_weights_imp <-
  structure(
    all_mort_surv$case_weights_imp, 
    class = c("survey_weights", class(all_mort_surv$case_weights_imp))
  )

class(all_mort_surv$case_weights_imp)
#> [1] "survey_weights"             "hardhat_importance_weights"
#> [3] "hardhat_case_weights"       "vctrs_vctr"

# define a method that tells tune to pass case weights to metrics calculations
.use_case_weights_with_yardstick.survey_weights <- function(x) {TRUE}

# define a method that tells vctrs how to convert to double
vec_cast.double.survey_weights <- hardhat:::vec_cast.double.hardhat_importance_weights
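# (note: vctrs double-dispatch methods like vec_cast() aren't inherited from
# parent classes, so the subclass needs its own method even though
# "hardhat_importance_weights" is in its class vector)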

# end: simon's edits ------------------------------------------------------------------------

# now we want to do cross validation
set.seed(123)
# get five folds
folds = vfold_cv(all_mort_surv, v = 5, repeats = 1)
get_conc = function(ind, folds){
  # get concordance using coxph:
  # for each fold, get the training and testing data, then calculate concordance
  dat = get_rsplit(folds, index = ind)
  fit = coxph(mort_surv ~ age, data = analysis(dat), weights = weight_norm)
  test = assessment(dat)
  concordance(
    mort_surv ~ predict(fit, newdata = test),
    data = test, weights = weight_norm, reverse = TRUE
  )[[1]]
}
coxph_c_cv = map_dbl(.x = 1:5, .f = get_conc, folds = folds)
coxph_c_cv
#> [1] 0.6460921 0.6106318 0.7961755 0.6727030 0.6150353

mean(coxph_c_cv)
#> [1] 0.6681275


# now we do the same in the tidy framework
survival_metrics = metric_set(concordance_survival)
survreg_spec =
  proportional_hazards() %>%
  set_engine("survival") %>%
  set_mode("censored regression")


wflow =
  workflow() %>%
  add_model(survreg_spec) %>%
  add_variables(outcomes = mort_surv,
                predictors = age) %>%
  add_case_weights(case_weights_imp)

res = fit_resamples(
  wflow,
  resamples = folds,
  metrics = survival_metrics,
  control = control_resamples(save_pred = TRUE)
)

tidy_c_cv = collect_metrics(res)
all.equal(mean(coxph_c_cv), tidy_c_cv$mean)
#> [1] TRUE

Created on 2024-04-17 with reprex v2.1.0

If y'all think it's an unsafe default to not apply importance weights when calculating metrics, we'd definitely be interested in hearing why and in what contexts. This may be a matter of tidymodels introducing an importance weight subclass that does default to applying case weights when calculating metrics. I didn't implement the .use_case_weights_with_yardstick() methods myself, so I'm not sure which references those decisions draw from (I'll follow up on that later), but it looks to me like they very intentionally chose not to include importance weights in metric calculations when resampling.
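As a rough sketch of what that could look like: rather than patching the class onto an existing vector with structure() as in the reprex above, the subclass could be built up front with a constructor. This assumes hardhat's exported new_case_weights() helper; the survey_weights() name is hypothetical, not an existing API.

library(hardhat)

# hypothetical constructor for an importance-weight subclass whose weights
# are also applied when calculating metrics (survey_weights() is a made-up
# name, not part of hardhat or tune)
survey_weights <- function(x) {
  x <- vctrs::vec_cast(x, to = double())
  new_case_weights(
    x,
    class = c("survey_weights", "hardhat_importance_weights")
  )
}

# opt in to passing these weights on to yardstick at metric calculation time
.use_case_weights_with_yardstick.survey_weights <- function(x) TRUE

# as noted above, vctrs double-dispatch methods aren't inherited, so the
# subclass aliases the parent class's cast method
vec_cast.double.survey_weights <-
  hardhat:::vec_cast.double.hardhat_importance_weights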


topepo commented on September 10, 2024

> So the end result is that you cannot do weighted survival concordance with anything except frequency weights (i.e., no survey weighting)?

We think that our logic is pretty sound for the cases that we've outlined in the blog post (referenced above). We did ask in several venues (including that blog) for help on survey weights, so we'd love to hear what you think should happen. We don't have much experience in that area.

It's not too hard to define your own type of case weight. The current definitions are here. We can help with it and, if needed, update the logic for your cases.


simonpcouch commented on September 10, 2024

Thank you for the thorough issue description!

Noting that I've edited the call to read_csv() in the OP to read directly from GH.


simonpcouch commented on September 10, 2024

Ah, it appears this may be intentional. In ?.use_case_weights_with_yardstick():

tune/R/case_weights.R, lines 3 to 8 in 668eec2:

#' @description
#' This S3 method defines the logic for deciding when a case weight vector
#' should be passed to yardstick metric functions and used to measure model
#' performance. The current logic is that frequency weights (i.e.
#' [hardhat::frequency_weights()]) are the only situation where this should
#' occur.

That is, internally, tune intentionally doesn't use importance case weights when calculating metrics. That's about as far as I can get with GitHub archeology. This tidyverse.org blog post reads:

> As a counter example, importance weights reflect the idea that they should only influence the model fitting procedure. It wouldn’t make sense to use a weighted mean to center a predictor; the weight shouldn’t influence an unsupervised operation in the same way as model estimation. More critically, any holdout data set used to quantify model efficacy should reflect the data as seen in the wild (without the impact of the weights).

At the least, we can try and find a more public-facing place in the documentation to surface this and provide a reference or two.
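For what it's worth, the shipped defaults can be checked directly; a quick sketch, assuming the generic is exported from tune (it has a help page, so it should be):

library(tune)
library(hardhat)

# frequency weights are forwarded to yardstick by default...
.use_case_weights_with_yardstick(frequency_weights(1:3))
# expected: TRUE

# ...while importance weights are not
.use_case_weights_with_yardstick(importance_weights(c(0.5, 1, 2)))
# expected: FALSE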


muschellij2 commented on September 10, 2024

So the end result is that you cannot do weighted survival concordance with anything except frequency weights (i.e., no survey weighting)?


simonpcouch commented on September 10, 2024

Going to go ahead and close. Feel free to ping here or in a separate issue if you think this default ought to be changed, as we're very much interested in making improvements here.


github-actions commented on September 10, 2024

This issue has been automatically locked. If you believe you have found a related problem, please file a new issue (with a reprex: https://reprex.tidyverse.org) and link to this issue.
