
mets's People

Contributors

kkholst, klaus-holst, scheike, tagteam


mets's Issues

Inquiry about Bootphreg with stratification: extracting stratum-specific cumulative baseline hazard

Dear Dr. Holst,

I am reaching out to you to inquire about the Bootphreg function when used with stratification.

When using strata, e.g.:

b <- Bootphreg(Surv(time, status==1) ~ trt + strata(celltype), data=veteran, B=1000)

I see that the output b contains, for each of the B bootstrap replicates, a wild-bootstrap estimate of the baseline cumulative hazard function.

My understanding is that the latter is calculated in this section of the Bootphreg01 function:

cumhaz <- cbind(jumptimes, cumsumstrata(1/val$S0, strata, nstrata))

I was wondering whether it would be possible (and whether you deem it mathematically sensible) to extract, for each of the B bootstraps, the stratum-specific estimate of the baseline cumulative hazard function. To achieve this, after carefully studying the code, I tried to modify the Bootphreg01 function as follows:

Snippet of the original Bootphreg01 function:

{ ...
    { ...
        cumhaz <- cbind(jumptimes, cumsumstrata(1/val$S0, strata, nstrata))
        colnames(cumhaz) <- c("time", "cumhaz")
        res[[i]] <- list(coef = cc, cumhaz = cumhaz[, 2])
    }
    names(res) <- 1:B
    return(res)
}

Snippet of the modified Bootphreg01 function:

{ ...
    { ...
        cumhaz <- cbind(jumptimes, cumsumstrata(1/val$S0, strata, nstrata))
        colnames(cumhaz) <- c("time", "cumhaz")
        ## per-stratum version: accumulate the jumps 1/S0 separately within each
        ## stratum (strata are coded 0, ..., nstrata-1)
        cumhaz_strata <- list()
        for (k in 1:nstrata) {
            strata_idx <- strata == k - 1
            cumhaz_strata[[k]] <- cbind(jumptimes[strata_idx], cumsum(1/val$S0[strata_idx]))
        }
        res[[i]] <- list(coef = cc, cumhaz = cumhaz[, 2], cumhaz_strata = cumhaz_strata)
    }
    names(res) <- 1:B
    return(res)
}

With this modification, the output of b <- Bootphreg(Surv(time, status==1) ~ trt + strata(celltype), data=veteran, B=1000) contains, for each of the B bootstrap replicates, a list cumhaz_strata in addition to coef and cumhaz. Its length equals the number of strata (four in this example), and each element should hold the corresponding stratum-specific baseline cumulative hazard estimate.

The output seems in line with what I wanted to achieve, but since I am not an expert in the field and do not know exactly how cumsum and cumsumstrata behave internally, I am wondering whether you deem it correct (a small equivalence check is sketched below). And if you think it is correct, would it be possible to add this option, i.e., extracting the stratum-specific cumulative hazard estimates, to the original Bootphreg function in the package?
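
For reference, a minimal equivalence check (a sketch, assuming that cumsumstrata(x, strata, nstrata) returns the running within-stratum sums of x in the original row order, with strata coded 0, ..., nstrata-1) would compare it against a plain per-stratum cumsum:

library(mets)

## toy input: two strata coded 0 and 1, as in Bootphreg01
x <- c(1, 2, 3, 4, 5, 6)
strata <- as.integer(c(0, 1, 0, 1, 0, 1))
nstrata <- 2

## cumulative sums computed per stratum with base R, put back in the original order
by_stratum <- unsplit(lapply(split(x, strata), cumsum), strata)

## TRUE if the two computations agree
all.equal(as.vector(cumsumstrata(x, strata, nstrata)), by_stratum)

If the two agree, the per-stratum cumsum(1/val$S0[strata_idx]) in the modified snippet reproduces exactly the values that cumsumstrata() assigns to each stratum.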

I remain at your disposal for any questions/clarifications.

I hope to hear back from you and many thanks in advance.

Best regards,

Alessandra

predict.phreg hardening

library("mets")
n <- 1e2

t <- rexp(n)
cat <- sample(c("A", "B"), replace = TRUE, size = n)
d <- data.frame(t = t, cat = cat)
model <- phreg(Surv(t)~cat, data = d)
predict(model, newdata = data.frame(cat = c("A", "B")))
predict(model, newdata = data.frame(cat = "A"))

Error in mlogit

Hi,

I encountered an error while following the "Mediation Analysis for survival data" tutorial, specifically in the Multinomial regression section. Here are the steps and the error details:

data(tTRACE)
dcut(tTRACE) <- ~.
weightmodel <- fit <- mlogit(wmicat.4 ~ agecat.4 + vf + chf, data = tTRACE, family = binomial)

Error in cluster.default(id) : only implemented for resamples objects

To further diagnose the issue, I tried running the example code from the mlogit help documentation, but encountered the same error:

data(bmt)
dfactor(bmt) <- cause1f ~ cause
drelevel(bmt, ref = 3) <- cause3f ~ cause
dlevels(bmt)
mreg <- mlogit(cause1f ~ tcell + platelet, bmt)

Error in cluster.default(id) : only implemented for resamples objects
Could you please help me understand where the issue might be?
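
One possible diagnostic, under the assumption that the error is caused by another attached package masking the cluster() function that mlogit() uses internally (the "resamples objects" wording suggests a cluster() method from a different package is being dispatched), is to check which cluster() sits first on the search path:

## list every attached package/environment that defines a cluster() function
find("cluster")

## show which package's cluster() is picked up first
environment(cluster)

If a conflicting package shows up, restarting R and loading only mets (and survival) before re-running the examples would tell whether the masking is the culprit.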

error in predict.phreg function in mets

I am building an R Shiny app that uses your R package mets. When I try to use predict() on the output of phreg(), I get the following error:
Error in Xs %*% varbeta :
requires numeric/complex matrix/vector arguments

library(survival)
library(mets)

## data set loaded from a local file (not included here)
load("bcdeter.RData")

mCox <- phreg(Surv(lower, upper, type = "interval2") ~ cont, data = bcdeter)
summary(mCox)

newdata <- data.frame(cont = seq(-1, 2, by = 0.5))
predictC <- predict(mCox, newdata = newdata, times = seq(0, 45, by = 5))

predictC
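
Since bcdeter.RData is loaded from a local file, anyone trying to reproduce this needs stand-in data. A hypothetical simulation with the same column names (lower, upper, cont; the values are made up, and whether it triggers the exact same error may depend on the installed mets version) could replace the load() call:

## hypothetical stand-in for bcdeter: interval-censored times and one covariate
set.seed(1)
n <- 200
cont <- rnorm(n)
lower <- rexp(n, rate = exp(0.3 * cont))
upper <- lower + rexp(n)
upper[sample(n, 50)] <- NA   # NA upper bound = right-censored under the interval2 coding
bcdeter <- data.frame(lower = lower, upper = upper, cont = cont)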

Inquiry about the vignettes for mediation analysis in survival model

Thank you for creating the mets package.

I am trying out mediation analysis for survival data and found your vignette "Mediation Analysis for survival data".

However, I am not sure which quantities in your example are the direct and indirect effects.

For example, in the output below, which estimates correspond to the direct and indirect effects?

## binomial regression ###########################################################
aaMss <- phreg(Surv(time,status==2)~dnr.f0+dnr.f1+preauto+ttt24+cluster(id),data=wdata,weights=wdata$weights)
summary(aaMss)
#> 
#>    n events
#>  400    194
#> 
#>  200 clusters
#> coeffients:
#>              Estimate   Std.Err      2.5%     97.5% P-value
#> (Intercept) -0.520113  0.259635 -1.028987 -0.011238  0.0452
#> dnr.f01      0.339934  0.376813 -0.398605  1.078473  0.3670
#> dnr.f11      0.274655  0.071476  0.134564  0.414747  0.0001
#> preauto      0.552689  0.365370 -0.163423  1.268800  0.1304
#> ttt24        0.300601  0.380049 -0.444282  1.045483  0.4290
#> 
#> exp(coeffients):
#>             Estimate    2.5%  97.5%
#> (Intercept)  0.59445 0.35737 0.9888
#> dnr.f01      1.40486 0.67126 2.9402
#> dnr.f11      1.31608 1.14404 1.5140
#> preauto      1.73792 0.84923 3.5566
#> ttt24        1.35067 0.64128 2.8448

Thank you very much.

reshaping of the data in bicomprisk

Thank you for creating the mets package.

I stumbled upon some strange behavior when using the bicomprisk() function. I still do not fully understand the theoretical background behind this function, but it does not seem to be the desired behavior.

Here is the problem I saw:
For pairs in which both times are censored, the time used by the function is whichever one appears first in the data frame.

Sincerely,

Jaromil

Here is the code that replicates this behavior:

library(mets)
library(dplyr)
library(tidyr)

## Run the model
out <- bicomprisk(Event(time, status) ~ strata(zyg) + id(id), data = prt,
                  cause = c(2, 2), return.data = TRUE)

## Data in wide format, with min and max time per pair
mydz <- prt %>%
  as_tibble() %>%
  filter(zyg == "DZ") %>%
  group_by(id) %>%
  filter(n() == 2) %>%
  summarise(
    mintime = min(time),
    maxtime = max(time),
    status_sum = sum(status),
    wmin = which.min(time),
    minstatus = status[which.min(time)],
    maxstatus = status[which.max(time)]
  )

## Analyse which time is kept when both subjects of the pair are censored.
## When wmin == 1, the first subject of the pair has the minimum time;
## it always corresponds to the cases where the time used is the minimum.
## Note: 'dz' below is assumed to be the reshaped DZ data returned by
## bicomprisk() above (return.data = TRUE); its extraction from 'out' is not
## shown in the original snippet.
mydz %>%
  mutate(model_time = dz$time) %>%
  filter(minstatus == 0) %>%
  filter(maxstatus == 0) %>%
  count(wmin, model_time == mintime)

Interpretation h2

Hi,
I would like to confirm part of the results generated by mets.
When running something like

twinlm(bmi ~ 1, data = d, ...)

the output (for type = "ae", "ace", ...) contains a variance decomposition along the lines of

Variance decomposition:
A    0.90  (95% CI 0.82 - 0.95)
...
Broad-sense heritability ... 0.90 ...

Can I interpret the 0.90 as the h2 (broad-sense) heritability, rather than the ordinary h?
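
For reference, a self-contained sketch along the lines of the twinbmi example in the package documentation (argument names recalled from ?twinlm and worth double-checking against the installed version) that produces this kind of variance decomposition:

library(mets)

## built-in twin data: BMI for Danish twin pairs
data(twinbmi)

## ACE model; 'zyg' holds zygosity and DZ = "DZ" labels the dizygotic pairs
b <- twinlm(bmi ~ age + gender, data = twinbmi,
            id = "tvparnr", zyg = "zyg", DZ = "DZ", type = "ace")
summary(b)   # the variance decomposition and heritability estimates appear here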

Many Thanks

Archiving on CRAN

Hi,
I am the maintainer of the {KMunicate} package, which depends on {pammtools}, which depends on {pec}, which depends on {riskRegression}, which (finally!) depends on {mets}. I have been given notice by CRAN maintainers that the package will be archived on 2022-10-06 unless the issues with these checks are addressed by then.
I thought I would report this here as well, just in case. Is there any update or timeline on that?
Thanks!

Alessandro
