
personalized's Introduction


Overview of ‘personalized’

The ‘personalized’ package is designed for the analysis of data where the effect of a treatment or intervention may vary for different patients. It can be used for either data from randomized controlled trials or observational studies and is not limited specifically to the analysis of medical data.

The personalized package provides estimation methods for subgroup identification under the framework of Chen et al. (2017). It also provides routines for valid estimation of the subgroup-specific treatment effects.

Documentation

See the package vignettes and reference manual for full documentation.

Installing the ‘personalized’ package

Install from CRAN using:

install.packages("personalized")

or install the development version using the devtools package:

devtools::install_github("jaredhuling/personalized")

or by cloning the repository and building it with R CMD INSTALL.

Quick Usage Overview

Load the package:

library(personalized)
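
The examples below assume a covariate matrix x, an outcome vector y, and a binary treatment indicator trt. For a self-contained run, a small simulated dataset along these lines can be generated first (this simulation is purely illustrative and is not the package's own example data; the coefficients are arbitrary):

set.seed(123)
n.obs  <- 500
n.vars <- 50

# covariate matrix
x <- matrix(rnorm(n.obs * n.vars), ncol = n.vars)

# treatment assignment that depends on a couple of covariates
xbeta.trt <- 0.5 * x[, 1] - 0.5 * x[, 5]
trt       <- rbinom(n.obs, 1, prob = 1 / (1 + exp(-xbeta.trt)))

# continuous outcome with a treatment-covariate interaction
delta <- 0.5 + x[, 2] - 0.5 * x[, 11]    # determines who benefits from trt
y     <- delta * (2 * trt - 1) + rnorm(n.obs, sd = 2)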

Create a propensity score function

(it should be a function that takes the covariates and treatment vector as inputs and returns the estimated propensity scores):

prop.func <- function(x, trt)
{
    # fit propensity score model
    propens.model <- cv.glmnet(y = trt,
                               x = x, family = "binomial")
    pi.x <- predict(propens.model, s = "lambda.min",
                    newx = x, type = "response")[,1]
    pi.x
}
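
Before fitting, it can be useful to verify that the estimated propensity scores overlap between the treatment arms; the package's check.overlap() function plots this (an optional sanity check):

# visualize propensity score overlap between the treatment arms
check.overlap(x = x, trt = trt, propensity.func = prop.func)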

Fit a model to estimate the subgroups:

subgrp.model <- fit.subgroup(x = x, y = y,
                             trt = trt,
                             propensity.func = prop.func,
                             loss   = "sq_loss_lasso",
                             nfolds = 5)              # option for cv.glmnet

Display the estimated subgroups and the selected variables that determine them:

summary(subgrp.model)
## family:    gaussian 
## loss:      sq_loss_lasso 
## method:    weighting 
## cutpoint:  0 
## propensity 
## function:  propensity.func 
## 
## benefit score: f(x), 
## Trt recom = Trt*I(f(x)>c)+Ctrl*I(f(x)<=c) where c is 'cutpoint'
## 
## Average Outcomes:
##                Recommended Ctrl    Recommended Trt
## Received Ctrl -3.9319 (n = 109) -21.2055 (n = 122)
## Received Trt  -25.078 (n = 112)   -8.326 (n = 157)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=Ctrl,Recom=Ctrl]-E[Y|T=/=Ctrl,Recom=Ctrl] 
##                                      21.1461 (n = 221) 
##     Est of E[Y|T=Trt,Recom=Trt]-E[Y|T=/=Trt,Recom=Trt] 
##                                      12.8795 (n = 279) 
## 
## NOTE: The above average outcomes are biased estimates of
##       the expected outcomes conditional on subgroups. 
##       Use 'validate.subgroup()' to obtain unbiased estimates.
## 
## ---------------------------------------------------
## 
## Benefit score quantiles (f(X) for Trt vs Ctrl): 
##      0%     25%     50%     75%    100% 
## -9.2792 -1.8237  0.5011  2.5977  9.6376 
## 
## ---------------------------------------------------
## 
## Summary of individual treatment effects: 
## E[Y|T=Trt, X] - E[Y|T=Ctrl, X]
## 
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
## -18.5583  -3.6474   1.0023   0.9507   5.1954  19.2753 
## 
## ---------------------------------------------------
## 
## 5 out of 50 interactions selected in total by the lasso (cross validation criterion).
## 
## The first estimate is the treatment main effect, which is always selected. 
## Any other variables selected represent treatment-covariate interactions.
## 
##             Trt     V2     V11     V17    V32    V35
## Estimate 0.5463 0.9827 -0.4356 -0.1532 0.0326 0.1007
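
Benefit scores and treatment recommendations for new observations can then be obtained from the fitted object's predict method. A rough sketch (consult the help page of the predict method for the exact argument names and options):

# estimated benefit scores f(x) for a set of covariates
bene.scores <- predict(subgrp.model, newx = x, type = "benefit.score")

# corresponding treatment recommendations
trt.recom   <- predict(subgrp.model, newx = x, type = "trt.group")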

Use repeated train and test splitting to estimate subgroup treatment effects:

val.model <- validate.subgroup(subgrp.model, B = 100,
                               method = "training_test",
                               train.fraction = 0.75)

Display estimated subgroup treatment effects:

print(val.model, digits = 2, sample.pct = TRUE)
## family:  gaussian 
## loss:    sq_loss_lasso 
## method:  weighting 
## 
## validation method:  training_test_replication 
## cutpoint:           0 
## replications:       100 
## 
## benefit score: f(x), 
## Trt recom = Trt*I(f(x)>c)+Ctrl*I(f(x)<=c) where c is 'cutpoint'
## 
## Average Test Set Outcomes:
##                         Recommended Ctrl           Recommended Trt
## Received Ctrl  -9.56 (SE = 7.98, 19.88%) -18.62 (SE = 6.72, 26.5%)
## Received Trt  -16.64 (SE = 6.85, 23.23%) -13.41 (SE = 7.8, 30.39%)
## 
## Treatment effects conditional on subgroups:
## Est of E[Y|T=Ctrl,Recom=Ctrl]-E[Y|T=/=Ctrl,Recom=Ctrl] 
##                              6.54 (SE = 10.49, 43.11%) 
##     Est of E[Y|T=Trt,Recom=Trt]-E[Y|T=/=Trt,Recom=Trt] 
##                              5.21 (SE = 11.06, 56.89%) 
## 
## Est of 
## E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:                 
## 2.91 (SE = 8.29)

Visualize subgroup-specific treatment effect estimates across training/testing iterations:

plot(val.model)

Investigate the marginal characteristics of the two estimated subgroups

Here we only display covariates whose mean values differ significantly between the two subgroups (at the 0.05 level):

summ <- summarize.subgroups(subgrp.model)
print(summ, p.value = 0.05)
##     Avg (recom Ctrl) Avg (recom Trt) Ctrl - Trt SE (recom Ctrl) SE (recom Trt)
## V2           -2.4161          1.9013     -4.317          0.1423         0.1298
## V11           1.1279         -0.7963      1.924          0.1914         0.1572
## V17           0.8053         -0.3715      1.177          0.2170         0.1736

Accessing Help Files for Main Functions of personalized

Access help files for the main functions of the personalized package:

?fit.subgroup
?validate.subgroup

personalized's People

Contributors

aaronpotvien, jaredhuling


personalized's Issues

survival analysis with counting process data

Dear authors,
Thank you very much for your excellent package. I have a problem working with survival outcome data in counting-process format.

library(personalized)
library(survival)
library(dplyr)    # for select() and if_else() used below
# note: assay() on 'vsd' below requires SummarizedExperiment (e.g., loaded via DESeq2)
> dtRNA %>% select(PtID,startTime,endTime,status) %>% head(10)
# A tibble: 10 × 4
   PtID  startTime endTime status
   <chr>     <dbl>   <dbl>  <dbl>
 1 AH01          0       2      0
 2 AH01          2       4      0
 3 AH01          4       8      0
 4 AH02          0       2      0
 5 AH02          2       4      0
 6 AH02          4      13      0
 7 AH03          0       2      0
 8 AH03          2       4      0
 9 AH03          4      10      0
10 AH04          0       2      0
#Data reformat 
trt <- if_else(dtRNA$UTI_flg=="UTI",1,0)
trt <- if_else(dtRNA$fluidvaso_tag=="restrictive",1,0)
Xmatrix <- assay(vsd) %>% 
  as.data.frame() %>% 
  select(dtRNA$SampleName) %>% 
  t()
start.time = dtRNA$startTime
end.time = dtRNA$endTime
end.time = if_else(is.na(end.time),start.time+2,end.time)
status <- dtRNA$status

# create function for fitting propensity score model
prop.func <- function(x, trt)
{
    # fit propensity score model
    propens.model <- cv.glmnet(y = trt,
                               x = x,
                               family = "binomial")
    pi.x <- predict(propens.model, s = "lambda.min",
                    newx = x, type = "response")[,1]
    pi.x
}
plot_overlap <- check.overlap(
  Xmatrix, trt, prop.func,
  type = "both")
#Fitting Subgroup Identification Models
subgrp.model <- fit.subgroup(
  x = Xmatrix, 
  y = Surv(start.time,end.time, status),
  trt = trt,
  method = "weighting",
  propensity.func = prop.func,
  loss   = "cox_loss_lasso",cutpoint = "median",
  nfolds = 5)              # option for cv.glmnet
summary(subgrp.model)

PtID identifies a unique patient, and each patient can have multiple observations. I suspect this model is not fit correctly because fit.subgroup() has no argument for indicating the patient ID. The outcome is given as y = Surv(start.time, end.time, status). Are there any hints for fitting a survival model with counting-process data?

Double checking correctness of default behavior of propensity.func() when NULL

With the multiple treatments update in place, is the default behavior of propensity.func() within fit.subgroup() correct for its return of pi.x?

For instance, when n.trts == 2, all subjects get assigned the same value of mean.trt. However, when there are more, say n.trts == 3, then there are 3 different possible values assigned (each getting the mean of their respective category of membership).

This seems like a disconnect (adding one treatment arm, i.e. going from 2 to 3 treatments, but going from 1 to 3 unique values of pi.x).

Doesn't the generalized version already provide the right calculation when n.trts == 2? Could the following block be removed from propensity.func() without breaking anything?

    if (n.trts == 2)
    {
        mean.trt <- mean(trt == unique.trts[2L])
        propensity.func <- function(trt, x) rep(mean.trt, length(trt))
    } else

For reference, the entire block of code I'm referring to is this:

if (is.null(propensity.func))
{
    if (n.trts == 2)
    {
        mean.trt <- mean(trt == unique.trts[2L])
        propensity.func <- function(trt, x) rep(mean.trt, length(trt))
    } else
    {
        mean.trt <- numeric(n.trts)
        for (t in 1:n.trts)
        {
            mean.trt[t] <- mean(trt == unique.trts[t])
        }
        propensity.func <- function(trt, x)
        {
            pi.x <- numeric(length(trt))
            for (t in 1:n.trts)
            {
                which.t       <- trt == unique.trts[t]
                pi.x[which.t] <- mean(which.t)
            }

            pi.x
        }
    }
}
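
For a standalone illustration of the disconnect (this snippet only mimics the logic above; it does not call the package):

# two arms: every subject is assigned the single value mean(trt == second level)
trt2 <- c("A", "A", "B", "B", "B")
rep(mean(trt2 == unique(trt2)[2L]), length(trt2))
# -> 0.6 for every subject

# three arms: each subject is assigned the proportion of their own arm
trt3 <- c("A", "A", "B", "B", "B", "C")
sapply(trt3, function(t) mean(trt3 == t))
# -> A: 0.333, B: 0.5, C: 0.167 (three distinct values)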

Example datasets

It might be useful to include the data you simulate in the vignette as a dataset in your package. Then people can simply load it to use the vignette.
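
As a sketch of how that could look (the object and file names here are placeholders, not anything the package currently ships), the simulated objects would be saved under data/ and documented, after which users could load them with data():

# hypothetical: bundle the simulated x, y, trt from the vignette as a dataset
sim.data <- list(x = x, y = y, trt = trt)
save(sim.data, file = "data/sim.data.rda", compress = "xz")

# users could then simply run:
# data(sim.data, package = "personalized")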

update check.overlap function for multiple treatments

The check.overlap() function currently only works for the treatment/control setting. It will be more complicated for multiple treatments; we need to add checks that the propensity function returns a matrix, not a vector, in this case.

fixed handling of propensity scores for multiple treatments

The propensity score function should return a matrix where each column represents the probability of receiving a particular treatment. Should it have K-1 or K columns? K may be easier, but we need to make sure the required ordering of the columns with respect to the treatments is clear to the user.
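
For reference, one way such a multi-treatment propensity function could look is a multinomial glmnet fit that returns an n x K matrix of probabilities. This is only a sketch, and it assumes the columns are ordered/named by treatment level, which is exactly the contract this issue asks to make explicit:

library(glmnet)

# hypothetical propensity function for K treatments: returns an n x K matrix,
# one column of estimated probabilities per treatment level
prop.func.multi <- function(x, trt)
{
    propens.model <- cv.glmnet(y = trt, x = x, family = "multinomial")
    # predict() returns an n x K x 1 array; drop the third dimension
    pi.mat <- predict(propens.model, s = "lambda.min",
                      newx = x, type = "response")[, , 1]
    pi.mat   # columns are named by the levels of trt
}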

add hinge loss via kernlab package

Add a hinge loss option (for binary and continuous outcomes) via kernlab. This may be involved, as we will need to carefully specify the kernel so that observation weights can be incorporated.
