
npdr's Introduction

Nearest-neighbor Projected-Distance Regression (NPDR)

Trang T. Le, Bryan A. Dawkins, and Brett A. McKinney. “Nearest-neighbor Projected-Distance Regression (NPDR) for detecting network interactions with adjustments for multiple tests and confounding,” Bioinformatics, Volume 36, Issue 9, May 2020, Pages 2770–2777 (free full text).

NPDR is a nearest-neighbor feature selection algorithm that fits a generalized linear model to projected distances of a given attribute over all pairs of instances in a neighborhood. In the NPDR model, the predictor is the distance between two neighbors projected onto the attribute dimension, and the outcome is the projected phenotype distance (for quantitative traits) or hit/miss status (for case/control) between all pairs of nearest-neighbor instances. NPDR can fit any combination of predictor data types (categorical or numeric) and outcome data types (case-control or quantitative), and it can adjust for covariates that may be confounding. As with STIR (STatistical Inference Relief), NPDR allows for the calculation of statistical significance of importance scores and adjustment for multiple testing.
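The core regression can be sketched in a few lines of base R. This is an illustrative toy (one attribute of interest, fixed-k neighborhoods, hit/miss outcome), not the package's API; all variable names below are hypothetical.

```r
# Minimal sketch of the NPDR idea for a case/control outcome (base R only;
# the real package handles neighborhood methods, all attributes, and
# multiple-testing adjustment). Names here are illustrative.
set.seed(1)
n <- 60
x <- matrix(rnorm(n * 2), ncol = 2)        # two numeric attributes
y <- rbinom(n, 1, plogis(x[, 1]))          # class depends on attribute 1

k <- 10
d <- as.matrix(dist(x, method = "manhattan"))  # instance-space distances
pairs <- do.call(rbind, lapply(seq_len(n), function(i) {
  nbrs <- order(d[i, ])[2:(k + 1)]             # k nearest neighbors of i (skip self)
  cbind(i = i, j = nbrs)
}))

# Predictor: distance projected onto one attribute; outcome: hit (0) / miss (1)
attr_diff <- abs(x[pairs[, "i"], 1] - x[pairs[, "j"], 1])
miss      <- as.integer(y[pairs[, "i"]] != y[pairs[, "j"]])

fit <- glm(miss ~ attr_diff, family = binomial)
summary(fit)$coefficients["attr_diff", ]   # beta and its p-value
```

For a single attribute, the coefficient on `attr_diff` and its p-value play the role of the NPDR importance score; the package repeats this fit for every attribute and then adjusts for multiple tests.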

Install

You can install the development version from GitHub with remotes:

# install.packages("remotes") # uncomment to install remotes
remotes::install_github("insilico/npdr")

library(npdr)
# data(package = "npdr")

Dependencies

To set fast.reg = TRUE, fast.dist = TRUE, or use.glmnet = TRUE, please install the speedglm, wordspace, and glmnet packages:

install.packages(c("speedglm", "wordspace", "glmnet"))

If an issue arises with updating openssl, try updating it on your own system, e.g. on macOS: brew install openssl.

Details

Relief-based methods are nearest-neighbor machine learning feature selection algorithms that compute the importance of attributes that may involve interactions in high-dimensional data. Previously we introduced STIR, which extended Relief-based methods to compute the statistical significance of attributes in case-control data by reformulating the Relief score as a pseudo t-test. Here we extend the statistical formalism of STIR to a generalized linear model (glm) formalism that handles quantitative and case-control outcome variables and any predictor data type (continuous or categorical), and that adjusts for covariates while computing the statistical significance of attributes.
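For a quantitative trait, the same idea becomes a gaussian model on pairwise phenotype distances, and a potential confounder enters as an extra projected-distance predictor. A minimal sketch with simulated pair-level data (all names hypothetical, not the package's API):

```r
# Sketch: quantitative outcome with covariate adjustment (illustrative names).
# Each row is a neighbor pair (i, j); the attribute's effect is tested while
# the covariate's projected distance absorbs potential confounding.
set.seed(2)
m <- 200                                    # number of neighbor pairs
attr_diff  <- abs(rnorm(m))                 # projected attribute distance
covar_diff <- abs(rnorm(m))                 # projected covariate distance
y_diff     <- 0.8 * attr_diff + 0.5 * covar_diff + abs(rnorm(m))  # phenotype distance

fit <- lm(y_diff ~ attr_diff + covar_diff)  # gaussian glm for a quantitative trait
summary(fit)$coefficients["attr_diff", c("Estimate", "Pr(>|t|)")]
```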

Contact

[email protected]


npdr's People

Contributors

brett-mckinney, trangdata


npdr's Issues

Add "batch" number as additional variable in regression

We discussed adding a "batch" variable to the individual regressions to alleviate some violation of the independence assumption (hence the term pseudo). For example, a diff between samples 3 and 2 would have 3 as the batch number (rule of thumb: take the first sample ID).

I thought about considering this variable as a random effect term, but the independence assumption there is not quite what we want. For instance, within the neighborhood of 3, these differences (e.g. 3-2, 3-5, 3-6) are independent. However, they may not be independent of other differences in a different neighborhood (e.g. 2-5). In short, we have within-neighborhood independence but not between-neighborhood (which a mixed model would correct for).

Maybe we should stick with the fixed-effects model and add the batch variable as a fixed-effect term.
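Under that proposal, the pair-level regression gains a fixed-effect factor for the batch (the first sample ID of each pair). A toy sketch, assuming a binomial hit/miss outcome (names hypothetical):

```r
# Sketch of the proposed fix: include the "batch" (first sample ID of each
# neighbor pair) as a fixed-effect factor in the pairwise regression.
set.seed(3)
m <- 150
batch     <- factor(sample(1:10, m, replace = TRUE))  # first sample ID per pair
attr_diff <- abs(rnorm(m))
miss      <- rbinom(m, 1, plogis(attr_diff - 1))

fit <- glm(miss ~ attr_diff + batch, family = binomial)
coef(summary(fit))["attr_diff", ]   # attribute effect, adjusted for batch
```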

Error in example code when running regular_nestedCV

I'm getting an error when trying to run the following in quantitative-trait-maineffect-simulation.R:

rncv.qtrait <- regular_nestedCV(train.ds = qtrait.data,
                                validation.ds = qtrait.3sets$validation,
                                label = "qtrait",
                                method.model = "classification",
                                is.simulated = TRUE,
                                ncv_folds = c(10, 10),
                                param.tune = FALSE,
                                learning_method = "rf",
                                importance.algorithm = "RReliefFequalK",
                                relief.k.method = "k_half_sigma",     # surf k
                                num_tree = 500,
                                verbose = FALSE)

The error:

Error in if (tmps < .Machine$double.eps^0.5) 0 else tmpm/tmps : 
  missing value where TRUE/FALSE needed

Perhaps we need to revisit the regular_nestedCV function?
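The message suggests the `if ()` condition itself received NA, e.g. an NA standard deviation from a degenerate fold (constant or all-missing values). A minimal reproduction, plus the kind of guard the function may need, assuming that is the cause:

```r
tmps <- sd(c(NA, NA))            # NA, e.g. a fold whose values are all missing
tmpm <- 1
# Reproduces the reported error: `if (NA < ...)` is neither TRUE nor FALSE.
res <- tryCatch(
  if (tmps < .Machine$double.eps^0.5) 0 else tmpm / tmps,
  error = function(e) conditionMessage(e)
)
res   # "missing value where TRUE/FALSE needed"

# A defensive version checks for NA before the comparison:
safe_scale <- function(tmpm, tmps) {
  if (is.na(tmps) || tmps < .Machine$double.eps^0.5) 0 else tmpm / tmps
}
safe_scale(1, NA)   # 0
```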

More refactoring

  • modularize the code (e.g. move simulation function elsewhere, break down functions with more than 500 lines of code, etc.)
  • create examples that work (hopefully independent of the data simulation step, move the examples from inst to vignettes)
