pacman's Issues

p_base

I'm confused about what p_base is supposed to do. Can you provide some explanation?

p_opendir

We use opening a directory in p_library which makes sense to me. However, opening a directory isn't really related to package management. Should we export p_opendir? It's a nice little function but we could also just keep it internal.

Also, while there isn't much harm in exporting it, I feel like a package like this should be focused and only include things that deal with the particular task at hand.

Add verbose argument

p_install and p_load are verbose. Add a verbose argument that suppresses these messages.

verbose = getOption("pac_verbose")

It would default to TRUE, allowing the user to set it to FALSE in their .Rprofile.
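
A rough sketch of the idea (the function body here is illustrative only, not pacman's actual code; install.packages already has a quiet argument we could map onto verbose):

p_install <- function(package, verbose = getOption("pac_verbose", TRUE), ...) {
    ## getOption() falls back to TRUE when the option isn't set in .Rprofile
    install.packages(package, quiet = !verbose, ...)
}

## Users who want quiet sessions could then put this in their .Rprofile:
## options(pac_verbose = FALSE)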

p_load

What is the purpose of offering the choice of using require or library? I'm just wondering if there is a special reason you are offering both.

Get packages not made for latest R release

Sometimes when I want to get a package that has not been updated for the latest release of R, I get an error that says, essentially, that the package isn't available for R version x. I can still go download the package from CRAN (as a zip in my case), install it, and it works fine.

I'd like this package (specifically p_get/p_install and p_load) to somehow still get those packages and install them, but throw a warning saying so.

I've thought about having R get the file as a tarball or zipball and installing that way (locally), but it seems like there should be an easier way.
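
For reference, the manual fallback I'm imagining looks roughly like this (the URL and file name are placeholders):

url  <- "http://cran.r-project.org/src/contrib/somepackage_1.0.tar.gz"
file <- file.path(tempdir(), basename(url))
download.file(url, file)
## repos = NULL installs from the local file rather than a repository
install.packages(file, repos = NULL, type = "source")
warning("Package was not built for this version of R; installed anyway.")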

Thoughts?

If this doesn't make sense I'll try to find a specific instance where this occurs (I can't think of an out-of-date package off the top of my head).

Create a p_reload function

Add a p_reload function that unloads a package and reloads it. This is helpful if you are working on a package, make a change, and install the package again. devtools already has a similar function.

It would be built out of p_unload and p_load and take multiple packages, as in the sketch below.
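
A minimal sketch of the core idea using base functions (the real thing would wrap p_unload and p_load instead):

p_reload <- function(...) {
    packages <- as.character(match.call(expand.dots = FALSE)[["..."]])
    for (p in packages) {
        ## detach quietly if the package is attached, then re-attach it
        try(detach(paste0("package:", p), unload = TRUE, character.only = TRUE),
            silent = TRUE)
        library(p, character.only = TRUE)
    }
    invisible(packages)
}

p_reload(MASS, lattice)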

This is a discussion starter to see if there's a need.

Don't auto detect in `p_install` and `p_load`

Currently p_load and p_install try to auto-detect the source. This slows things way down.

We can add a source argument or something like it (maybe type) that won't interfere with install.packages and install_github arguments (or maybe it's late and I'm talking in nonsensical circles).

Why export p_detectOS

Is there a reason to export p_detectOS? While it is useful internally, I don't see it as being package-specific or necessary for most package tasks a user may undertake.

Thoughts?

XML is now on CRAN

Previously we had planned for Windows users who could not get XML as a binary from CRAN. This is no longer the case; however, RCurl still requires a download from outside of CRAN.

Vignettes

Sometimes vignettes don't just take the name of the package. For instance, for xtable the vignette is actually called "xtableGallery":

vignette("xtable")
#Warning message:
#vignette ‘xtable’ not found 
vignette(package = "xtable")
vignette("xtableGallery")

If the user supplies a package name to p_vignette should we give them a list of available vignettes for the package?
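
Listing them is cheap, so a sketch of that fallback could be as simple as this (not the current p_vignette behavior):

## vignette(package = ...) returns a "packageIQR" object whose results matrix
## holds the vignette names and titles
vigns <- vignette(package = "xtable")$results
vigns[, c("Item", "Title")]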

Merge testing with master

I added a branch called 'testing'. This is where I might make some radical changes that may or may not break things. At the moment all I did was delete the (essentially) duplicate code files for functions and their aliases, keep only one of them (typically the function with the longer name), and add something at the bottom so that the alias still holds.

For example, instead of having both p_version.R and p_ver.R (which isn't good practice if we always want them to be the same, because if we change one but forget about the other things get messed up), I deleted p_ver.R and changed p_version.R to:

p_version <- function(package = "R") {
    ## Convert an unquoted package name to a character string
    x <- as.character(substitute(package))
    if (x %in% c("r", "R")) {
        ## Special case: return the version of R itself
        R.Version()[["version.string"]]
    } else {
        packageDescription(x)["Version"]
    }
}

## p_ver is kept as a simple alias
p_ver <- p_version

If this is alright with you I can merge those changes into the master branch. I already did some testing and it doesn't seem to break anything:

install_github("pacman", "trinker", "testing")

Note that I can do the merge if you don't feel comfortable with it, although it's not bad. You would just need to make sure that you're currently in the master branch (git branch) and then type git merge testing. Mainly I just wanted to let you know what was going on and to make sure you're alright with it.

http://gitref.org/branching/#merge

Description

Should we convert the URL for the package in the DESCRIPTION to our GitHub page, or would you prefer to keep it as your blog?

p_functions

I think it would be interesting if we offered an option to list ALL of the functions in a package (even the ones that aren't exported). I haven't put much thought into this yet but what do you think?

`p_functions`: default output not all found in output with `all = TRUE`

I was creating a unit test for p_functions and came across this problem. Not sure if it is even a problem that needs to be addressed, but I would expect the output when all = TRUE to contain all the functions from when all = FALSE. This is not true for testthat; the missing entries seem to be C-level calls:

testthat_funs <- p_functions(testthat)
all_testthat_funs <- p_functions(testthat, all = TRUE)

testthat_funs %in% all_testthat_funs

##  [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE
## [13]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [25]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [37]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [49]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [61]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE


testthat_funs

##  [1] ".__C__ListReporter"     ".__C__MinimalReporter"  ".__C__MultiReporter"   
##  [4] ".__C__Reporter"         ".__C__SilentReporter"   ".__C__StopReporter"    
##  [7] ".__C__SummaryReporter"  ".__C__TapReporter"      ".__C__TeamcityReporter"
## [10] "auto_test"              "auto_test_package"      "colourise"             
## [13] "context"                "equals"                 "evaluate_promise"      
## [16] "expect_equal"           "expect_equivalent"      "expect_error"          
## [19] "expect_false"           "expect_identical"       "expect_is"             
## [22] "expect_less_than"       "expect_match"           "expect_message"        
## [25] "expect_more_than"       "expect_named"           "expect_null"           
## [28] "expect_output"          "expect_that"            "expect_true"           
## [31] "expect_warning"         "expectation"            "fail"                  
## [34] "get_reporter"           "gives_warning"          "has_names"             
## [37] "is.expectation"         "is_a"                   "is_equivalent_to"      
## [40] "is_false"               "is_identical_to"        "is_less_than"          
## [43] "is_more_than"           "is_null"                "is_true"               
## [46] "library_if_available"   "ListReporter"           "make_expectation"      
## [49] "matches"                "MinimalReporter"        "MultiReporter"         
## [52] "not"                    "prints_text"            "Reporter"              
## [55] "set_reporter"           "shows_message"          "SilentReporter"        
## [58] "source_dir"             "StopReporter"           "SummaryReporter"       
## [61] "takes_less_than"        "TapReporter"            "TeamcityReporter"      
## [64] "test_check"             "test_dir"               "test_env"              
## [67] "test_file"              "test_package"           "test_that"             
## [70] "throws_error"           "watch"                  "with_reporter"     

Non CRAN repos

In the previous code for p_load we had this phrase:

if not it attempts to install the package from CRAN and/or any other repository

This indicates that we could install from non-CRAN repos. I don't know if this is true, and if it is, I don't know how it worked. I doubt it works any longer, but I may be wrong, as when I recoded p_load and p_install I didn't have this in mind. I suspect p_set_cranrepo may do this. It'd be cool if it did it automatically, as I did include this at line 130 of https://github.com/trinker/pacman/blob/master/R/p_install.R in the p_install_cran helper function.

This was after the last series of commits: b56d2f6

p_load throws warning with more than one repo

Dason's extensive work on pacman earlier today inspired me to move forward and report this problem. If you download a package you get:

Warning message:
In if (getOption("repos") == "@CRAN@") { :
  the condition has length > 1 and only the first element will be used

I think it can be traced back to:

> pacman:::p_set_cranrepo
function(){
    # If no repo is set then choose the main CRAN mirror
    # this way the user doesn't have to deal with the repo...
    if(getOption("repos") == "@CRAN@"){
        options(repos = "http://cran.r-project.org/")
    }

    # Add ripley's repos on windows
    if(p_detectOS() == "Windows"){
        if(is.na(options()$repo["CRANextra"])){
            options(repos = c(options()$repos, 
                             CRANextra = "http://www.stats.ox.ac.uk/pub/RWin"))
        }
    }
}
<environment: namespace:pacman>


> getOption("repos")
                               CRAN                            CRANextra 
        "http://cran.rstudio.com/" "http://www.stats.ox.ac.uk/pub/RWin" 

p_temp

This SO question brings up an interesting idea of temporary installs that we may want to investigate.

Getting buzz around pacman

I have been wondering if it would be better to (at some point) stop working on pacman in isolation and get some input from others.

Thoughts for promoting pacman and getting people here to try it out and test it:

You and I both have blogs (these blogs are linked to stats blogs and mine is linked to R-bloggers). By simply putting out a small mess-around tutorial as a blog post, we'll begin to reach people who will come test and provide feedback, ideas, requests, and issues.

I also put a page for pacman on my blog: LINK

So I guess the questions are:

  1. Is this something we should do, and if so, when?
  2. Any other promotion ideas?

add `character.only = FALSE`

The philosophy of allowing unquoted package names in most of the functions in pacman is getting us into trouble when we want to use them inside of other functions, as seen here:

FUN <- function(package) {
    p_loaded(package)
}

FUN(ggplot2)

package 
  FALSE 

library uses character.only = FALSE as an argument. We have to decide whether to:

  1. drop the ability to supply unquoted package names
  2. add character.only = FALSE (see the sketch below)
  3. have Dason come up with a super cool alternative
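
For discussion, a rough sketch of option 2 applied to a single function (not an agreed design):

p_loaded <- function(package, character.only = FALSE) {
    if (!character.only) {
        ## keep the no-quotes behavior as the default
        package <- as.character(substitute(package))
    }
    package %in% .packages()
}

FUN <- function(package) {
    p_loaded(package, character.only = TRUE)
}

FUN("ggplot2")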

p_install

Right now p_install only installs from CRAN. I'm thinking we could check CRAN, Bioconductor, and at least Brian Ripley's repository. This might eliminate the need for p_getXML.

Upload to CRAN (release date)

We will release pacman to CRAN on 2-1-2014. Before then we will:

  1. Tackle as many issues as possible (TR/DK)
  2. Remove the release date info from README.md (TR)
  3. Spell check the manual (TR)
  4. Test all examples (TR)
  5. Run CRAN checks on all 3 OSes with R 3.0.2 and R-devel
  6. Add \code{} around TRUE, FALSE and NULL in documentation (TR)

Add github.user argument to p_install family?

I was thinking about p_temp and how it might be nice to be able to temporarily install packages from GitHub. We would need to include some sort of github.user parameter to allow this (not to mention we would need to require devtools).

Would this be worthwhile to attempt?

p_getRipley not acting as anticipated

I attempted to use p_getRipley in an .onLoad function for qdap, as I figured this is how a developer might eventually utilize p_get. The attempt produces a warning and doesn't appear to put the packages in my library or load them.

Outside of .onLoad, p_getRipley works.

The .onLoad attempt:

.onLoad <- function(libname = find.package("qdap"), 
    pkgname = "qdap") {
    if (!"pacman" %in% .packages(all.available = TRUE)) {
        install.packages("devtools")
        library(devtools)
        install_github("pacman", "trinker")
    }
    if (!"openNLP" %in% .packages(all.available = TRUE)) {
        install.packages("openNLP", type = "source")
    }    
    require(pacman)
    if (!"XML" %in% .packages(all.available = TRUE)) {
        p_getRipley("XML")
    }
    if (!"XML" %in% .packages(all.available = TRUE)) {
        p_getRipley("RCurl")
    }    
    require(openNLP)
    require(XML)
    require(RCurl)
}

The Error Message

Installing github repo(s) qdap/master from trinker
Installing qdap.zip from https://github.com/trinker/qdap/zipball/master
Installing qdap
* checking for file 'C:\Users\trinker\AppData\Local\Temp\RtmpUDEYkL\trinker-qdap-07880e0/DESCRIPTION' ... OK
* preparing 'qdap':
* checking DESCRIPTION meta-information ... OK
* excluding invalid files
Subdirectory 'R' contains invalid file names:
  '.zzz.R'
* checking for LF line-endings in source and make files
* checking for empty or unneeded directories
* looking to see if a 'data/datalist' file should be added
* building 'qdap_0.1.0.tar.gz'

* installing *source* package 'qdap' ...
** R
** data
**  moving datasets to lazyload DB
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
*** arch - i386
*** arch - x64

* DONE (qdap)
Loading required package: qdap

Attaching package: 'qdap'

The following object(s) are masked _by_ '.GlobalEnv':

    delete, distTab, folder, hash, lookup, paste2, qview, replacer, strWrap, Trim

The following object(s) are masked from 'package:MASS':

    qda

The following object(s) are masked from 'package:tm':

    dissimilarity, stopwords

qdap downloaded and loaded
Warning messages:
1: In FUN(c("colorspace", "dichromat", "digest", "httr", "labeling",  :
  DESCRIPTION file of package 'RCurl' is missing or broken
2: In FUN(c("colorspace", "dichromat", "digest", "httr", "labeling",  :
  DESCRIPTION file of package 'RCurl' is missing or broken
3: In FUN(c("colorspace", "dichromat", "digest", "httr", "labeling",  :
  DESCRIPTION file of package 'RCurl' is missing or broken

When using p_loaded

> p_loaded()
 [1] "qdap"             "gridExtra"        "gplots"           "MASS"             "KernSmooth"       "caTools"         
 [7] "bitops"           "gdata"            "gtools"           "Snowball"         "openNLPmodels.en" "venneuler"       
[13] "rJava"            "wordcloud"        "RColorBrewer"     "Rcpp"             "tm"               "igraph"          
[19] "pacman"           "devtools"         "scales"           "ggplot2"         
Warning message:
In FUN(c("colorspace", "dichromat", "digest", "httr", "labeling",  :
  DESCRIPTION file of package 'RCurl' is missing or broken

Vignette for pacman

Would a vignette be something we would want to make at some point? I'm thinking it would be a good idea but I'm too lazy to make one right now.

I'm assigning this to Milestone 2.0

I figure once we feel we can release the package to the general public either via advertising or a release on CRAN we'll turn the version number to 1.0. Once we've cleaned it up more and added a vignette we'll turn to 2.0.

Unit Tests Fail in Travis-CI check

I'm not sure of the reason, but pacman fails CRAN checks (errors): https://travis-ci.org/trinker/pacman

Here's the error message:

  Running ‘testthat.R’ [7s/13s]
 ERROR
Running the tests in ‘tests/testthat.R’ failed.
Last 13 lines of output:
  1 element mismatch

  5. Failure(@test-load.R#26): p_load works for individual string inputs ---------
  p_load("MASS", "pacman") not equal to success
  1 element mismatch

  6. Failure(@test-load.R#31): p_load works for individual non-string inputs -----
  p_load(MASS, pacman) not equal to success
  1 element mismatch

  Error: Test failures
  In addition: There were 19 warnings (use warnings() to see them)
  Execution halted

I suspect it is because we use "pacman" in the tests.

p_load gives failed message but installs and can load fine

Example:

> p_delete("ggmap")
The following packages have been deleted:
ggmap 
> p_load(ggmap)
Installing package into '/home/dasonk/R/x86_64-pc-linux-gnu-library/3.0'
(as 'lib' is unspecified)
trying URL 'http://cran.r-project.org/src/contrib/ggmap_2.3.tar.gz'
Content type 'application/x-gzip' length 2929284 bytes (2.8 Mb)
opened URL
==================================================
downloaded 2.8 Mb

* installing *source* package 'ggmap' ...
** package 'ggmap' successfully unpacked and MD5 sums checked
** R
** data
*** moving datasets to lazyload DB
** inst
** preparing package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
* DONE (ggmap)

The downloaded source packages are in '/tmp/Rtmpp9oIOX/downloaded_packages'
Warning message:
In p_load(ggmap) : Failed to install/load:
ggmap
> p_load(ggmap)

negate parameter in p_unload

I haven't delved too far into p_unload yet, but is the point of negate really to unload all of the other packages except the ones indicated? Wouldn't that behavior be better suited to p_delete?

And I think p_delete might make better use of remove.packages as opposed to the current home-brewed solution.

p_exists gives locations of where a package exists

Per Dason's post yesterday regarding p_load searching CRAN, Bioconductor, and Ripley's repositories, I think it would be nice to have p_exists reflect this as well. Currently p_exists looks at CRAN or locally. It may be nice to be able to search locally as well as the three places named above for a package.

Not sure how the arguments would look. Right now I'm thinking it could take "local", "cran", "bioconductor", "ripley" or "all" as a response to the location argument. The first four would return a logical TRUE/FALSE, whereas "all" would return a named vector of TRUE/FALSE, or simply a vector of all the locations where the package exists.

p_exists would have to detect the OS and respond appropriately for Bioconductor and Ripley's repo (not sure if that's an issue). It already does this when searching CRAN, and locally is a moot point: if the package is already installed then OS detection is unnecessary.
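
A sketch of how the interface might look (argument handling and the non-CRAN checks are placeholders for discussion):

p_exists <- function(package, location = "all") {
    package <- as.character(substitute(package))
    checks <- c(
        local = package %in% .packages(all.available = TRUE),
        cran  = package %in% rownames(available.packages())
        ## bioconductor and ripley would be checked the same way, with
        ## available.packages(contriburl = contrib.url(<repo>)) for each repo
    )
    if (location == "all") checks else checks[[location]]
}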

p_install

Currently I don't see a p_install. Is all installation supposed to be done using p_load?

My thoughts were that at the very least we could make a p_install. We wouldn't have to export it if we want the user to use p_load even for installs, but it could simplify the p_load function. I think having a p_install would be beneficial because we could wrap the getXML convenience directly into p_install.

I think instead of having that convenience function we could just provide a couple of extra repositories (such as Dr. Ripley's page) that would take care of the RCurl and XML issues that Windows users have. Hopefully sometime this weekend I'll get around to testing some of this stuff on my Mac and Linux computers.

p_load has no way of controlling dependencies argument in install.packages

I think it would be helpful for p_load to be able to specify the dependencies argument of the install.packages function. I'd say the default should be TRUE, whereas I think it's NA in install.packages, but this is certainly open for discussion.

Right now the question is: should p_load allow for installing dependencies? We don't want bloat, but this seems like something people would definitely want to supply. Normally we could just use ... to pass extra arguments, but we've already used ... in the function.
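
A hedged sketch of what threading it through might look like (this is not the current p_load and omits the GitHub/zip handling):

p_load <- function(..., dependencies = TRUE) {
    packages <- as.character(match.call(expand.dots = FALSE)[["..."]])
    needed   <- packages[!packages %in% .packages(all.available = TRUE)]
    if (length(needed) > 0) {
        install.packages(needed, dependencies = dependencies)
    }
    invisible(sapply(packages, require, character.only = TRUE))
}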

Thoughts?

"Usage" with roxygen

Currently for the functions where we have aliases the usage section only displays the main function. For example in ?p_citation we see:

Usage

p_citation(package = "r")

But there isn't p_cite(package = "r")

I think it's possible to just add
@usage p_citation(package = "r")
@usage p_cite(package = "r")

but I'm wondering if there's a nicer way to automate this...
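
One possibility is leaning on roxygen2's @rdname, which merges the alias into the same Rd file and generates a usage line from the alias's formals (a sketch; the p_citation body here is just a placeholder):

#' Citation for a package
#'
#' @param package Name of the package (or "r" for R itself).
#' @export
p_citation <- function(package = "r") {
    citation(package = if (tolower(package) == "r") "base" else package)
}

#' @rdname p_citation
#' @export
p_cite <- p_citation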

p_vignette: Error for package with no vignette

I'd like to give a clearer error message for a package with no vignette. Currently the error looks like this:

> p_vignette(ggplot2)
Error in if (any(vidx)) { : missing value where TRUE/FALSE needed

Is there a way to see all available CRAN vignettes (not local packages with vignettes but CRAN ones) similar to available.packages? Then we could do something like:

x %in% CRAN.vignettes
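
Independent of the CRAN-wide question, a clearer local error could come from checking the vignette index first (just a sketch):

vigns <- vignette(package = "ggplot2")$results
if (nrow(vigns) == 0) {
    stop("No vignettes found for package 'ggplot2'")
}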

p_update

Why aren't we using update.packages in p_update?
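
At its simplest it could be a thin wrapper (a sketch; the default for ask is just a suggestion):

p_update <- function(ask = FALSE, ...) {
    update.packages(ask = ask, ...)
}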

Added is.global

I added an is.global function designed mostly to be used as a default argument in pacman functions to decide whether or not to print quietly.

See: ac8d2e5

Here's an example outlining intended usage:

FUN2 <- function(x = is.global(2)) x
FUN2()                              
FUN3 <- function() FUN2(); FUN3()   

Outcome

> FUN2()                              
[1] TRUE
> FUN3 <- function() FUN2(); FUN3()   
[1] FALSE

Discussion point

This breaks the p_xxx naming framework but makes sense to me given R's historical naming conventions.

_Options:_

  1. Get rid of function and just default to TRUE for quiet
  2. Rename to something like p_is.global
  3. Keep as is.global

p_data display stinks

I hate the way p_data is currently displayed. I'd like to alter it so it doesn't print to a static window but to the console, so the result can be saved as an object. Something like this seems more appropriate:

dat <- data.frame(data(package="ggplot2")[["results"]][, -c(1:2)])
colnames(dat) <- c("data", "description")
dat  

Improve `p_citation`

Currently p_citation uses capture.output to grab just the BibTeX or text version of a citation (see the current implementation at the bottom). There are utils tools to do this, as in:

utils:::print.bibentry(citation("psych"), style = "text")
utils:::print.bibentry(citation("psych"), style = "Bibtex")

Incorporate this into p_citation.

p_citation <- function(package = "r", copy2clip = interactive(), 
    tex = getOption("pac_tex"), ...) {

    ## Try to get tex options otherwise default to TRUE
    tex <- ifelse(is.null(tex), TRUE, tex)

    ## check if package is an object
    if(!object_check(package)){
        package <- as.character(substitute(package))
    }

    if(package %in% c("R", "r")){
        # To cite R we need to use package = "base"
        package <- "base"
    }

    ## nobib defaults to FALSE and is set to TRUE
    ## below if no BibTeX entry is found
    nobib <- FALSE

    if(copy2clip){

        ## Grab citation to optionally manipulate 
        ## and copy to clipboard
        out <- capture.output(citation(package = package, ...))

        ## check for BiBTex Entry
        loc <- grep("BibTeX entry for LaTeX", out)
        if (identical(loc, integer(0))) {
            if (isTRUE(tex)) warning("No BibTex entry found")
            tex <- FALSE
            loc <- length(out)
            nobib <- TRUE
        } else {
            ## Remove stuff after last closed curly brace
            out <- out[1:tail(which(grepl("}", out)), 1)]
        }

        if (!is.na(tex)) {

            if (isTRUE(tex)) {  
                ## Grab only the bibtex portion
                locs <- (1 + loc):length(out)
            } else {

                ## Remove the `To cite in publications, please use:` 
                grab <- seq_along(out) != grep("To cite|in publications", out)
                out <- out[grab]

                if (!nobib) {
                    ## Grab only the standard portion
                    loc <- grep("BibTeX entry for LaTeX", out) 
                }
                locs <- 1:(loc - 1)
            }

            out <- out[locs]

            ## Remove blanks ("") at the beginning and end of the vector
            nonblanks <- which(out != "")
            out <- out[head(nonblanks, 1): tail(nonblanks, 1)]
            out <- paste(substring(out, 3), collapse="\n")
        }
        writeToClipboard(out)            
    }   
    citation(package = package, ...)
}
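
For comparison, a minimal sketch of getting both versions without capture.output, assuming the format() and toBibtex() methods for bibentry objects:

cit <- citation(package = "psych")
txt <- paste(format(cit, style = "text"), collapse = "\n")      # plain-text version
bib <- paste(toBibtex(cit), collapse = "\n")                    # BibTeX version
## either string could then be passed to writeToClipboard() as before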

p_load with tar.gz

Using p_load with a tar.gz results in a warning. So, for instance, using p_load(pacman_0.2.0.tar.gz) results in:

Warning message:
In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE,  :
  there is no package called ‘pacman_0.2.0.tar.gz’

This results in another warning:

Warning message:
In p_load(pacman_0.2.0.tar.gz) : Failed to install/load:
pacman_0.2.0.tar.gz

This is caused by pacman:::p_load_single's:

return(suppressMessages(require(package, character.only = TRUE)))

This is sensible because here package would be "pacman_0.2.0.tar.gz". Not sure how we want to handle this. We'd need to detect the tar.gz or zip extension again here and strip it and the version off. The logic may be to break at the last underscore and take the first element, as in the sketch below. I don't want to approach this by myself and do something silly. Thoughts?
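
A sketch of that stripping logic (the helper name is hypothetical, not part of pacman):

strip_pkg_name <- function(file) {
    base <- basename(file)
    base <- sub("\\.(tar\\.gz|tgz|zip)$", "", base)   # drop the file extension
    sub("_[^_]*$", "", base)                          # drop the version after the last underscore
}

strip_pkg_name("pacman_0.2.0.tar.gz")   # "pacman"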

add clipboard capability to `p_citation`

I use p_citation a bit and would like to add:

  1. the ability to copy the citation to the system's clipboard (not available for Linux)

If there's no objection I'll do this as it's fairly straight forward.

Friend can't install pacman

Not sure if this is a pacman problem or my friend's setup. He tried to install pacman and got an error:

Installing github repo(s) pacman/master from trinker
Installing pacman.zip from https://github.com/trinker/pacman/zipball/master
Installing pacman
* checking for file 'C:\Documents and Settings\amaloe\Local Settings\Temp\RtmpgbxVj8\trinker-pacman-0a24465/DESCRIPTION' ... OK
* preparing 'pacman':
* checking DESCRIPTION meta-information ... OK
* checking for LF line-endings in source and make files
* checking for empty or unneeded directories
* building 'pacman_0.2.0.tar.gz'
 ERROR
packaging into .tar.gz failed
Error: Command failed (1)
In addition: Warning message:
running command '"C:/PROGRA~1/R/R-215~1.1/bin/i386/R" CMD build "C:\Documents and Settings\amaloe\Local Settings\Temp\RtmpgbxVj8\trinker-pacman-0a24465" --no-manual --no-vignettes' had status 1 

Any ideas why? How can I help him? Need more info?

He's running Windows XP with R 2.15.1.

p_loaded can take package name

Currently p_loaded takes no package arguments and returns the packages loaded or optionally the base packages loaded as well.

I propose that this function take a package name and return a logical TRUE/FALSE for whether the package is loaded. If the package argument is empty then the function returns all loaded packages, as it does currently.
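
A sketch of the proposed behavior (not the current function):

p_loaded <- function(package) {
    attached <- .packages()                 # currently attached packages
    if (missing(package)) {
        return(attached)
    }
    as.character(substitute(package)) %in% attached
}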

If Dason agrees this is a good idea I'll take care of this.

p_load not installing from CRAN consistently

I wanted to see how slow p_load was at installing and loading packages when they are the targeted (CRAN) packages, so I picked 10 random packages (well, they were at the beginning of the alphabet) and installed 10 I didn't have. Here are the results:

> p_load(matlab)

pacman loaded the following:
matlab
> tic()
> 
> p_load(apcluster, appell, aroma.cn, ascrda, ash, wsMethods, BAS, BayesComm, bayesDem, BayesSingleSub)
trying URL 'http://cran.rstudio.com/bin/windows/contrib/3.0/apcluster_1.3.2.zip'
Content type 'application/zip' length 1585428 bytes (1.5 Mb)
opened URL
downloaded 1.5 Mb

package ‘apcluster’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
        C:\Users\trinker\AppData\Local\Temp\RtmpSWweU9\downloaded_packages
trying URL 'http://cran.rstudio.com/bin/windows/contrib/3.0/appell_0.0-4.zip'
Content type 'application/zip' length 164269 bytes (160 Kb)
opened URL
downloaded 160 Kb

package ‘appell’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
        C:\Users\trinker\AppData\Local\Temp\RtmpSWweU9\downloaded_packages
trying URL 'http://cran.rstudio.com/bin/windows/contrib/3.0/ash_1.0-14.zip'
Content type 'application/zip' length 35913 bytes (35 Kb)
opened URL
downloaded 35 Kb

package ‘ash’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
        C:\Users\trinker\AppData\Local\Temp\RtmpSWweU9\downloaded_packages

GitHub package: This may take a few moments...
Installing github repo wsMethods/master from trinker
Downloading wsMethods.zip from https://github.com/trinker/wsMethods/archive/master.zip
Error : client error: (406) Not Acceptable
trying URL 'http://cran.rstudio.com/bin/windows/contrib/3.0/BAS_1.0.zip'
Content type 'application/zip' length 291465 bytes (284 Kb)
opened URL
downloaded 284 Kb

package ‘BAS’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
        C:\Users\trinker\AppData\Local\Temp\RtmpSWweU9\downloaded_packages
Loading required package: apcluster
Loading required package: Rcpp

Attaching package: ‘apcluster’

The following object is masked from ‘package:stats’:

    heatmap

Loading required package: appell
Loading required package: ascrda
Loading required package: ash
Loading required package: BAS
Loading required package: MASS
Loading required package: BayesComm
Loading required package: bayesDem
Loading required package: BayesSingleSub


pacman loaded the following:
apcluster, appell, ash, BAS
Warning messages:
1: In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE,  :
  there is no package called ‘ascrda’
2: In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE,  :
  there is no package called ‘BayesComm’
3: In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE,  :
  there is no package called ‘bayesDem’
4: In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE,  :
  there is no package called ‘BayesSingleSub’
5: In p_load(apcluster, appell, aroma.cn, ascrda, ash, wsMethods, BAS,  : 

Failed to install the following:
aroma.cn, wsMethods
6: In p_load(apcluster, appell, aroma.cn, ascrda, ash, wsMethods, BAS,  : 

Failed to load the following:
ascrda, BayesComm, bayesDem, BayesSingleSub
> 
> toc()
elapsed time is 29.780000 seconds 

Is it moving too fast and does it need a system sleep? What's going on? Mostly I'm documenting this behavior. If/when Dason looks under the hood this may go away.

Also oddly...

> p_iscran('BayesComm')
BayesComm 
     TRUE 

p_zip

I just noticed p_zip. I'm not sure if I like having that as a separate function. Having a non-exported function called p_zip that takes care of installing zip/tar.gz files would be fine with me, but to stay in the spirit of the package I think we would just want p_install and/or p_load to take care of installing from a .zip automatically.

Documentation

Documentation would be more easily maintained if roxygen were used. This is mainly a note so that I can close this issue once I get around to it.

Undoing a Commit


I accidentally committed something without pulling from github first. Now I can't push or pull from github locally. How can I fix this?
