
future: Unified Parallel and Distributed Processing in R for Everyone

Introduction

The purpose of the future package is to provide a very simple and uniform way of evaluating R expressions asynchronously using various resources available to the user.

In programming, a future is an abstraction for a value that may be available at some point in the future. The state of a future can either be unresolved or resolved. As soon as it is resolved, the value is available instantaneously. If the value is queried while the future is still unresolved, the current process is blocked until the future is resolved. It is possible to check whether a future is resolved or not without blocking. Exactly how and when futures are resolved depends on what strategy is used to evaluate them. For instance, a future can be resolved using a sequential strategy, which means it is resolved in the current R session. Other strategies may be to resolve futures asynchronously, for instance, by evaluating expressions in parallel on the current machine or concurrently on a compute cluster.

Here is an example illustrating how the basics of futures work. First, consider the following code snippet that uses plain R code:

> v <- {
+   cat("Hello world!\n")
+   3.14
+ }
Hello world!
> v
[1] 3.14

The code assigns the value of an expression to the variable v and then prints v. Moreover, when the expression for v is evaluated, a message is also printed.

Here is the same code snippet modified to use futures instead:

> library(future)
> v %<-% {
+   cat("Hello world!\n")
+   3.14
+ }
> v
Hello world!
[1] 3.14

One difference is in how v is constructed: with plain R we use <-, whereas with futures we use %<-%. The other difference is that output is relayed only after the future is resolved and its value is queried, not while it is being evaluated (see Vignette 'Outputting Text').

So why are futures useful? Because we can choose to evaluate the future expression asynchronously, in a separate R process, simply by switching settings:

> library(future)
> plan(multisession)
> v %<-% {
+   cat("Hello world!\n")
+   3.14
+ }
> v
Hello world!
[1] 3.14

With asynchronous futures, the current/main R process does not block, which means it is available for further processing while the futures are being resolved in separate processes running in the background. In other words, futures provide a simple yet powerful construct for parallel and/or distributed processing in R.
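To make the non-blocking behavior concrete, here is a minimal sketch (assuming the future package is installed and at least two cores are available):

```r
library(future)
plan(multisession, workers = 2)

## The assignment returns immediately; the expression is
## resolved in a background R session
v %<-% {
  Sys.sleep(1)
  42
}

## Meanwhile, the main process is free to do other work
x <- sum(1:10)

## Accessing 'v' blocks until the future is resolved
stopifnot(v == 42, x == 55)
```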

Now, if you cannot be bothered to read all the nitty-gritty details about futures, but just want to try them out, then skip to the end to play with the Mandelbrot demo using both parallel and non-parallel evaluation.

Implicit or Explicit Futures

Futures can be created either implicitly or explicitly. In the introductory example above we used implicit futures created via the v %<-% { expr } construct. An alternative is explicit futures using the f <- future({ expr }) and v <- value(f) constructs. With these, our example could alternatively be written as:

> library(future)
> f <- future({
+   cat("Hello world!\n")
+   3.14
+ })
> v <- value(f)
Hello world!
> v
[1] 3.14

Either style of future construct works equally(*) well. The implicit style is most similar to how regular R code is written. In principle, all you have to do is replace <- with %<-% to turn an assignment into a future assignment. On the other hand, this simplicity can also be deceiving, particularly when asynchronous futures are used. In contrast, the explicit style makes it much clearer that futures are being used, which lowers the risk of mistakes and better communicates the design to others reading your code.

(*) There are cases where %<-% cannot be used without some (small) modifications. We will return to this in Section 'Constraints when using Implicit Futures' near the end of this document.

To summarize, for explicit futures, we use:

  • f <- future({ expr }) - creates a future
  • v <- value(f) - gets the value of the future (blocks if not yet resolved)

For implicit futures, we use:

  • v %<-% { expr } - creates a future and a promise to its value

To keep it simple, we will use the implicit style in the rest of this document, but everything discussed will also apply to explicit futures.

Controlling How Futures are Resolved

The future package implements the following types of futures:

Name          OSes                       Description
Synchronous (non-parallel):
  sequential  all                        sequentially, in the current R process
Asynchronous (parallel):
  multisession all                       background R sessions (on current machine)
  multicore   not Windows / not RStudio  forked R processes (on current machine)
  cluster     all                        external R sessions on current, local, and/or remote machines

The future package is designed such that support for additional strategies can be implemented as well. For instance, the future.callr package provides future backends that evaluate futures in background R processes using the callr package; they work similarly to multisession futures but have a few advantages. Similarly, the future.batchtools package provides futures for all types of cluster functions ("backends") that the batchtools package supports. Specifically, futures for evaluating R expressions via job schedulers such as Slurm, TORQUE/PBS, Oracle/Sun Grid Engine (SGE), and Load Sharing Facility (LSF) are also available.
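Switching to one of these third-party backends is just another call to plan(). A sketch, which falls back to multisession when future.callr is not installed:

```r
library(future)

## Use the callr backend when future.callr is installed;
## otherwise fall back to plain multisession futures
if (requireNamespace("future.callr", quietly = TRUE)) {
  plan(future.callr::callr)
} else {
  plan(multisession, workers = 2)
}

## Either way, the future is resolved in a separate R process
v %<-% { Sys.getpid() }
stopifnot(v != Sys.getpid())
```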

By default, future expressions are evaluated eagerly (= instantaneously) and synchronously (in the current R session). This evaluation strategy is referred to as "sequential". In this section, we will go through each of these strategies and discuss what they have in common and how they differ.

Consistent Behavior Across Futures

Before going through each of the different future strategies, it is probably helpful to clarify the objectives of the Future API (as defined by the future package). When programming with futures, it should not really matter what future strategy is used for executing code. This is because we cannot know what computational resources the user has access to, so the choice of evaluation strategy should be in the hands of the user, not the developer. In other words, the code should not make any assumptions about the type of futures used, e.g. synchronous or asynchronous.

One of the design goals of the Future API was to encapsulate any differences such that all types of futures appear to work the same, regardless of whether expressions are evaluated locally in the current R session or across the world in remote R sessions. Another obvious advantage of having a consistent API and behavior among different types of futures is that it helps while prototyping. Typically, one would use sequential evaluation while building up a script and, later, when the script is fully developed, turn on asynchronous processing.

Because of this, the defaults of the different strategies are such that the results and side effects of evaluating a future expression are as similar as possible. More specifically, the following is true for all futures:

  • All evaluation is done in a local environment (i.e. local({ expr })) so that assignments do not affect the calling environment. This is natural when evaluating in an external R process, but is also enforced when evaluating in the current R session.

  • When a future is constructed, global variables are identified. For asynchronous evaluation, globals are exported to the R process/session that will be evaluating the future expression. For sequential futures with lazy evaluation (lazy = TRUE), globals are "frozen" (cloned to a local environment of the future). Also, in order to protect against exporting too large objects by mistake, there is a built-in assertion that the total size of all globals is less than a given threshold (controllable via an option, cf. help("future.options")). If the threshold is exceeded, an informative error is thrown.

  • Future expressions are only evaluated once. As soon as the value (or an error) has been collected it will be available for all succeeding requests.

Here is an example illustrating that all assignments are done to a local environment:

> plan(sequential)
> a <- 1
> x %<-% {
+     a <- 2
+     2 * a
+ }
> x
[1] 4
> a
[1] 1
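The last bullet above, that futures are only evaluated once, can be illustrated with an explicit future (used here for clarity):

```r
library(future)
plan(sequential)

f <- future({
  Sys.time()  # timestamp taken when the future is resolved
})

t1 <- value(f)
Sys.sleep(0.5)
t2 <- value(f)  # returns the cached value; the expression is not re-evaluated

stopifnot(identical(t1, t2))
```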

Now we are ready to explore the different future strategies.

Synchronous Futures

Synchronous futures are resolved one after another and most commonly by the R process that creates them. When a synchronous future is being resolved it blocks the main process until resolved.

Sequential Futures

Sequential futures are the default unless otherwise specified. They were designed to behave as similar as possible to regular R evaluation while still fulfilling the Future API and its behaviors. Here is an example illustrating their properties:

> plan(sequential)
> pid <- Sys.getpid()
> pid
[1] 1437557
> a %<-% {
+     pid <- Sys.getpid()
+     cat("Future 'a' ...\n")
+     3.14
+ }
> b %<-% {
+     rm(pid)
+     cat("Future 'b' ...\n")
+     Sys.getpid()
+ }
> c %<-% {
+     cat("Future 'c' ...\n")
+     2 * a
+ }
Future 'a' ...
> b
Future 'b' ...
[1] 1437557
> c
Future 'c' ...
[1] 6.28
> a
[1] 3.14
> pid
[1] 1437557

Since eager sequential evaluation is taking place, each of the three futures is resolved the moment it is created. Note also how pid in the calling environment, which was assigned the process ID of the current process, is neither overwritten nor removed. This is because futures are evaluated in a local environment. Since synchronous (uni-)processing is used, future b is resolved by the main R process (still in a local environment), which is why the values of b and pid are the same.

Asynchronous Futures

Next, we will turn to asynchronous futures, which are futures that are resolved in the background. By design, these futures are non-blocking, that is, after being created the calling process is available for other tasks, including creating additional futures. It is only when the calling process tries to access the value of a future that is not yet resolved, or tries to create another asynchronous future when all available R processes are busy serving other futures, that it blocks.

Multisession Futures

We start with multisession futures because they are supported by all operating systems. A multisession future is evaluated in a background R session running on the same machine as the calling R process. Here is our example with multisession evaluation:

> plan(multisession)
> pid <- Sys.getpid()
> pid
[1] 1437557
> a %<-% {
+     pid <- Sys.getpid()
+     cat("Future 'a' ...\n")
+     3.14
+ }
> b %<-% {
+     rm(pid)
+     cat("Future 'b' ...\n")
+     Sys.getpid()
+ }
> c %<-% {
+     cat("Future 'c' ...\n")
+     2 * a
+ }
Future 'a' ...
> b
Future 'b' ...
[1] 1437616
> c
Future 'c' ...
[1] 6.28
> a
[1] 3.14
> pid
[1] 1437557

The first thing we observe is that the values of a, c and pid are the same as previously. However, we notice that b is different from before. This is because future b is evaluated in a different R process and therefore it returns a different process ID.

When multisession evaluation is used, the package launches a set of R sessions in the background that will serve multisession futures by evaluating their expressions as they are created. If all background sessions are busy serving other futures, the creation of the next multisession future is blocked until a background session becomes available again. The total number of background processes launched is decided by the value of availableCores(), e.g.

> availableCores()
mc.cores 
       2 

This particular result tells us that the mc.cores option was set such that we are allowed to use in total two (2) processes including the main process. In other words, with these settings, there will be two (2) background processes serving the multisession futures. The availableCores() function is also agile to different options and system environment variables. For instance, if compute cluster schedulers are used (e.g. TORQUE/PBS and Slurm), they set specific environment variables specifying the number of cores allotted to a given job; availableCores() acknowledges these as well. If nothing else is specified, all available cores on the machine will be utilized, cf. parallel::detectCores(). For more details, please see help("availableCores", package = "parallelly").
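The number of workers can also be set explicitly, which is useful when you do not want to occupy all cores on the machine. A sketch:

```r
library(future)

## Cap the number of background R sessions at two,
## regardless of what availableCores() reports
plan(multisession, workers = 2)

## nbrOfWorkers() reports how many futures can run concurrently
stopifnot(nbrOfWorkers() == 2)
```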

Multicore Futures

On operating systems where R supports forking of processes, which is basically all operating systems except Windows, an alternative to spawning R sessions in the background is to fork the existing R process. To use multicore futures, when supported, specify:

plan(multicore)

Just like for multisession futures, the maximum number of parallel processes running will be decided by availableCores(), since in both cases the evaluation is done on the local machine.

Forking an R process can be faster than working with a separate R session running in the background. One reason is that the overhead of exporting large globals to a background session can be avoided when forking, because the forked process shares memory with its parent. On the other hand, this shared memory is effectively read-only: any modification of a shared object by one of the forked processes ("workers") causes the operating system to make a copy. This can also happen when the R garbage collector runs in one of the forked processes.

On the other hand, process forking is also considered unstable in some R environments. For instance, when running R from within RStudio, process forking may result in crashed R sessions. Because of this, the future package disables multicore futures by default when running from RStudio. See help("supportsMulticore") for more details.
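A portable way to prefer forking where it is safe, while falling back to background sessions elsewhere, is to branch on supportsMulticore(), which is exported by the future package. A sketch:

```r
library(future)

## Fork when supported (e.g. not on Windows, not in RStudio);
## otherwise use background R sessions
if (supportsMulticore()) {
  plan(multicore)
} else {
  plan(multisession)
}

## Either way, code using futures is unchanged
stopifnot(inherits(plan(), "future"))
```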

Cluster Futures

Cluster futures evaluate expressions on an ad-hoc cluster (as implemented by the parallel package). For instance, assume you have access to three nodes n1, n2 and n3, you can then use these for asynchronous evaluation as:

> plan(cluster, workers = c("n1", "n2", "n3"))
> pid <- Sys.getpid()
> pid
[1] 1437557
> a %<-% {
+     pid <- Sys.getpid()
+     cat("Future 'a' ...\n")
+     3.14
+ }
> b %<-% {
+     rm(pid)
+     cat("Future 'b' ...\n")
+     Sys.getpid()
+ }
> c %<-% {
+     cat("Future 'c' ...\n")
+     2 * a
+ }
Future 'a' ...
> b
Future 'b' ...
[1] 1437715
> c
Future 'c' ...
[1] 6.28
> a
[1] 3.14
> pid
[1] 1437557

Any types of clusters that parallel::makeCluster() creates can be used for cluster futures. For instance, the above cluster can be explicitly set up as:

cl <- parallel::makeCluster(c("n1", "n2", "n3"))
plan(cluster, workers = cl)

Also, it is considered good style to shut down the cluster cl when it is no longer needed, by calling parallel::stopCluster(cl). That said, it will shut itself down if the main process is terminated. For more information on how to set up and manage such clusters, see help("makeCluster", package = "parallel"). Clusters created implicitly via plan(cluster, workers = hosts), where hosts is a character vector, will also be shut down when the main R session terminates, or when the future strategy is changed, e.g. by calling plan(sequential).

Note that with automatic authentication setup (e.g. SSH key pairs), there is nothing preventing us from using the same approach for using a cluster of remote machines.

If you want to run multiple workers on each node, replicate the node name as many times as the number of workers to run on that node. For example,

> plan(cluster, workers = c(rep("n1", times = 3), "n2", rep("n3", times = 5)))

will run three workers on n1, one on n2, and five on n3, in total nine parallel workers.

Nested Futures and Evaluation Topologies

This far we have discussed what can be referred to as "flat topology" of futures, that is, all futures are created in and assigned to the same environment. However, there is nothing stopping us from using a "nested topology" of futures, where one set of futures may, in turn, create another set of futures internally and so on.

For instance, here is an example of two "top" futures (a and b) that uses multisession evaluation and where the second future (b) in turn uses two internal futures:

> plan(multisession)
> pid <- Sys.getpid()
> a %<-% {
+     cat("Future 'a' ...\n")
+     Sys.getpid()
+ }
> b %<-% {
+     cat("Future 'b' ...\n")
+     b1 %<-% {
+         cat("Future 'b1' ...\n")
+         Sys.getpid()
+     }
+     b2 %<-% {
+         cat("Future 'b2' ...\n")
+         Sys.getpid()
+     }
+     c(b.pid = Sys.getpid(), b1.pid = b1, b2.pid = b2)
+ }
> pid
[1] 1437557
> a
Future 'a' ...
[1] 1437804
> b
Future 'b' ...
Future 'b1' ...
Future 'b2' ...
  b.pid  b1.pid  b2.pid 
1437805 1437805 1437805 

By inspecting the process IDs, we see that there are in total three different processes involved in resolving the futures. There is the main R process (pid 1437557), and there are the two processes used by a (pid 1437804) and b (pid 1437805). However, the two futures (b1 and b2) that are nested within b are evaluated by the same R process as b. This is because nested futures use sequential evaluation unless otherwise specified. There are a few reasons for this, but the main reason is that it protects us from spawning off a large number of background processes by mistake, e.g. via recursive calls.

To specify a different type of evaluation topology, other than the first level of futures being resolved by multisession evaluation and the second level by sequential evaluation, we can provide a list of evaluation strategies to plan(). First, the same evaluation strategies as above can be explicitly specified as:

plan(list(multisession, sequential))

We would actually get the same behavior if we try with multiple levels of multisession evaluations;

> plan(list(multisession, multisession))
[...]
> pid
[1] 1437557
> a
Future 'a' ...
[1] 1437901
> b
Future 'b' ...
Future 'b1' ...
Future 'b2' ...
  b.pid  b1.pid  b2.pid 
1437902 1437902 1437902 

The reason for this is, also here, to protect us from launching more processes than what the machine can support. Internally, this is done by setting mc.cores = 1 such that functions like parallel::mclapply() will fall back to run sequentially. This is the case for both multisession and multicore evaluation.
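We can see this effect by querying the number of workers available inside a future (a sketch using nbrOfWorkers(), which is exported by future; this reflects the documented mc.cores = 1 behavior):

```r
library(future)
plan(list(multisession, multisession))

## Inside the worker, the nested multisession level sees only a
## single core, so it effectively runs sequentially
f <- future({ future::nbrOfWorkers() })
n <- value(f)
stopifnot(n == 1)
```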

Continuing, if we start off by sequential evaluation and then use multisession evaluation for any nested futures, we get:

> plan(list(sequential, multisession))
[...]
> pid
[1] 1437557
> a
Future 'a' ...
[1] 1437557
> b
Future 'b' ...
Future 'b1' ...
Future 'b2' ...
  b.pid  b1.pid  b2.pid 
1437557 1438017 1438016 

which clearly shows that a and b are resolved in the calling process (pid 1437557), whereas the two nested futures (b1 and b2) are resolved in two separate R processes (pids 1438017 and 1438016).

Having said this, it is indeed possible to use nested multisession evaluation strategies, if we explicitly specify (read force) the number of cores available at each level. In order to do this we need to "tweak" the default settings, which can be done as follows:

> plan(list(tweak(multisession, workers = 2), tweak(multisession, 
+     workers = 2)))
[...]
> pid
[1] 1437557
> a
Future 'a' ...
[1] 1438105
> b
Future 'b' ...
Future 'b1' ...
Future 'b2' ...
  b.pid  b1.pid  b2.pid 
1438106 1438211 1438212 

First, we see that both a and b are resolved in different processes (pids 1438105 and 1438106) than the calling process (pid 1437557). Second, the two nested futures (b1 and b2) are resolved in yet two other R processes (pids 1438211 and 1438212).

For more details on working with nested futures and different evaluation strategies at each level, see Vignette 'Futures in R: Future Topologies'.

Checking A Future without Blocking

It is possible to check whether a future has been resolved or not without blocking. This can be done using the resolved(f) function, which takes an explicit future f as input. If we work with implicit futures (as in all the examples above), we can use f <- futureOf(a) to retrieve the explicit future underlying an implicit one. For example,

> plan(multisession)
> a %<-% {
+     cat("Future 'a' ...")
+     Sys.sleep(2)
+     cat("done\n")
+     Sys.getpid()
+ }
> cat("Waiting for 'a' to be resolved ...\n")
Waiting for 'a' to be resolved ...
> f <- futureOf(a)
> count <- 1
> while (!resolved(f)) {
+     cat(count, "\n")
+     Sys.sleep(0.2)
+     count <- count + 1
+ }
1 
2 
3 
4 
5 
6 
7 
8 
9 
10 
> cat("Waiting for 'a' to be resolved ... DONE\n")
Waiting for 'a' to be resolved ... DONE
> a
Future 'a' ...done
[1] 1438287

Failed Futures

Sometimes the future is not what you expected. If an error occurs while evaluating a future, the error is propagated and thrown as an error in the calling environment when the future value is requested. For example, if we use lazy evaluation on a future that generates an error, we might see something like

> plan(sequential)
> b <- "hello"
> a %<-% {
+     cat("Future 'a' ...\n")
+     log(b)
+ } %lazy% TRUE
> cat("Everything is still ok although we have created a future that will fail.\n")
Everything is still ok although we have created a future that will fail.
> a
Future 'a' ...
Error in log(b) : non-numeric argument to mathematical function

The error is thrown each time the value is requested, that is, trying to get the value again generates the same error (and output):

> a
Future 'a' ...
Error in log(b) : non-numeric argument to mathematical function
In addition: Warning message:
restarting interrupted promise evaluation
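If you want your script to continue even when a future failed, you can catch the error when the value is requested, just as with any other R error. A sketch using an explicit future:

```r
library(future)
plan(sequential)

b <- "hello"
f <- future({ log(b) })  # will fail: log() of a character value

## The error is captured and re-thrown by value(), so it can
## be handled with tryCatch() in the calling process
v <- tryCatch(value(f), error = function(e) NA_real_)
stopifnot(is.na(v))
```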

To see the last call in the call stack that gave the error, we can use the backtrace() function(*) on the future, i.e.

> backtrace(a)
[[1]]
log(b)

(*) The commonly used traceback() does not provide relevant information in the context of futures. Furthermore, it is unfortunately not possible to see the list of calls (evaluated expressions) that led up to the error; only the call that gave the error (this is due to a limitation in tryCatch() used internally).

Globals

Whenever an R expression is to be evaluated asynchronously (in parallel) or sequentially via lazy evaluation, global (aka "free") objects have to be identified and passed to the evaluator. They need to be captured exactly as they were at the time the future was created because, with lazy evaluation, globals may otherwise change between when the future is created and when it is resolved. For asynchronous processing, globals need to be identified so that they can be exported to the process that evaluates the future.

The future package tries to automate these tasks as far as possible. It does this with the help of the globals package, which uses static code inspection to identify global variables. If a global variable is identified, it is captured and made available to the evaluating process. Moreover, if a global is defined in a package, that global is not exported; instead, the corresponding package is attached when the future is evaluated. This not only better reflects the setup of the main R session, but also minimizes the need for exporting globals, which saves memory, time, and bandwidth, especially when using remote compute nodes.

Finally, it should be clarified that identifying globals from static code inspection alone is a challenging problem. There will always be corner cases where automatic identification of globals fails so that either false globals are identified (less of a concern) or some of the true globals are missing (which will result in a run-time error or possibly the wrong results). Vignette 'Futures in R: Common Issues with Solutions' provides examples of common cases and explains how to avoid them as well as how to help the package to identify globals or to ignore falsely identified globals. If that does not suffice, it is always possible to manually specify the global variables by their names (e.g. globals = c("a", "slow_sum")) or as name-value pairs (e.g. globals = list(a = 42, slow_sum = my_sum)).
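For instance, manually specifying globals for an explicit future could look like the following sketch, where a and slow_sum are hypothetical objects defined here for illustration:

```r
library(future)
plan(sequential)

a <- 42
slow_sum <- function(x) { Sys.sleep(0.01); sum(x) }

## Explicitly declare which globals to export, instead of
## relying on automatic identification
f <- future({ slow_sum(1:10) + a }, globals = c("a", "slow_sum"))
v <- value(f)
stopifnot(v == 97)
```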

Constraints when using Implicit Futures

There is one limitation with implicit futures that does not exist for explicit ones. Because an explicit future is just like any other object in R it can be assigned anywhere/to anything. For instance, we can create several of them in a loop and assign them to a list, e.g.

> plan(multisession)
> f <- list()
> for (ii in 1:3) {
+     f[[ii]] <- future({
+         Sys.getpid()
+     })
+ }
> v <- lapply(f, FUN = value)
> str(v)
List of 3
 $ : int 1438377
 $ : int 1438378
 $ : int 1438377

This is not possible to do when using implicit futures. This is because the %<-% assignment operator cannot be used in all cases where the regular <- assignment operator can be used. It can only be used to assign future values to environments (including the calling environment) much like how assign(name, value, envir) works. However, we can assign implicit futures to environments using named indices, e.g.

> plan(multisession)
> v <- new.env()
> for (name in c("a", "b", "c")) {
+     v[[name]] %<-% {
+         Sys.getpid()
+     }
+ }
> v <- as.list(v)
> str(v)
List of 3
 $ a: int 1438485
 $ b: int 1438486
 $ c: int 1438485

Here as.list(v) blocks until all futures in the environment v have been resolved. Then their values are collected and returned as a regular list.

If numeric indices are required, then list environments can be used. List environments, which are implemented by the listenv package, are regular environments with customized subsetting operators making it possible to index them much like how lists can be indexed. By using list environments where we otherwise would use lists, we can also assign implicit futures to list-like objects using numeric indices. For example,

> library(listenv)
> plan(multisession)
> v <- listenv()
> for (ii in 1:3) {
+     v[[ii]] %<-% {
+         Sys.getpid()
+     }
+ }
> v <- as.list(v)
> str(v)
List of 3
 $ : int 1438582
 $ : int 1438583
 $ : int 1438582

As previously, as.list(v) blocks until all futures are resolved.

Demos

To see a live illustration of how different types of futures are evaluated, run the Mandelbrot demo of this package. First, try sequential evaluation,

library(future)
plan(sequential)
demo("mandelbrot", package = "future", ask = FALSE)

which resembles how the script would run if futures were not used. Then, try multisession evaluation, which calculates the different Mandelbrot planes using parallel R processes running in the background. Try,

plan(multisession)
demo("mandelbrot", package = "future", ask = FALSE)

Finally, if you have access to multiple machines you can try to set up a cluster of workers and use them, e.g.

plan(cluster, workers = c("n2", "n5", "n6", "n6", "n9"))
demo("mandelbrot", package = "future", ask = FALSE)

Installation

R package future is available on CRAN and can be installed in R as:

install.packages("future")

Pre-release version

To install the pre-release version that is available in Git branch develop on GitHub, use:

remotes::install_github("HenrikBengtsson/future", ref="develop")

This will install the package from source.

Contributing

To contribute to this package, please see CONTRIBUTING.md.


future's Issues

Add multiprocess, which is multicore with multisession as a fallback

Add multiprocess, which is multicore with multisession as a fallback, such that one can suggest this regardless of platform, i.e.

plan(multiprocess)

Implementation would be something like:

multiprocess <- function(...) {
  fcn <- if (supportsMulticore()) multicore else multisession
  fcn(...)
}

Passing existing futures to new futures should resolve existing futures first

Background

The following does not work:

> library("future")
> plan("multisession")

> a <- future( 1L )
> b <- future( value(a) + 1 )
Warning message:
In serialize(data, node$con) :
  'package:future' may not be available when loading

> resolved(a)
[1] TRUE
> value(a)
[1] 1

> resolved(b)
[1] FALSE
> value(b)
[ stalls ]

This is not surprising. The reason is that a is a future that is evaluated in a background R session, and internally this future holds a socket connection to that background session. This future (formally Future) object is passed to the background session as is, but that session won't have an open connection to it.

Suggestion

  • Add protection against passing Future objects like this by mistake. In the future expression for b, variable a is detected as a global variable (using the machinery of globals) and passed/exported as such by the specific Future class. It is in the latter that we could add protection against exporting certain types of Future:s.
  • Another feature could be to block until detected "global" futures are resolved (using value(<future>)) and export a "vanilla" future with the value already set (e.g. ConstantFuture, but f <- eager(value(f)) would also do). This could probably be implemented in getGlobalsAndPackages().

(An even fancier solution would be to recreate the existing "global" future in the new future by extracting the future expression and its "frozen" globals (which currently are not stored). This is very unlikely to happen any time soon, but it's a neat idea.)

WISH: Add resolve() for efficiently resolving and retrieving values asynchronously

Consider a large number of futures like:

library("listenv")
library("future")

LONG_TIME <- 60
LARGE_NUMBER <- 1e6
create_large_data <- function(ii) {
  Sys.sleep(runif(1L, max=LONG_TIME))  ## Slow process
  rnorm(runif(1L)*LARGE_NUMBER)  ## Large data
}

x <- listenv()
for (ii in 1:10) {
  cat(sprintf("Future #%d\n", ii))
  x[[ii]] %<=% { create_large_data(ii) }
}

If the above futures are processed on a cluster, it may take some time to retrieve each of the future values because the values are large and they need to be serialized in order to be transferred back to the main process. This may take different amounts of time for different futures.

Now, if we use

y <- as.list(x)

to resolve and collect the values, we basically do so sequentially. In other words, x[[2]] won't be collected until x[[1]] is completed. Now, if future x[[2]] is already resolved but x[[1]] takes a long time to be evaluated, our main process is forced to be idle until x[[1]] is resolved.

It would be better to be able to start retrieving the value of future x[[2]] in the meanwhile. In order to do this, we need to query the futures to check whether they are resolved or not, and only start retrieving values for futures that are resolved. Something like:

resolve <- function(...) UseMethod("resolve")
resolve.listenv <- function(x, ..., sleep=1.0) {

  fs <- futureOf(envir=x, drop=TRUE)
  resolved <- logical(length(fs))
  while (!all(resolved)) {
    for (ii in which(!resolved)) {
      if (!resolved(fs[[ii]])) next
      ## Retrieve value (allow for errors)
      tryCatch({ value(fs[[ii]]) }, error = function(ex) {}) 
      resolved[ii] <- TRUE
    } # for (ii ...)

    ## Wait a bit before checking again
    if (!all(resolved)) Sys.sleep(sleep)
  } # while (...)

  ## Touch every element to trigger removal of internal future variable
  for (ii in seq_along(x)) force(x[[ii]])

  x
} ## resolve() for listenv

which we then can use as:

x <- resolve(x)
x <- as.list(x)

future::plan("eager") only works if future is attached

Setting a future strategy by a character string does not work unless the future package is attached. For example,

> future::plan("eager")
Error in future::plan("eager") : No such strategy for futures: 'eager'

gives an error, whereas the following works:

> library(future)
> future::plan("eager")
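
A possible fix is to look the string up in the future package's own namespace rather than on the search path. Below is a hedged sketch; strategy_by_name() and its pkg argument are hypothetical helpers, not part of the future API (the test uses pkg="base" only so the sketch is self-contained):

```r
## Sketch: map a strategy name to a function in a package namespace,
## so the package does not need to be attached.
strategy_by_name <- function(name, pkg = "future") {
  ns <- getNamespace(pkg)
  if (!exists(name, envir = ns, mode = "function", inherits = FALSE)) {
    stop("No such strategy for futures: ", sQuote(name))
  }
  get(name, envir = ns, mode = "function", inherits = FALSE)
}
```

Since getNamespace() loads the package if needed, this lookup works even when future is not attached.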

Internal future variable of environment remains after element is assigned the value

Internal future variable is not automagically removed when future is resolved, e.g.

> library("future")
> env <- new.env()
> env$a %<=% 1

## Resolve
> env$a
[1] 1

## Future is still in there
> futureOf(env$a)
<environment: 0x000000000ae3cb98>
attr(,"class")
[1] "EagerFuture" "Future"      "environment"

Ideally it should be removed, but can it be done?

WISH: Nested future strategies

It would be useful to be able to specify nested future strategies, e.g.

plan(list(A=multicore, B=lazy))

such that with

a %<=% {
  b %<=% { 1 }
  2*b
}

expression a is evaluated using a multicore future whereas expression b is evaluated using a lazy future. Just like,

a %<=% {
  b %<=% { 1 }  %plan% lazy
  2*b
} %plan% multicore

but without hardcoding it into the code.

This feature would be very useful for distributed processing on a cluster, where one layer of futures is distributed to cluster nodes, whereas the second layer uses multicore processing. One can also imagine similar multicore-multicore hierarchies.

A real-world use case is from bioinformatics, where one processes sample by sample, and each sample is in turn processed chromosome by chromosome. One can imagine using a list(samples=cluster, chromosomes=multicore) strategy.

Add await() for Future class

Add await() for the Future class, which should poll resolved() according to some timing scheme, e.g.

f <- future({
  ## Some long asynchronous calculation
  value
})
await(f)  ## Not really called by the user
v <- value(f)

Borrow from await() of async::AsyncTask.

So, why not use value()? That's actually the idea, but value() itself could use await() internally. This means that every implementation of a Future class doesn't have to implement pretty much the same await functionality. It is also nice to have consistent behavior for how the polling/waiting is done.
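
A minimal sketch of what such a shared await() could look like, assuming only that the future class provides resolved(); the interval and timeout parameters are hypothetical, not part of any existing API:

```r
## Poll resolved() with a growing sleep interval until the future is
## resolved or the (hypothetical) timeout is exceeded.
await <- function(future, interval = 0.1, timeout = Inf) {
  t0 <- Sys.time()
  while (!resolved(future)) {
    if (difftime(Sys.time(), t0, units = "secs") > timeout) {
      stop("await() timed out")
    }
    Sys.sleep(interval)
    interval <- min(2 * interval, 1.0)  ## cap the backoff at 1 second
  }
  invisible(future)
}
```

A value() method could then simply call await(future) before retrieving the result, keeping the polling behavior consistent across Future classes.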

TESTS: Run all tests with availableCores() = 1, 2, 3

Make sure to run all tests also on single-core environments, i.e. emulate availableCores() from 1 to 3. This is particularly important since there is some code that is conditional on availableCores() == 1.

The easiest is probably to use a for loop where we set option mc.cores accordingly, e.g.

oopts <- options(warn=1L, mc.cores=2L)

for (cores in 1:min(3L, availableCores())) {
  options(mc.cores=cores-1L)  ## By definition of mc.cores
[...]
} ## for (cores ...)

options(oopts)

The reason for min(3L, availableCores()) rather than just 1:3 is that on some test environments we only have access to a single core (e.g. AppVeyor CI) or two cores (e.g. Travis CI), and it's probably best not to try to overuse what we have (or should/could we?).

Warning in serialize(data, node$con) : 'package:<name>' may not be available when loading

Suppress the following warning:

> plan(cluster, cluster=cl)
> g %<=% capturePlot({ plot(iris) })
Warning in serialize(data, node$con) :
  'package:R.devices' may not be available when loading

Some tracing:

16: .signalSimpleWarning("'package:R.devices' may not be available when loading",
        quote(serialize(data, node$con)))
15: serialize(data, node$con)
14: sendData.SOCKnode(con, list(type = type, data = value, tag = tag))
13: sendData(con, list(type = type, data = value, tag = tag))
12: postNode(con, "EXEC", list(fun = fun, args = args, return = return,
        tag = tag))
11: sendCall(cl[[i]], fun, list(...))
10: clusterCall(cl, fun = lapply, X = packages, FUN = requirePackage)
9: run.ClusterFuture(future)

CONSISTENCY: Add globals=TRUE to eager() as well

Add globals=TRUE to eager() with the effect that it will try to identify global variables just like lazy() and multicore() do. This has the advantage of getting the same errors on "globals" regardless of how futures are resolved.

As a start, the identified globals don't have to be "exported", but they should be identified and be retrieved just as for lazy(..., globals=TRUE).

Capture and signal interrupts?

Interrupts (e.g. user interrupts/Ctrl-C, process termination, ...) may be signaled while a future is evaluated. Should the future package make sure such interrupts are automatically caught and recorded such that they are signaled when value() is called, cf. errors?

Details

Interrupts can be caught in R using tryCatch(..., interrupt=...), e.g.

res <- FALSE
res <- tryCatch({
  Sys.sleep(10)
  TRUE
}, interrupt = function(int) {
  str(int)
}, error = function(ex) {
  str(ex)
})

This is somewhat related to Issue #25, but since interrupts are quite different from warnings, I think the two ideas should be kept separate.

PS. Added this issue based on a note to myself from 2015-06-18.

WISH: Futurized *apply() functions

Should we add something like?

flapply <- function(x, FUN, ..., AS.LIST=FALSE) {
  res <- listenv()
  for (ii in seq_along(x)) {
    res[[ii]] %<=% FUN(x[[ii]], ...)
  }
  names(res) <- names(x)

  ## Test that 'x', 'FUN' and 'ii' are exported to future environment
  rm(list=c("x", "FUN", "ii"))

  ## Return listenv of list of futures - the latter blocks.
  if (AS.LIST) res <- as.list(res)

  res
}

to the future API?

BUG: Future assignments to extended listenvs give an error on non-existing elements

Future assignments to extended listenvs give an error on non-existing elements:

> library("listenv")
> library("future")
> plan(eager)

> x <- listenv(); length(x) <- 2
> x[[1]] %<=% 1
Error in get_variable.listenv(target$envir, target$idx, mustExist = TRUE,  :
  No such 'listenv' element: 1
> x[[2]] %<=% 2
Error in get_variable.listenv(target$envir, target$idx, mustExist = TRUE,  :
  No such 'listenv' element: 2

It seems to only affect the "appended" elements that are introduced by the length(x) <- n call. For example:

> x <- listenv(a=1); length(x) <- 2
> x[[1]] %<=% 1
> x[[2]] %<=% 2
Error in get_variable.listenv(target$envir, target$idx, mustExist = TRUE,  :
  No such 'listenv' element: 2

Note that regular assignments work and make the future assignment possible afterward, e.g.

> x[[2]] <- 2
> x[[2]] %<=% 2

It is also interesting to notice that future assignments to elements at positions beyond length(x) also work (as they should):

> x <- listenv(a=1); length(x) <- 2
> x[[1]] %<=% 1
> x[[2]] %<=% 2
Error in get_variable.listenv(target$envir, target$idx, mustExist = TRUE,  :
  No such 'listenv' element: 2
> x[[3]] %<=% 3

but not to the "expanded" elements if you first add one far out, e.g.

> x[[6]] %<=% 6
> x[[4]] %<=% 4
Error in get_variable.listenv(target$envir, target$idx, mustExist = TRUE,  :
  No such 'listenv' element: 4
> x[[5]] %<=% 5
Error in get_variable.listenv(target$envir, target$idx, mustExist = TRUE,  :
  No such 'listenv' element: 5
> x[[7]] %<=% 7
> sessionInfo()
R Under development (unstable) (2015-12-12 r69765)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

locale:
[1] LC_COLLATE=English_United States.1252
[2] LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] future_0.9.0  listenv_0.5.0

loaded via a namespace (and not attached):
[1] parallel_3.3.0   tools_3.3.0      codetools_0.2-14 globals_0.6.0

Package needs to be attached or `future::future()` needs to be imported

If using, say, %<=% internally in another package, it appears that the future() function of the future package needs to be imported as:

importFrom("future", "future")

The problem is most likely the future::futureAssign() function, which calls future() when the delayed/future assignment is resolved:

  ## Evaluate expression/value as a "future" and assign its value to
  ## a variable as a "promise".
  ## NOTE: We make sure to pass 'envir' in order for globals to
  ## be located properly.
  a <- b <- NULL; rm(list=c("a", "b")) ## To please R CMD check
  call <- substitute(future(a, envir=b), list(a=value, b=envir))
  future <- eval(call, envir=assign.env) 

plan(<pkg>::<strategy>) gives an error

The plan() function does not handle strategies that are specified as <pkg>::<strategy>. For instance, these work:

> library(future)
> plan(multicore)
> plan(multicore, maxCores=2L)

But not these:

> plan(future::multicore)
Error in plan(future::multicore) :
  Additional arguments to plan() must be named.

> plan(future::multicore, maxCores=2L)
Warning message:
In plan(future::multicore, maxCores = 2L) :
  Ignored 3 unknown arguments: 'maxCores', '', ''

"<anonymous>: ... may be used in an incorrect context" with plan(lazy)

Lazy futures may generate annoying "<anonymous>: ... may be used in an incorrect context" warnings, but they seem to be harmless.

Example

Assume:

library(future)

sum_F <- function(x, ...) {
  y %<=% sum(x, ...)
  y
}

x <- c(1:3, NA)

Then, if we use eager or multicore processing:

> sum(x)
[1] NA
> sum(x, na.rm=TRUE)
[1] 6

> plan(eager)
> sum_F(x)
[1] NA
> sum_F(x, na.rm=TRUE)
[1] 6

> plan(multicore)
> sum_F(x)
[1] NA
> sum_F(x, na.rm=TRUE)
[1] 6

everything is as expected. However, if we use lazy evaluation we get a warning:

> plan(lazy)
> sum_F(x)
[1] NA
Warning message:
<anonymous>: ... may be used in an incorrect context: 'sum(x, ...)'

> sum_F(x, na.rm=TRUE)
[1] 6
Warning message:
<anonymous>: ... may be used in an incorrect context: 'sum(x, ...)'

On the upside, the result is correct, which indicates that arguments passed via ... are indeed properly passed down.

How can we get rid of these warnings?

Lazy futures: Specify how/when globals are resolved?

Background / current status

## A global variable
a <- 0
f <- lazy({
  42 * a
})

## Since 'a' is a global variable in _lazy_ future 'f',
## which still hasn't been resolved, any changes to
## 'a' until 'f' is resolved, will affect its value.
a <- 1
v <- value(f)
print(v)
stopifnot(v == 42) 

Proposal

Should there be an option to specify when globals in lazy futures should be resolved, e.g.

a <- 0
f <- lazy({
  42 * a
}, globals="eager")
a <- 1
v <- value(f)
print(v)
stopifnot(v == 0) 

versus (current):

a <- 0
f <- lazy({
  42 * a
}, globals="lazy")
a <- 1
v <- value(f)
print(v)
stopifnot(v == 42) 

The purpose of this would mostly be to (i) illustrate the importance of understanding how globals are handled, and (ii) provide a lazy future mechanism that, as far as possible, emulates what happens when evaluating futures in external processes/R sessions.

To add this, we also need to add a dependency on the globals package.

IDEA: Add simplify(x) and values(..., simplify=TRUE)

Add simplify() for lists and list environments, e.g.

> x <- as.list(1:6)
> dim(x) <- c(2,3)
> dimnames(x) <- list(c("a", "b"), c("A", "B", "C"))
> x
  A B C
a 1 3 5
b 2 4 6
> str(x)
List of 6
 $ : int 1
 $ : int 2
 $ : int 3
 $ : int 4
 $ : int 5
 $ : int 6
 - attr(*, "dim")= int [1:2] 2 3
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:2] "a" "b"
  ..$ : chr [1:3] "A" "B" "C"

> y <- simplify(x)
> y
  A B C
a 1 3 5
b 2 4 6
> str(y)
 int [1:2, 1:3] 1 2 3 4 5 6
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:2] "a" "b"
  ..$ : chr [1:3] "A" "B" "C"

> x <- as.listenv(x)
> y <- simplify(x)
> y
  A B C
a 1 3 5
b 2 4 6
> str(y)
 int [1:2, 1:3] 1 2 3 4 5 6
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:2] "a" "b"
  ..$ : chr [1:3] "A" "B" "C"

Calling values(x, simplify=TRUE) should correspond to simplify(values(x)).

Prototype

simplify <- function(...) UseMethod("simplify")

simplify.list <- function(x, ...) {
  ns <- sapply(x, FUN=length)
  if (any(ns != 1)) return(x)
  y <- unlist(x)
  dim <- dim(x)
  if (!is.null(dim)) {
    dim(y) <- dim
    dimnames(y) <- dimnames(x)
  }
  y
}

## The listenv method is identical to the list method
simplify.listenv <- simplify.list

Add withPlan()

Add withPlan() to allow for temporarily setting which type of future to use, e.g.

withPlan({
  res <- a_function_using_futures(x=x, y=y)
}, strategy="eager")
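
A rough prototype, under the assumption that plan() called without arguments returns the current strategy (withPlan() itself is the wished-for function, not an existing one):

```r
## Hypothetical withPlan(): temporarily switch future strategy and
## restore the previous one when done, also on error.
withPlan <- function(expr, strategy, envir = parent.frame()) {
  expr <- substitute(expr)
  oplan <- plan()              ## query the current strategy
  on.exit(plan(oplan))         ## restore afterwards, also on error
  plan(strategy)
  eval(expr, envir = envir)
}
```

Using on.exit() guarantees the previous strategy is restored even if the wrapped expression throws an error.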

Add useFuture() instead of options(future=...)

Add useFuture() instead of options(future=...) with the advantage of immediate validation and lookup, e.g.

useFuture(lazy)
useFuture("lazy")
useFuture(eager)
useFuture(c("lazy", "eager")[2])

But then, how do we check which future is in use? And is there a better name than useFuture()?

DOCS: Clarify that future expressions often need curly brackets

Clarify that future assignments often need curly brackets. For instance,

> p %<=% 1 + 2
Error in p %<=% 1 + 2 : non-numeric argument to binary operator

gives an error because it is parsed as the future assignment p %<=% 1, to which 2 is then added. Hence the error. The solution is:

> p %<=% { 1 + 2 }
> p
[1] 3
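
The parse tree makes the problem visible: user-defined %op% operators bind more tightly than +, so R splits the expression at the +. This can be demonstrated with quote() alone, without the future package:

```r
## quote() shows how R parses the expression; %<=% need not be defined.
e <- quote(p %<=% 1 + 2)
## The top-level call is `+`, with the future assignment as its
## first argument:
identical(e[[1]], as.name("+"))      ## TRUE
identical(e[[2]], quote(p %<=% 1))   ## TRUE
```

Wrapping the right-hand side in { ... } forces the whole expression to become the operand of %<=%.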

Record the call to plan()

Record the call to plan(), e.g.

> plan(lazy)

> plan()
function (expr, envir = parent.frame(), substitute = TRUE, local = TRUE,
    ...)
{
    if (substitute)
        expr <- substitute(expr)
    if (local) {
        a <- NULL
        rm(list = "a")
        expr <- substitute(local(a), list(a = expr))
    }
    future <- LazyFuture()
    delayedAssign("value", eval(expr, envir = envir), assign.env = future)
    future
}
<environment: namespace:future>
attr(,"sys.call")
plan(lazy)

> str(fun)
function (expr, envir = parent.frame(), substitute = TRUE, local = TRUE,
    ...)
 - attr(*, "sys.call")= language plan(lazy)

WISH: It should be possible to adjust the number of assigned cores

2016-11-03: This issue was originally about 'availableCores("mc.cores") should return getOption("mc.cores") + 1L', but it recently turned into a more general discussion on how to maximize core utilization. See below.

future::availableCores("mc.cores") should really return getOption("mc.cores") + 1L, because from help('options'):

mc.cores:
an integer giving the maximum allowed number of additional R processes allowed to be run in parallel to the current R process. Defaults to the setting of the environment variable MC_CORES if set. Most applications which use this assume a limit of 2 if it is unset.

Further clarification: If multicore processing is not supported, it would effectively correspond to options(mc.cores = 0), in which case availableCores() returns 1.

exportGlobals() should make sure global actually lives in package before dropping

Internal exportGlobals() should make sure a global object actually lives in the package before dropping it. This would, for instance, avoid dropping FUN if it is assigned as FUN <- base::mean in, say, the global environment.

Currently, exportGlobals() is a bit naive and just trusts the environment of the object;

  pkgs <- packagesOf(globals)
  ## Drop all globals which are already part of one of
  ## the packages in 'pkgs'.  They will be available
  ## when those packages are attached.
  pkgsG <- sapply(globals, FUN=function(obj) {
    environmentName(environment(obj))
  })
  keep <- !is.element(pkgsG, pkgs)
  globals <- globals[keep]

As a start, it should make sure the name (from names(globals)) matches.

Related to: Issue HenrikBengtsson/globals#9
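
A sketch of such a name check (keep_global() is a hypothetical helper, not an existing function): a global is only dropped if the package namespace holds an identical object under the same name.

```r
## Only consider a global "part of the package" if the namespace has a
## binding with the same name that is identical to the object.
keep_global <- function(name, obj, pkg) {
  ns <- getNamespace(pkg)
  !(exists(name, envir = ns, inherits = FALSE) &&
      identical(get(name, envir = ns, inherits = FALSE), obj))
}
```

With this check, FUN <- base::mean would no longer be dropped, because base has no object named FUN, whereas a genuine package object such as mean would still be.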

DOCUMENTATION: Pros & cons

One advantage of using futures in a for loop compared to *apply() alternatives is that the future expression may make up only part of the loop body.

For instance, in the code below, (a) slow_calculation() will not be called if all items have already been processed, and (b) it will only be evaluated once:

res <- listenv()
a <- NULL
for (ii in 1:nbr_of_items) {
  if (is_done(ii)) next
  if (is.null(a)) a <- slow_calculation()
  res[[ii]] %<=% { process(ii, a=a) }
}
res <- as.list(res)

In order to achieve the same with an *apply() function, we need to split it up into at least two pieces and do a two-pass scan over the data, e.g.

res <- list()

all_done <- all(sapply(1:nbr_of_items, FUN=is_done))
if (!all_done) {
  a <- slow_calculation()
  res <- lapply(1:nbr_of_items, FUN=process, a=a)
}

DOCUMENTATION: Illustrate lazy evaluation with lazy futures

Illustrate in help and/or a vignette how lazy future assignments work hand in hand with lazy evaluation, e.g.

> library(future)
> plan(lazy)

> foo <- function(a, do=FALSE) { 
  cat("foo() ...\n")
  if (do) cat("a=", a, "\n", sep="")
}

# Create a future
> x %<=% { cat("Pow!\n"); 1 }

# Lazy evaluation
> foo(x, do=FALSE)
foo() ...

> foo(x, do=TRUE)
foo() ...
Pow!
a=1

> foo(x, do=TRUE)
foo() ...
a=1

FutureRegistry(..., action="collect-first") calls value() which may generate error

FutureRegistry(..., action="collect-first") calls value(). This will throw an error "in the wild" if the future evaluation generated an error. For example:

> library("future")
> plan("multicore", maxCores=2L)
> f <- future(stop())
> f <- future(stop())
> f <- future(stop())
Error in eval(expr, env) :

> traceback()
12: stop(value)
11: value.Future(future)
10: NextMethod("value")
9: value.MulticoreFuture(future)
8: value(future)
7: FutureRegistry("multicore", action = "collect-first")
6: await()
5: requestCore(await = function() FutureRegistry("multicore", action = "collect-first"))
4: run.MulticoreFuture(future)
3: run(future)
2: evaluator(expr, envir = envir, substitute = FALSE, ...)
1: future(stop())

Return warnings for futures with plan multicore

Consider:

v %<=% {Sys.sleep(3);warning("Warning");10} %plan% lazy
v
v %<=% {Sys.sleep(3);warning("No warning?");10} %plan% multicore
v

Unlike errors, warnings are not passed forward under the multicore plan.

Also, amazing package. I wish I'd discovered it sooner.

Globals that are local copies of package objects should not be dropped

With

library("future")
library("listenv")

flapply <- function(x, FUN, ...) {
  res <- listenv()
  for (ii in seq_along(x)) {
    res[[ii]] %<=% FUN(x[[ii]], ...)
  }
  names(res) <- names(x)

  ## Test that 'x', 'FUN' and 'ii' are exported to future environment
  rm(list=c("x", "FUN", "ii"))

  as.list(res)
}

the following gives an error because 'FUN' is incorrectly dropped internally by future:::exportGlobals():

> x <- list(a=1, b=1:2, c=1:3)
>
> plan(lazy)
> flapply(x, FUN=base::length)
Error in eval(expr, envir, enclos) : could not find function "FUN"

The long-term solution: HenrikBengtsson/globals#9 + robustness/assertion via #18.

plan() should return previous plan/strategy

plan(new) should return previous plan/strategy, i.e.

old <- plan(new)

Currently it returns the newly set strategy. If the latter is needed, one can always do

old <- plan(new)
curr <- plan()

afterwards.
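
The requested behavior is the same pattern that options() and par() use: setting a value returns the old one invisibly, so callers can restore it. A self-contained sketch of that protocol (make_setting() and plan2() are illustrative stand-ins, not the real API):

```r
## A closure that mimics the options()-style protocol: query with no
## arguments, set with one argument and get the old value back.
make_setting <- function(init) {
  current <- init
  function(new) {
    if (missing(new)) return(current)
    old <- current
    current <<- new
    invisible(old)
  }
}

plan2 <- make_setting("eager")
old <- plan2("lazy")   ## 'old' is "eager", so it can be restored later
plan2(old)             ## back to "eager"
```

This makes the common old <- plan(new); ...; plan(old) idiom work without an extra query.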

futureOf(envir=x, drop=FALSE) should preserve dimensions

Calling f <- futureOf(envir=x, drop=FALSE) returns all futures; elements that are not futures are assigned NA. If x is a list or a list environment, it might have dimensions (and corresponding dimnames) set. If so, then f should inherit the same, i.e. something like

if (is.null(expr) && !drop && !is.null(dim(x))) {
  dim(f) <- dim(x)
  dimnames(f) <- dimnames(x)
}

This will be useful for resolved(), cf. Issue #30.

Exceptions: Provide a more informative error message when background session is closed

Problem

Provide a more informative error message when the background session is closed. Now we get:

> library(future)
> plan("multisession")
> f <- future({ quit("no") })
> value(f)
Error in unserialize(node$con) : error reading from connection
> traceback()
6: unserialize(node$con)
5: recvData.SOCKnode(con)
4: recvData(con)
3: recvResult(cl)
2: value.ClusterFuture(f)
1: value(f)

Troubleshooting

This error occurs when trying to receive results from the cluster node:

  res <- recvResult(cl)

Suggestion

We could wrap the above in a tryCatch(..., error=...) call and provide a more informative error message that way. However, to really figure out that the remote end of the connection actually closed the connection is trickier. Looking at cl$con, there is no information saying it is closed.

WISH: Add message when future is resolved

Before I discovered future I wrote mcparallelDo. For almost every purpose I run into, future does a better job of obtaining the results I was seeking when I wrote mcparallelDo. The only feature I think mcparallelDo has that is (seemingly) absent from future is embodied in the verbose argument, i.e. the ability to notify a user in an interactive session when a multicore evaluation has completed. Personally, I find that feature awfully convenient.

Just glancing around briefly, it seems like registering a callback handler that watches resolved() could work, or perhaps something in the FutureRegistry could accomplish that?

Add argument recursive to resolve()

Add argument recursive to resolve() so that it is possible to resolve nested lists, nested environments, and so on. Argument recursive should take a non-negative number indicating how deep a recursion should be used. Argument recursive=TRUE should correspond to recursive=+Inf.

For instance,

  • resolve(x, recursive=0) will only resolve x, i.e. for anything to change x has to be a Future object.
  • resolve(x, recursive=1) will resolve x and any of its elements but nothing beyond. If x is a list or an environment, each of its elements will be resolved.
  • resolve(x, recursive=2) as resolve(x, recursive=1), plus in principle lapply(as.list(x), FUN=resolve, recursive=1).

This is related to Issue #49.
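
A depth-limited recursion could look like the sketch below; is_future() and resolve_one() are hypothetical stand-ins for the real checks, and, as with resolve(), the function is used for its blocking side effect rather than its return value:

```r
## Resolve 'x' and, up to 'recursive' levels deep, any futures found
## inside lists and environments. recursive=TRUE means unlimited depth.
resolve_recursive <- function(x, recursive = 0) {
  if (isTRUE(recursive)) recursive <- +Inf
  if (is_future(x)) resolve_one(x)               ## resolve 'x' itself
  if (recursive >= 1 && (is.list(x) || is.environment(x))) {
    elements <- if (is.environment(x)) as.list(x) else x
    for (el in elements) {
      resolve_recursive(el, recursive = recursive - 1)
    }
  }
  invisible(x)
}
```

With recursive=0 only a Future passed directly as x is resolved; each extra level descends one step further into lists and environments.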

Undetected global futures

Problem

The following works with cores=2L but not with cores=3L:

library("future")
plan(multisession, maxCores=cores)

env <- new.env()
env$a %<=% { 5 }
b %<=% { "a" }
str(b)
y %<=% { env[[b]] }
y
str(as.list(env))

With cores=3L it stalls.

Troubleshooting

Although both globals env and b are identified, and therefore resolved, in the last future assignment, the future env$a from the first future assignment is undetected. This is because env is an environment.

What happens is that the background process tries to access env$a; the future that is found, and that it tries to resolve, lives in the main process, which in turn has a value living in another background process, and two background processes cannot exchange futures.

It's not fully clear to me why it works with cores=2L:

  • It could be that all background sessions have already been consumed and the internals wait for things to be resolved (just as a manual resolve; see below). I would expect there to be an error in this background session, but then I would also expect that to propagate back. Why don't we see this?
  • It could also be that the future happens to end up in the same background R session and somehow it can be resolved because of this (less likely).

Workaround

Until env is automatically resolved, one can work around the above by explicitly resolving it first:

library("future")
plan(multisession, maxCores=3L)

env <- new.env()
env$a %<=% { 5 }
b %<=% { "a" }
str(b)
resolve(env, value=TRUE)
y %<=% { env[[b]] }
y
str(as.list(env))

For some reason, resolve(env, value=FALSE) (the default) is not enough.

Suggestion

Make sure env is automatically resolved when detected as a global and exported. One could take a conservative approach and resolve all environments recursively.

DOCUMENTATION: Mention futureOf(x) in vignette?

Should parts of the following from the outdated README of 'async' be included in the 'future' vignette?

Exception handling

If an error occurs during the evaluation of an asynchronous
expression, that error is thrown when the asynchronous value is
retrieved. For example:

> e %<=% { stop("Whoops!") }
> 1+2
[1] 3
> e
Error: BatchJobError: 'Error in eval(expr, envir = envir) : Whoops! '

This error is rethrown each time e is retrieved, so it is not
possible to "inspect" e any further using standard R functions such
as print() and str().
In order to troubleshoot an error, one can use the futureOf() function
to retrieve the underlying Future object, e.g.

> futureOf(e)
BatchJobsAsyncTask:
Expression:
  {
      stop("Whoops!")
  }
Status: 'error', 'started', 'submitted'
Error: 'Error in eval(expr, envir, enclos) : Whoops! '
Backend:
Job registry:  async2097362047
  Number of jobs:  1
  Files dir: /tmp/async/.tests/.async/async2097362047-files
  Work dir: /tmp/async/.tests
  Multiple result files: FALSE
  Seed: 35532256
  Required packages: BatchJobs
Cluster functions: 'Local'

cluster future: Include more details iff failing to attach needed package

Before "launching" a cluster future in an external R session, globals are exported and required packages are attached. If a package is not installed or otherwise fails to be attached in the external R session, we get an error:

Error in checkForRemoteErrors(lapply(cl, recvResult)) : 
  one node produced an error: there is no package called 'future'
Calls: fun ... run.ClusterFuture -> clusterCall -> checkForRemoteErrors
27: stop("one node produced an error: ", firstmsg, domain = NA)
26: checkForRemoteErrors(lapply(cl, recvResult))
25: clusterCall(cl, fun = lapply, X = packages, FUN = library, character.only = TRUE)
[...]

It would be nice to have more information about the node, what packages are installed and what the library paths are. So instead of a plain library(<pkg>), something like:

if (!require(pkg, character.only=TRUE)) {
  msg <- sprintf("Failed to attach package %s in %s", sQuote(pkg), R.version$version.string)
  data <- utils::installed.packages()

  ## Installed, but fails to load/attach?
  if (is.element(pkg, data[,"Package"])) {
    keep <- (data[,"Package"] == pkg)
    data <- data[keep,,drop=FALSE]
    pkgs <- sprintf("%s %s (in %s)", data[,"Package"], data[, "Version"], sQuote(data[,"LibPath"]))
    msg <- sprintf("%s, although the package is installed: %s", msg, paste(pkgs, collapse=", "))
    mdebug(msg)
    stop(msg)
  }

  paths <- .libPaths()
  msg <- sprintf("%s, because the package is not installed in any of the libraries (%s).", msg, paste(sQuote(paths), collapse=", "))

  pkgs <- sprintf("%s %s", data[,"Package"], data[, "Version"])
  msg <- sprintf("%s There are %d installed packages", msg, length(pkgs))

  ## Too much to also list the installed packages?!?
  # msg <- sprintf("%s: %s", msg, paste(pkgs, collapse=", "))

  mdebug(msg)
  stop(msg)
}

This would give an error message like:

Failed to attach package 'future' in R version 3.2.3 Patched (2016-01-12 r69941),
because the package is not installed in any of the libraries
 ('/home/henrik/R/x86_64-pc-linux-gnu-library/3.2',
'/home/shared/R/R-3.2.3patched-20160114/lib64/R/library').
There are 513 installed packages.

Add resolved() for lists, environments, and list environments

Add resolved() for lists, environments, and list environments, returning a logical vector of the same length/dimensions (and names), with elements FALSE, TRUE, and NA reflecting whether the corresponding element is an unresolved future, a resolved future, or not a future.

Example:

> x <- list()  ## or new.env() or listenv()
> x$a <- future(1)
> x$b <- future(2)
> resolved(x)
    a     b
FALSE  TRUE

but also

> x <- listenv()  ## or new.env()
> x$a %<=% { 1 }
> x$b %<=% { 2 }
> resolved(x)
    a     b
FALSE  TRUE

and

> x <- listenv()
> x$a %<=% { 1 }
> x$b %<=% { 2 }
> x[[4]] %<=% { 4 }
> dim(x) <- c(2,2)
> resolved(x)
      [,1]  [,2]
[1,] FALSE    NA
[2,]  TRUE FALSE

Should availableCores() return the first of a set or a minimum?

Currently, availableCores() defaults to return the first valid value of (in order):

  1. PBS_NUM_PPN
  2. mc.cores (and MC_CORES)
  3. parallel::detectCores()

An alternative is to have it return the minimum of all searched values. This would have the advantage that any of them can override the others by setting a smaller value. For instance, instead of, as today, forcing plan(eager) in a multicore future to prevent nested multicore processing by mistake, we could force oopts <- options(mc.cores=1L); on.exit(options(oopts));.
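
The "minimum wins" alternative could be sketched as below, where each source is NA when unset; the function name and arguments are illustrative only, not the actual availableCores() interface:

```r
## Take the smallest specified value among the searched settings,
## but never report fewer than 1 core.
min_available_cores <- function(pbs = NA_integer_,
                                mc_cores = NA_integer_,
                                detected = parallel::detectCores()) {
  values <- c(pbs, mc_cores, detected)
  values <- values[!is.na(values)]   ## drop unset sources
  max(1L, min(values))
}
```

For example, with PBS_NUM_PPN=4 and mc.cores=2 on an 8-core machine this would return 2, since the smallest specified limit wins.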
