
doazureparallel's Introduction


This repo is no longer maintained and no new features will be added.

doAzureParallel

Introduction

The doAzureParallel package is a parallel backend for the widely popular foreach package. With doAzureParallel, each iteration of the foreach loop runs in parallel on an Azure Virtual Machine (VM), allowing users to scale up their R jobs to tens or hundreds of machines.

doAzureParallel is built to support the foreach parallel computing package. The foreach package supports parallel execution by dispatching work to a registered parallel backend. With just a few lines of code, the doAzureParallel package lets you create a cluster in Azure, register it as a parallel backend, and seamlessly connect it to the foreach package.

NOTE: The terms pool and cluster are used interchangeably throughout this document.

Notable Features

  • Ability to use low-priority VMs for an 80% discount (link)
  • Users can bring their own Docker Image
  • AAD and VNets Support
  • Built in support for Azure Blob Storage

Dependencies

  • R (>= 3.3.1)
  • httr (>= 1.2.1)
  • rjson (>= 0.2.15)
  • RCurl (>= 1.95-4.8)
  • digest (>= 0.6.9)
  • foreach (>= 1.4.3)
  • iterators (>= 1.0.8)
  • bitops (>= 1.0.5)

Setup

  1. Install doAzureParallel directly from GitHub.
# install the package devtools
install.packages("devtools")

# install the rAzureBatch and doAzureParallel packages
devtools::install_github("Azure/rAzureBatch")
devtools::install_github("Azure/doAzureParallel")
  2. Create a doAzureParallel credentials file
library(doAzureParallel)
generateCredentialsConfig("credentials.json")
  3. Log in or register for an Azure account, then navigate to the Azure Cloud Shell
wget -q https://raw.githubusercontent.com/Azure/doAzureParallel/master/account_setup.sh &&
chmod 755 account_setup.sh &&
/bin/bash account_setup.sh
  4. Follow the on-screen prompts to create the necessary Azure resources and copy the output into your credentials file. For more information, see Getting Started Scripts.


Getting Started

Import the package

library(doAzureParallel)

Set up your parallel backend with Azure. This is your set of Azure VMs.

# 1. Generate your credential and cluster configuration files.  
generateClusterConfig("cluster.json")
generateCredentialsConfig("credentials.json")

# 2. Fill out your credential config and cluster config files.
# Enter your Azure Batch Account & Azure Storage keys/account-info into your credential config ("credentials.json") and configure your cluster in your cluster config ("cluster.json")

# 3. Set your credentials - you need to give the R session your credentials to interact with Azure
setCredentials("credentials.json")

# 4. Register the pool. This will create a new pool if your pool hasn't already been provisioned.
cluster <- makeCluster("cluster.json")

# 5. Register the pool as your parallel backend
registerDoAzureParallel(cluster)

# 6. Check that your parallel backend has been registered
getDoParWorkers()

Run your parallel foreach loop with the %dopar% keyword. The foreach function will return the results of your parallel code.

number_of_iterations <- 10
results <- foreach(i = 1:number_of_iterations) %dopar% {
  # This code is executed, in parallel, across your cluster.
  myAlgorithm()
}
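
Standard foreach options work unchanged with this backend. For instance, a minimal illustration using .combine to collapse per-iteration results into a single vector (sqrt stands in for real work):

# combine each iteration's result into one numeric vector
results_vec <- foreach(i = 1:10, .combine = c) %dopar% {
  sqrt(i)  # placeholder for real per-iteration work
}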

After you finish running your R code in Azure, you may want to shut down your cluster of VMs so that you are no longer charged for them.

# shut down your pool
stopCluster(cluster)

Table of Contents

This section will provide information about how Azure works, how best to take advantage of Azure, and best practices when using the doAzureParallel package.

  1. Azure Introduction (link)

    Using Azure Batch

  2. Getting Started (link)

    Using the Getting Started scripts to create credentials

    i. Generate Credentials Script (link)

    • Pre-built bash script for getting Azure credentials without Azure Portal

    ii. National Cloud Support (link)

    • How to run workloads in Azure national clouds
  3. Customize Cluster (link)

    Setting up your cluster for your specific needs

    i. Virtual Machine Sizes (link)

    • How do you choose the best VM type/size for your workload?

    ii. Autoscale (link)

    • Automatically scale up/down your cluster to save time and/or money.

    iii. Building Containers (link)

    • Creating your own Docker containers for reproducibility
  4. Managing Cluster (link)

    Managing your cluster's lifespan

  5. Customize Job

    Setting up your job for your specific needs

    i. Asynchronous Jobs (link)

    • Best practices for managing long running jobs

    ii. Foreach Azure Options (link)

    • Use Azure package-defined foreach options to improve performance and user experience

    iii. Error Handling (link)

    • How Azure handles errors in your foreach loop
  6. Package Management (link)

    Best practices for managing your R packages in code. This includes installation at the cluster or job level as well as how to use different package providers.

  7. Storage Management

    i. Distributing your Data (link)

    • Best practices and limitations for working with distributed data.

    ii. Persistent Storage (link)

    • Taking advantage of persistent storage for long-running jobs

    iii. Accessing Azure Storage through R (link)

    • Manage your Azure Storage files via R
  8. Performance Tuning (link)

    Best practices for optimizing your foreach loop

  9. Debugging and Troubleshooting (link)

    Best practices on diagnosing common issues

  10. Azure Limitations (link)

    Learn about the limitations around the size of your cluster and the number of foreach jobs you can run in Azure.

Additional Documentation

Read our FAQ for known issues and common questions.

Next Steps

For more information, please visit our documentation.

doazureparallel's People

Contributors

angusrtaylor, ax42, brnleehng, cauldnz, daanknoope, dtenenba, dustindall, gopitk, grayskripko, jiata, microsoftopensource, msftgits, paselem, ronomal, sorenvind, zfengms


doazureparallel's Issues

Node rebooting

Nodes can end up broken after running doAzureParallel::makeCluster, with the state "Start task failed". It looks like temporary internet problems. The error log on a broken node ends with:

...
Downloading GitHub repo Azure/rAzureBatch@master
from URL https://api.github.com/repos/Azure/rAzureBatch/zipball/master
Error in curl::curl_fetch_memory(url, handle = handle) : 
  Server returned nothing (no headers, no data)
Calls: <Anonymous> ... request_fetch -> request_fetch.write_memory -> <Anonymous> -> .Call
Execution halted

I found a way to get the list of broken nodes:

Filter(function(nd) nd$state == 'starttaskfailed', rAzureBatch::listPoolNodes('grayPool')$value)

How can I reboot broken nodes?
I believe it would be worth adding this capability to doAzureParallel::makeCluster as an optional argument.
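
In the meantime, a workaround sketch using only calls already shown in this document is to detect start-task failures and recreate the cluster (it assumes the cluster object and cluster.json from the README setup):

# No reboot API is exposed by the package, so recreate the pool instead.
broken <- Filter(function(nd) nd$state == 'starttaskfailed',
                 rAzureBatch::listPoolNodes('grayPool')$value)
if (length(broken) > 0) {
  stopCluster(cluster)                     # delete the broken pool
  cluster <- makeCluster("cluster.json")   # provision a fresh one
  registerDoAzureParallel(cluster)
}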

Check for deleting the previous pool in makeCluster

Here is a code chunk from my project that may be useful to you:

are_equal <- function(a, b) !is.null(a) && !is.null(b) && a == b

pool_id <- rjson::fromJSON(file = cluster_file)$pool$name
if (are_equal(rAzureBatch::getPool(pool_id)$state, 'deleting')) {
  cat('Waiting for deleting the previous pool. Should take less than 10 minutes\n')
  while (are_equal(rAzureBatch::getPool(pool_id)$state, 'deleting')) {
    cat('.'); Sys.sleep(10)
  }
  cat('\n')
}

Change MRAN Snapshot Date

The MRAN snapshot date has been interfering in my ability to install the latest packages. I'd like to be able to set the snapshot date in the configuration file. I created a simple fix by changing the snapshot date before each package is installed (see below).

Rscript -e 'args <- commandArgs(TRUE)' -e 'options(repos = c(CRAN = args[1]))' https://mran.revolutionanalytics.com/snapshot/YYYY-MM-DD -e 'install.packages(args[1], dependencies=TRUE)' devtools
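
The same idea from inside an R session looks roughly like this (a sketch; YYYY-MM-DD is a placeholder for the desired snapshot date):

# point CRAN at a specific MRAN snapshot before installing packages
snapshot_url <- "https://mran.revolutionanalytics.com/snapshot/YYYY-MM-DD"
options(repos = c(CRAN = snapshot_url))
install.packages("devtools", dependencies = TRUE)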

There are other ways to go about this, but if you like my solution let me know and I'll send a pull request.

Thanks,
Dustin

Maintaining R objects in workers memory possible? avoid data transfer?

Is it possible to reuse remote R objects between job requests and avoid the data transfer? For example, transfer a data.frame only the first time it is needed, or manually when I need to refresh it for some reason.

For example, the code below always takes more than 100 secs on 3 low-priority Standard_D5_v2 nodes with 16 max tasks per node. I would say it is always sending the data for the job; it would be great to send it once and pin it in the remote workers' memory/session somehow.

my_data_set <- iris[sample(150, 1e6, replace = TRUE), ]
system.time(results <- foreach(i = 1:number_of_iterations) %dopar% {
  c(system('hostname', intern = TRUE), NROW(my_data_set))
})
results

[1] "Job Summary: "
[1] "Id: job20170629182458"
[1] "Waiting for tasks to complete. . ."
|=========================================================| 100%[1] "Number of errors: 0"
user system elapsed
5.52 0.19 115.11
There were 14 warnings (use warnings() to see them)

R packages not installed on pool creation

Hi,

I include the following R packages in the pool configuration file:
"rPackages": {
"cran": ["hts", "lubridate", "tidyr", "dplyr"],
"github": []
}

However, they don't get installed on the VMs upon pool creation.

I confirmed this (on multiple pools) by running the following piece of code on the pool as the compute backend:
result <- foreach(i = 1:20) %dopar% {
  ip <- c("hts", "lubridate", "tidyr", "dplyr") %in% installed.packages()
}
which returns a list of c(FALSE, FALSE, FALSE, TRUE) arrays (dplyr seems to come with the R distribution).

Is there a way to make sure the packages get installed on the VMs only once? I'd rather not pass them to foreach through the .packages argument (to be installed upon each iteration).

Thanks!

Performance issue

Hi there,

Thanks for creating this package. It is a fantastic idea.

I am not too sure if this is the right channel to post this (please do remove it if you find it inappropriate), but I just set up an 8-core machine and tested a simple function that returns (1+1), and it takes about 10 mins.

A Monte Carlo simulation takes almost a day (and is still running) when it could be done in 6 hrs on my machine.
I work for a financial firm in Singapore, and we are willing to test any new grid computing capabilities (and we have an Azure subscription at our firm).

Happy to take this offline; my email is [email protected]

Regards
James

GetJobResults('jobName') returns the raw binary result

When calling getJobResult('foojob') for a long running job, the result is a binary blob like:

[1] 1f 8b 08 00 00 00 00 00 00 03 14 9d 77 38 d5 ef 1f c6 8d 4a 4a 9a 48 12 45 24 92 32 92 8a 5b 4a a4 cc ac
[36] a2 14 0a d9 eb 38 8b b3 cf e1 d8 7b 14 85 10 95 24 4d 2a a3 21 45 52 9a 44 2a eb 8b ca 28 a5 fa 7d 7e 7f
...

The returned result should be the same object type as if running a regular loop

blocking_job <- foreach(...) %dopar% {}
non_blocking_job <- foreach(..., .azure.options.wait = FALSE) %dopar% {}

The results should be such that blocking_job == non_blocking_job

Get Job API

Create an API for getting job information and status

Enable per chunk local reduce

Enable users to specify a local reduce method per group of tasks / chunk. This can significantly reduce the amount of data that needs to be moved around and allow for greater flexibility for local aggregations.
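
A sketch of the idea with plain foreach (the chunking below is illustrative, not the package's actual chunking mechanism): each task reduces its own chunk locally, so only the aggregated value crosses the network.

# split the iteration space into chunks; each task sums its chunk locally
# and only that single number is returned and combined
chunks <- split(1:1000, rep(1:10, each = 100))
total <- foreach(chunk = chunks, .combine = `+`) %dopar% {
  sum(sapply(chunk, function(i) i^2))   # local reduce on the worker
}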

baseline performance, will be improved?

Hi, I was trying this with a cluster of low-priority Standard_D5_v2 VMs, 3 nodes, 16 max tasks per node.

The minimum time I get is around 20 secs for the bare minimum task (just outputting the hostname). Is there something I'm missing, or will it be improved?

code
number_of_iterations <- 10
system.time(results <- foreach(i = 1:number_of_iterations) %dopar% {
  system('hostname', intern = TRUE)
})
results

output
[1] "Job Summary: "
[1] "Id: job20170629182126"
[1] "Waiting for tasks to complete. . ."
|=========================================================| 100%[1] "Number of errors: 0"
user system elapsed
0.86 0.09 22.22
There were 14 warnings (use warnings() to see them)

results
[[1]]
[1] "batchnode-0"

[[2]]
[1] "batchnode-2"

[[3]]
[1] "batchnode-1"

[[4]]
[1] "batchnode-0"

[[5]]
[1] "batchnode-1"

[[6]]
[1] "batchnode-2"

[[7]]
[1] "batchnode-0"

[[8]]
[1] "batchnode-0"

[[9]]
[1] "batchnode-0"

[[10]]
[1] "batchnode-2"

Specify Operating System in cluster.json

Hello, is it possible to specify which OS your pool should start with? By default it appears as though the script spins up the microsoft-ads linux-data-science-vm (linuxdsvm) image.

I have a package that can only run in a Windows environment - I'm hoping that I can specify this in the config script. Thanks in advance for any assistance.

EDIT - Some extra information. Since posting the above I have read that doAzureParallel uses "Data Science Virtual Machine (DSVM)...This package uses the Linux Edition of the DSVM which comes preinstalled with Microsoft R Server Developer edition" [https://github.com/Azure/doAzureParallel/blob/master/docs/00-azure-introduction.md]. What does this mean in practice?

When I attempt to provision a pool with the aforementioned Windows-only package from Github, the nodes never start - they turn orange within the pool information window ("Start task failed"). I have put this down to the fact that the pool is trying to load a Windows-only package in a Linux environment. If I remove the Github string, the pool provisions as expected but tasks requiring the package never run (for obvious reasons).

Jobs with a large number of tasks fail early

Jobs with more than a few hundred tasks can fail before the job is complete. The logic for checking whether all tasks are complete only takes a partial set of the tasks into consideration. We need to loop through all tasks to determine the job's completion time.
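
A sketch of the paging fix (list_tasks_page() is a hypothetical stand-in for the actual Batch list-tasks call, which returns one page of results plus an odata.nextLink continuation):

# count completed tasks across ALL pages rather than only the first page
all_tasks_complete <- function(job_id) {
  completed <- 0
  total <- 0
  page <- list_tasks_page(job_id)                   # hypothetical helper
  repeat {
    states <- sapply(page$value, function(t) t$state)
    total <- total + length(states)
    completed <- completed + sum(states == "completed")
    if (is.null(page$odata.nextLink)) break          # no more pages
    page <- list_tasks_page(job_id, page$odata.nextLink)
  }
  completed == total
}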

Unfriendly error message when there are no jobs

Hi! This is the current behavior when there are no jobs:

getJobList()
[1] "Job List: "
Error in jobs$value[[j]] : subscript out of bounds

It would be much nicer to get something like this back as is returned in the Azure web interface:

No jobs were found for this Batch account

Thanks!
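
Something along these lines would do it (a sketch; jobs stands for the parsed response from the Batch service, and the id field is assumed):

print_job_list <- function(jobs) {
  cat("Job List: \n")
  if (length(jobs$value) == 0) {
    # friendly message instead of "subscript out of bounds"
    message("No jobs were found for this Batch account")
    return(invisible(NULL))
  }
  for (j in seq_along(jobs$value)) {
    cat(jobs$value[[j]]$id, "\n")
  }
}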

Allow for local/external R workers to be part of doazure cluster/pool

I don't know if this is possible, but it would be nice to add our current free/available cores to the worker pool, or to join a local multi-node cluster to work on the same job/pool (a hybrid on-prem/cloud cluster).

A similar scenario is possible with doRedis, for example, since it is elastic and queue based: just by calling startLocalWorkers(3, QUEUE_NAME, REDIS_HOST) I can add my cores to an on-prem swarm R cluster.

RegisterPool gives an error

After filling in the config file with Batch credentials I run:

pool <- registerPool("my_pool_config.json")

and got an error:
No encoding supplied: defaulting to UTF-8.
[1] "Booting compute nodes. . . Please wait. . . There are currently no nodes in the pool. . ."
| | 0%
Error in if (x$state == "idle") { : argument is of length zero

Error in curl::curl_fetch_memory(url, handle = handle) : Bad URL, colon is first character

After setting the credentials running
cluster <- makeCluster("cluster.json")

Results in the following error :

[1] "POST\n\n\n1113\n\napplication/json;odata=minimalmetadata\n\n\n\n\n\n\nocp-date:Wed, 26 Jul 2017 11:07:28 GMT\n/rparallel/pools\napi-version:2017-05-01.5.0"
[1] "xxx.westeurope.batch.azure.com/pools"
[1] "Auth String: SharedKey xxx:6b[...]7Y/yIKw="
<request>
Headers:
* User-Agent: rAzureBatch/0.3.1;doAzureParallel/0.3.1
* Content-Length: 1113
* Content-Type: application/json;odata=minimalmetadata
* ocp-date: Wed, 26 Jul 2017 11:07:28 GMT
* Authorization: SharedKey xxx:6b[...]7Y/yIKw=
now dyn.load("D:/wimj/Documents/R/win-library/3.4/curl/libs/x64/curl.dll") ...
Error in curl::curl_fetch_memory(url, handle = handle) : 
  Bad URL, colon is first character

I'm using R-3.4.1[64bit]. Any idea how to solve this?

Default github auth token is broken

When generating a cluster config with doAzureParallel::generateClusterConfig("test.config") the default value is set to NULL. This causes the package to fail at runtime when trying to create a cluster.

The default value should be set to "".

Error in callStorageSas: Not Found (HTTP 404)

What could be the reason for the following error?

Waiting for tasks to complete. . .
  |================================================================================| 100%
Error in callStorageSas(request, args$accountName, sas_params = sasToken,  : 
  Not Found (HTTP 404).

How can I debug it?

Blob permissions when using resourceFiles

The default blob container permissions are set to "Private (No anonymous access)". With this default setting, makeCluster() will error when using the argument resourceFiles with the message:

Warning message:
In waitForNodesToComplete(pool$id, 60000) :
The following 1 nodes failed while running the start task:

Container permissions must be set to "Blob (Anonymous read access for blobs only)" for the pool to be provisioned correctly. It took me some time this afternoon to troubleshoot this issue. This can be avoided with a line in the documentation to make users aware of the default.

(For reference I am referring to this documentation:
https://github.com/Azure/doAzureParallel/blob/master/docs/21-distributing-data.md
https://github.com/Azure/doAzureParallel/blob/master/samples/resource_files_example.R)

Fail-fast opportunity for broken installations

I am trying to install 'ranger' on the nodes. My cluster.json file looks like:

"rPackages": {
    "cran": ["ranger"],

I get a bunch of C++ errors on the node side:

* installing *source* package 'ranger' ...
** package 'ranger' successfully unpacked and MD5 sums checked
** libs
sh: I/usr/lib64/microsoft-r/3.3/lib64/R/include: No such file or directory
make: [AAA_check_cpp11.o] Error 127 (ignored)
sh: I/usr/lib64/microsoft-r/3.3/lib64/R/include: No such file or directory
make: [Data.o] Error 127 (ignored)
sh: I/usr/lib64/microsoft-r/3.3/lib64/R/include: No such file or directory
...
make: [rangerCpp.o] Error 127 (ignored)
sh: I/usr/lib64/microsoft-r/3.3/lib64/R/include: No such file or directory
make: [utility.o] Error 127 (ignored)
sh: line 2: -shared: command not found
make: *** [ranger.so] Error 127
ERROR: compilation failed for package 'ranger'
* removing '/usr/lib64/microsoft-r/3.3/lib64/R/library/ranger'

The downloaded source packages are in
	'/tmp/Rtmpe0Q9qX/downloaded_packages'
Updating HTML index of packages in '.Library'
Making 'packages.html' ... done
Warning message:
In install.packages(args[1]) :
  installation of package 'ranger' had non-zero exit status
...

These installation problems do not affect the state of the nodes: all my nodes end up 'Idle', and doAzureParallel::makeCluster does not notify me about the package installation problems. I just get a "Cannot find 'ranger'" error after running the %dopar% function, and it is not obvious where the real error is located. I suggest adding a warning or an error when such a node installation error occurs.
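
In the meantime, a fail-fast probe along these lines works as a workaround (a sketch using only calls from the README above; the iterations may not cover every node, so it is best-effort):

# probe the cluster for the packages listed in cluster.json and stop early
# with a clear error instead of a late "Cannot find 'ranger'" inside %dopar%
required <- c("ranger")
ok <- foreach(i = 1:getDoParWorkers(), .combine = c) %dopar% {
  all(required %in% rownames(installed.packages()))
}
if (!all(ok)) {
  stop("Required packages are missing on some nodes; check the start task logs.")
}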

Infinite "Waiting for tasks to complete. . ."

The first line of my %dopar% expression is library(somePackage), and that package has a broken dependency. According to the Azure node log:
ERROR: dependencies 'doAzureParallel', 'rAzureBatch' are not available for package 'stackatto'.
It looks like I should not add both Azure packages to the DESCRIPTION Imports field, because installation relies on CRAN availability.

Nevertheless, all nodes are in the state "Start task failed", and %dopar% blocks the R session without returning any result.

Problem with Authentication

I have a problem with authenticating the REST requests.
When calling callBatchService from addPool I am getting the following error:

-> POST /pools?api-version=2016-07-01.3.1 HTTP/1.1
-> Host: <name>.westeurope.batch.azure.com
-> Accept-Encoding: gzip, deflate
-> Accept: application/json, text/xml, application/xml, */*
-> User-Agent: doAzureBatchR/0.0.1
-> Content-Length: 422
-> Content-Type: application/json;odata=minimalmetadata
-> ocp-date: Do., 02 Mär 2017 12:53:51 GMT
-> Authorization: SharedKey inselspital:<key that is different from the key from the Batch service>
>> {"vmSize":"Standard_A1_v2","id":"pool1","startTask":{"commandLine":" R -e 'install.packages(\"devtools\", dependencies=TRUE)'","runElevated":true,"waitForSuccess":true},"virtualMachineConfiguration":{"imageReference":{"publisher":"microsoft-ads","offer":"linux-data-science-vm","sku":"linuxdsvm","version":"latest"},"nodeAgentSKUId":"batch.node.centos 7"},"enableAutoScale":true,"autoScaleFormula":"$TargetDedicated = 10"}

<- HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
<- Content-Length: 567
<- Content-Type: application/json;odata=minimalmetadata
<- Server: Microsoft-HTTPAPI/2.0
<- request-id: a20d61d2-6cc2-435b-b9fc-d4c3a468c273
<- Strict-Transport-Security: max-age=31536000; includeSubDomains
<- X-Content-Type-Options: nosniff
<- DataServiceVersion: 3.0
<- Date: Thu, 02 Mar 2017 12:54:23 GMT
<- 

Is the header formatted correctly?

Allow merge task to run on task failures

Currently, if any tasks fail with a non-zero exit code, the job will get blocked by waiting for the merge task indefinitely. The for-loop should not get blocked but return an error on the job instead.
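
For reference, R-level errors inside the loop body can already be absorbed with standard foreach error handling (a minimal illustration); it does not cover tasks that exit non-zero at the Batch level, which is what this issue is about.

# .errorhandling = "pass" returns the error object for failed iterations
# instead of aborting the whole loop
results <- foreach(i = 1:10, .errorhandling = "pass") %dopar% {
  if (i == 3) stop("simulated failure") else i^2
}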

Custom install scripts

Allow users to define a script that will run at node setup time to modify nodes with any custom actions.

[ERROR] 'start task failed' on pool VMs

Hi,

I have a problem with doAzureParallel (0.2.0).

When executing
pool <- makeCluster("pool_config.json")
the pool is created, but the DSVMs produce this error: start task failed

stderr.txt shows: sed: can't read /etc/sudoers: Permission denied

My pool_config.json is attached. (Had to rename it for upload)
pool_config_issue.txt

I configured the batch and storage account like described here:
https://github.com/Azure/doAzureParallel
Except that I created the storage separately and attached it to the batch account later.

Here's additional information:

Region: North Europe
Current cores: 6
Operating System: microsoft-ads linux-data-science-vm linuxdsvm (latest)
Current nodes: 3
VM size: standard_d2_v2
Target nodes: 3
Allocation state: Steady

User Identity:
Task autouser; Admin

start task:
/bin/bash -c "set -e; set -o pipefail; sed -i -e 's/Defaults requiretty.*/ #Defaults requiretty/g' /etc/sudoers; export PATH=/anaconda/envs/py35/bin:$PATH; sudo env PATH=$PATH pip install --no-dependencies blobxfer; wait";

start task failed
stderr.txt: sed: can't read /etc/sudoers: Permission denied

Regards
Hans

Enable/Disable Cloud Combine (merge task)

Allow users more control over whether or not they want to use a merge task to merge all of the individual results into a single result that will return as the result for the foreach loop.
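
A purely hypothetical sketch of what the knob could look like from the user's side (the option name is illustrative only, not the package's actual API; the .azure.options prefix follows the style used elsewhere in this document):

# hypothetical per-loop toggle: skip the cloud-side merge task and return
# the individual task results instead
results <- foreach(i = 1:10, .azure.options.enableCloudCombine = FALSE) %dopar% {
  i^2
}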

NAMESPACE problems

I ran into lots of problems using your package inside mine.
Please read http://r-pkgs.had.co.nz/namespace.html carefully and use @import, @importFrom and :: for your functions.
The first problems are related to the rAzureBatch::addPool function (it is not imported into the namespace) and the httr::connect function in rAzureBatch (the same problem).
At the moment, I have to clutter up the user search path with workarounds like attachNamespace('rAzureBatch') to help doAzureParallel see the rAzureBatch functions.
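
A minimal sketch of the suggestion, assuming roxygen2-style namespace management (the wrapper function name is illustrative):

# declare the cross-package dependency explicitly instead of relying on the
# search path; the roxygen tag generates an importFrom() NAMESPACE directive
#' @importFrom rAzureBatch addPool
my_make_pool <- function(...) {
  rAzureBatch::addPool(...)   # a fully qualified call also works without attach
}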

CRAN

This package should be on CRAN.
