
Deprecated Repository

This repository is no longer maintained and its dependencies may contain security bugs - use at your own risk.

Data Accelerator


The Data Accelerator (DAC) orchestrates the creation of burst buffers using commodity hardware and existing parallel file systems. The current focus is on creating NVMe-backed Lustre file systems via Slurm's Burst Buffer support.

https://rse-cambridge.github.io/data-acc/

The initial development focus has been on Cambridge University's Data Accelerator, which currently makes use of 24 Dell EMC R740xd nodes, each containing two Intel OmniPath network adapters and 12 Intel P4600 SSDs. Plans are in progress to support Mellanox HDR as well as other parallel file systems.

A whitepaper discussing the co-design between Cambridge, Intel, DellEMC and StackHPC, including results and configuration information, is available here: https://www.dellemc.com/resources/en-us/asset/white-papers/products/ready-solutions/dell-data-accelerator-cambridge.pdf

For more information, please contact: [email protected]

Try me in docker-compose

We have a Docker Compose based integration test so you can try out how we integrate with Slurm. To see an end-to-end demo with Slurm 19.04 (but without running fs-ansible or ssh-ing to compute nodes to mount), please try:

cd docker-slurm
./demo.sh

To clean up after the demo, including removing all docker volumes:

docker-compose down --volumes --rmi all

For more details please see the docker compose README.

Other Installation Guides

For an Ansible driven deployment into OpenStack VMs (useful for testing out the ansible that creates Lustre filesystems on demand), please take a look at: Development Environment Install Guide

For a manual install there are some pointers in: Manual Install Guide

How it works

There are two key binaries produced by the golang based code:

  • dacd: the service that runs on the storage nodes to orchestrate filesystem creation
  • dacctl: the CLI tool used by Slurm's Cray DataWarp burst buffer plugin to orchestrate burst buffer creation

All the dacd workers and dacctl communicate using etcd: http://etcd.io

The dacd service makes use of Ansible roles (./fs-ansible) to create the Lustre or BeeGFS filesystems on demand, using the NVMe drives that have been assigned by the data accelerator. Mounting on the compute nodes is currently done via ssh (as the user running dacd), rather than using Ansible.
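The ssh-driven mount step can be illustrated with a short sketch. buildMountCommand is a hypothetical helper, not the actual dacd code, but it mirrors the mount commands visible in the dacd journal later in this page:

```go
package main

import "fmt"

// buildMountCommand assembles the Lustre mount command that dacd runs
// over ssh on each node (hypothetical helper for illustration; the
// option set matches what appears in the dacd logs).
func buildMountCommand(mgsNode, fsName, mountPoint string) string {
	return fmt.Sprintf("mount -t lustre -o flock,nodev,nosuid %s:/%s %s",
		mgsNode, fsName, mountPoint)
}

func main() {
	fmt.Println(buildMountCommand("dac-e-16-opa@o2ib1", "xpgSIqqB", "/mnt/dac/6153_job"))
}
```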

Slurm Integration

When you request a burst buffer via Slurm, the Cray DataWarp burst buffer plugin is used to communicate with dacctl. We support both per-job and persistent burst buffers.

You can create a persistent burst buffer by submitting a job like this:

#BB create_persistent name=mytestbuffer capacity=4000GB access=striped type=scratch

To use the above buffer in a job, add directives like the following to a job submission script:

#DW persistentdw name=mytestbuffer
#DW jobdw capacity=2TB access_mode=striped,private type=scratch
#DW swap 1GB
#DW stage_in source=~/mytestinputfile destination=$DW_JOB_STRIPED/filename1 type=file
#DW stage_out source=$DW_JOB_STRIPED/outdir destination=~/test7outputdir type=directory

Please note the above job does the following:

  • mounts the persistent buffer called mytestbuffer
  • creates a per-job buffer of 2TB in size, with extra space requested for the swap
  • mounts a shared directory on every compute node
  • also mounts a private directory that is specific to each compute node
  • adds a 1GB swap file on each compute node
  • copies in the specified file before the job starts
  • copies out the specified output directory after the job completes

To delete a persistent buffer you submit the following:

#BB destroy_persistent name=mytestbuffer

Further details on the Slurm integration can be found here: https://slurm.schedmd.com/burst_buffer.html

Orchestrator golang Code Tour

The golang code is built using make, including creating a tarball that contains all the Ansible that needs to be installed on the dacd nodes. Currently we use CircleCI to run the unit tests on every pull request before it is merged into master; this includes generating tarballs for all commits.

The following tests are currently expected to work:

  • unit tests (make tests)
  • Slurm integration tests using Docker compose (see below on how to run ./docker-slurm)
  • Full end to end test deployment using ansible to install systemd unit files, with SSL certs for etcd, aimed at testing the Ansible inside virtual machines (./dac-ansible)

The following tests are currently a work in progress:

  • functional tests for etcd (make test-func runs dac-func-test golang binary)

Packages

  • "github.com/RSE-Cambridge/data-acc/internal/pkg/registry" is the core data model of the PoolRegistry and VolumeRegistry

  • "github.com/RSE-Cambridge/data-acc/internal/pkg/keystoreregistry" depends on a keystore interface, and implements the PoolRegistry and VolumeRegistry

  • "github.com/RSE-Cambridge/data-acc/internal/pkg/etcdregistry" implements the keystore interface using etcd

  • "github.com/RSE-Cambridge/data-acc/internal/pkg/lifecycle" provides business logic on top of registry interface

  • "github.com/RSE-Cambridge/data-acc/internal/pkg/pfsprovider" provides a plugin interface, and various implementations that implement needed configuration and setup of the data accelerator node

  • "github.com/RSE-Cambridge/data-acc/internal/pkg/dacctl" does the main work of implementing the CLI tool. While "github.com/urfave/cli" is used to build the CLI, we keep it at arm's length via a CliContext interface.

  • "github.com/RSE-Cambridge/data-acc/internal/pkg/fileio" interfaces to help with unit testing file reading and writing

  • "github.com/RSE-Cambridge/data-acc/internal/pkg/mocks" contains the mock implementations of interfaces needed for unit testing, created using "github.com/golang/mock/gomock"; they can be refreshed by running a build script.
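The layering above can be sketched with a minimal keystore interface and an in-memory implementation. All names here (Keystore, memKeystore) are hypothetical; the real interfaces live in the keystoreregistry and etcdregistry packages:

```go
package main

import (
	"errors"
	"fmt"
)

// Keystore is a minimal sketch of the key/value abstraction that the
// registries depend on; in the real project an etcd-backed version
// implements it.
type Keystore interface {
	Put(key, value string) error
	Get(key string) (string, error)
}

// memKeystore is an in-memory stand-in, the kind of thing that is
// useful for unit tests instead of a live etcd cluster.
type memKeystore struct {
	data map[string]string
}

func newMemKeystore() *memKeystore {
	return &memKeystore{data: map[string]string{}}
}

func (m *memKeystore) Put(key, value string) error {
	m.data[key] = value
	return nil
}

func (m *memKeystore) Get(key string) (string, error) {
	v, ok := m.data[key]
	if !ok {
		return "", errors.New("key not found: " + key)
	}
	return v, nil
}

func main() {
	var ks Keystore = newMemKeystore()
	ks.Put("/volumes/mytestbuffer", "capacity=4000GB")
	v, _ := ks.Get("/volumes/mytestbuffer")
	fmt.Println(v)
}
```

Keeping the registries coded against the interface, rather than etcd directly, is what allows the mocks package to swap in test doubles.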

Golang Build and Test (using make)

Built with golang 1.14.x, using go modules for dependency management.

gomock v1.1.1 is used to generate mocks. The version is fixed to stop conflicts with etcd 3.3.x.

To build all the golang code and run unit tests locally:

cd ~/go/src/github.com/RSE-Cambridge/data-acc
make
make test

To build the tarball:

make tar

There is an experimental effort to build things inside a docker container here:

make docker

To mimic locally what happens in CircleCI, please see: https://circleci.com/docs/2.0/local-cli/

License

This work is licensed under the Apache License 2.0. Please see the LICENSE file for more information.

Copyright © 2018-2020 University of Cambridge, StackHPC Ltd


Contributors

ajmking, christopheredsall, johngarbutt, jsteel44, mjrasobarnett


data-acc's Issues

blob data in logs

It seems that when commands such as rsync error, the output is captured in the logs as a blob of data. It would be useful to be able to see this output in the logs.

dacd: [754B blob data]

Cannot cancel failed jobs due to stage in/out failures

Job 556 references a file that does not exist for stage in, and 557 references a file that does not exist for stage out.

             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
               556     debug use-mult    test1 SO       0:10      1 (burst_buffer/datawarp: dws_data_out: exit status 23
)
               557     debug use-mult    test1 SO       0:09      1 (burst_buffer/datawarp: dws_data_out: exit status 23
)

scancel 556 and 557 does nothing. The buffers also hang around but that is probably expected:

JobID=557 CreateTime=2019-10-02T10:10:38 Pool=default Size=3200GiB State=staged-in UserID=test1(1001)
JobID=556 CreateTime=2019-10-02T10:07:44 Pool=default Size=3200GiB State=staged-in UserID=test1(1001)

setup fails due to a brick race, then can't ever get deleted

It's possible for two buffers to try to claim the same bricks; the loser correctly goes into the error state when the transaction to claim the bricks fails.

However, when we try to clean up this buffer, we wait forever, stuck in the DeleteRequested state, because we never reached the bricks-assigned state.

Possible fix idea: track whether bricks were ever assigned, and don't wait for dacd when deleting a volume that never had bricks assigned.

Stage in fails if a burst buffer has a hyphen in its name

[dacd]$ bash -c "export DW_PERSISTENT_STRIPED_test-buffer='/mnt/dac/2476_persistent_test-buffer/global' && sudo -g '#1000' -u '#1000' rsync -ospgu --stats /mnt/cluster/home/test/file.txt \$DW_PERSISTENT_STRIPED_test-buffer/file.txt"
bash: line 0: export: `DW_PERSISTENT_STRIPED_test-buffer=/mnt/dac/2476_persistent_test-buffer/global': not a valid identifier
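The failure happens because bash identifiers may only contain [A-Za-z0-9_], so the hyphen makes the export invalid. One possible mitigation (a sketch, not the project's actual behaviour; envSafe is a hypothetical helper) is to sanitise buffer names when constructing the environment variable name:

```go
package main

import (
	"fmt"
	"regexp"
)

// invalidEnvChars matches anything that is not valid in a shell
// identifier.
var invalidEnvChars = regexp.MustCompile(`[^A-Za-z0-9_]`)

// envSafe replaces invalid identifier characters with underscores
// (illustrative helper only).
func envSafe(name string) string {
	return invalidEnvChars.ReplaceAllString(name, "_")
}

func main() {
	fmt.Println("DW_PERSISTENT_STRIPED_" + envSafe("test-buffer"))
}
```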

on restart, clean attempt not cleaning persistent volume

bricks get deleted, but volumes are still present

repeating teardown fails, because dacd is no longer watching that volume any more

Idea: now that we stop watching things when we hit the Error state, does this break dacd event watching when we hit errors?

Hitting a 5 minute timeout error after 1 minute

It looks like I can hit a 5 minute timeout after about 1 minute (assuming it is waiting for the mount to complete and not from the start of the session; in that case about 2 minutes). Here is some relevant information from the logs:

Jan 10 09:50:31 dac-e-16 dacd[8349]: starting action {Uuid:2fd7d0cb-94ff-4794-b8ce-dc1d3aa72365 Session:{Name:6153 [snip]
Jan 10 09:51:25 dac-e-16 dacd[8349]: Mount for: 6153
Jan 10 09:51:25 dac-e-16 dacd[8349]: Mounting 6153 on host: dac-e-16 for session: 6153
Jan 10 09:51:25 dac-e-16 dacd[8349]: SSH to: dac-e-16 with command: mkdir -p /mnt/dac/6153_job
Jan 10 09:51:25 dac-e-16 dacd[8349]: Completed remote ssh run: mkdir -p /mnt/dac/6153_job
Jan 10 09:51:25 dac-e-16 dacd[8349]: SSH to: dac-e-16 with command: mount -t lustre -o flock,nodev,nosuid dac-e-16-opa@o2ib1:/xpgSIqqB /mnt/dac/6153_job
Jan 10 09:52:25 dac-e-16 dacd[8349]: Time up, waited more than 5 mins to complete.
Jan 10 09:52:25 dac-e-16 dacd[8349]: Error in remote ssh run: 'mount -t lustre -o flock,nodev,nosuid dac-e-16-opa@o2ib1:/xpgSIqqB /mnt/dac/6153_job' error: signal: killed

Consider picking devices based on expected lifespan

Balance load between the NVMe drives, rather than just picking at random.

Likely should update drive health on restart of dacd and after delete of buffers. Need to look into secure erase / reset / discard at delete of buffer, to make sure Lustre does the correct thing.

Old device keys not removed in etcd

When we recompiled dacd to test all names on the DAC, the previous nvme0n1 entry was not updated or removed. It should also be noted that dacd currently treats nvme0n1 as both the MGT and the MDT in Lustre.

Specifying just a private buffer does not work

Using access_mode=private results in no buffer being created.

The $DW_JOB_PRIVATE environment variable is not set at least.

Is this expected, or should one be able to request a private buffer without striped?

"Reformat MDTs" Ansible task can fail

We occasionally hit an error "burst_buffer/datawarp: setup: error during ansible create: exit status 2".

Looking at the dacd journal, I see: mkfs.lustre FATAL: loop device requires a --device-size= param

TASK [lustre : Reformat MDTs] **************************************************
failed: [dac-e-12] (item={'key': u'nvme7n1', 'value': 3}) => 
{
  "ansible_loop_var": "item",
  "changed": true,
  "cmd": [
    "/usr/sbin/mkfs.lustre",
    "--mdt",
    "--reformat",
    "--fsname=hOvicamu",
    "--index=3",
    "--mgsnode=dac-e-20-opa@o2ib1",
    "/dev/nvme7n1p1"
  ],
  "delta": "0:00:00.004677",
  "end": "2020-01-09 13:37:15.215476",
  "item": {
    "key": "nvme7n1",
    "value": 3
  },
  "msg": "non-zero return code",
  "rc": 22,
  "start": "2020-01-09 13:37:15.210799",
  "stderr": "\nmkfs.lustre FATAL: loop device requires a --device-size= param\nmkfs.lustre: exiting with 22 (Invalid argument)",
  "stderr_lines": [
    "",
    "mkfs.lustre FATAL: loop device requires a --device-size= param",
    "mkfs.lustre: exiting with 22 (Invalid argument)"
  ],
  "stdout": "\n   Permanent disk data:\nTarget:     hOvicamu:MDT0003\nIndex:      3\nLustre FS:  hOvicamu\nMount type: ldiskfs\nFlags:      0x61\n              (MDT first_time update )\nPersistent mount opts: user_xattr,errors=remount-ro\nParameters: mgsnode=10.47.18.20@o2ib1",
  "stdout_lines": [
    "",
    "   Permanent disk data:",
    "Target:     hOvicamu:MDT0003",
    "Index:      3",
    "Lustre FS:  hOvicamu",
    "Mount type: ldiskfs",
    "Flags:      0x61",
    "              (MDT first_time update )",
    "Persistent mount opts: user_xattr,errors=remount-ro",
    "Parameters: mgsnode=10.47.18.20@o2ib1"
  ]
}

Swap accounting in Slurm doesn't match our implementation

Currently Slurm picks up the #DW swap directive and adds that as an additional storage request; it also rounds it up to the nearest granularity and multiplies by the maximum number of nodes.

Not sure we actually want it to work like how Slurm has it right now.
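The accounting described above can be sketched as arithmetic; swapAllocation is a hypothetical helper mirroring the described Slurm behaviour (round the per-node swap up to the pool granularity, then multiply by the maximum node count), not the DAC's own implementation:

```go
package main

import "fmt"

// swapAllocation mirrors the behaviour described above: the per-node
// swap request is rounded up to the pool granularity and multiplied
// by the maximum node count (illustrative arithmetic only).
func swapAllocation(swapPerNodeGiB, granularityGiB, maxNodes int) int {
	rounded := ((swapPerNodeGiB + granularityGiB - 1) / granularityGiB) * granularityGiB
	return rounded * maxNodes
}

func main() {
	// 1GiB of swap on a pool with 1600GiB granularity, across 4 nodes,
	// ends up accounted as 4 x 1600GiB.
	fmt.Println(swapAllocation(1, 1600, 4))
}
```

With a 1600GiB granularity even a tiny swap request costs a whole brick per node, which is why this accounting is questioned above.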

feature: python 3.X support

Ansible has a problem with loop: {{ foo.keys() }}; this can be changed to with_dict: {{ foo }} together with {{ item.key }}.

Further investigation is required.

Persistent burst buffers cannot use underscores in their names

If you name a persistent burst buffer with an underscore, creation fails. Not only that: running squeue, for example, prints out a huge error, the top of which I've pasted here, but this is only a snippet:

(burst_buffer/datawarp: _create_persistent: panic: invalid session name: 'test_1' [recovered]
        panic: invalid session name: 'test_1'

goroutine 1 [running]:
main.main.func1()
        /home/circleci/data-acc/cmd/dacctl/main.go:187 +0xb9
panic(0xaccca0, 0xc0002c4060)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
log.Panicf(0xbf1b17, 0x1a, 0xc0001546c0, 0x1, 0x1)
        /usr/local/go/src/log/log.go:340 +0xc0
github.com/RSE-Cambridge/data-acc/internal/pkg/registry_impl.getSessionKey(0x7ffccbdd0e74, 0x7, 0x42c64f, 0x8)
[snip]

Not allowing underscores is fine (unless this is an easy fix), but maybe the DAC should handle this issue a little better?

mounting scales linearly with the number of compute nodes

Currently we do the mount in a serial loop of ssh requests. This gets very slow with large numbers of compute nodes.

We should move to ansible to do the mount on both clients and servers. That way we have a single way to manage concurrent ssh connections.

Per-job buffers are not unmounted when the job ends

I am seeing this issue on the compute nodes.

Also after unmounting we should issue an "rm -df" on the mountpoint to tidy up that directory without recursing into it (which should not be necessary). This also goes for persistent buffers; they are currently unmounted but not removed.

Consolidate where the buffers are mounted

I know you are working on this but here's a placeholder to track the issue.

Buffers are currently being mounted under /mnt/lustre/ on the DAC nodes but /dac/ on the compute nodes. Let's consolidate them to be /mnt/dac/. This will help with staging data in and out.

Multiple pools don't work

When adding hosts to new pools, the pool resources are not separated. For example the TotalSpace for the pools are the same, and requesting one pool spills over and uses hosts from another pool.

Persistent buffers using 100% of available space causes the job to fail

We can use 99% (technically 100% minus one unit of granularity) of available space however when we create a buffer to consume the remaining space, it looks to be created successfully but then squeue reports:

             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
                20     debug create-p    test2 PD       0:00      1 (burst_buffer/datawarp: setup: panic: runtime error: index out of range [recovered]
        panic: runtime error: index out of range

goroutine 1 [running]:
main.main.func1()
        /home/circleci/data-acc/cmd/dacctl/main.go:187 +0xb9
panic(0xb0d7c0, 0x12394c0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/RSE-Cambridge/data-acc/internal/pkg/dacctl/workflow_impl.sessionFacade.doAllocationAndWriteSession(0xce3440, 0xc000176d00, 0xcdf780, 0xc000179560, 0xce0e80, 0xc00017c750, 0xcccf
60, 0x126c0c8, 0x7ffff8268d30, 0x2, ...)
        /home/circleci/data-acc/internal/pkg/dacctl/workflow_impl/session.go:166 +0x6ee
github.com/RSE-Cambridge/data-acc/internal/pkg/dacctl/workflow_impl.sessionFacade.CreateSession.func1(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /home/circleci/data-acc/internal/pkg/dacctl/workflow_impl/session.go:110 +0xfb
github.com/RSE-Cambridge/data-acc/internal/pkg/dacctl/workflow_impl.sessionFacade.submitJob(0xce3440, 0xc000176d00, 0xcdf780, 0xc000179560, 0xce0e80, 0xc00017c750, 0xcccf60, 0x126c0c8, 0x7
ffff8268d30, 0x2, ...)
        /home/circleci/data-acc/internal/pkg/dacctl/workflow_impl/session.go:53 +0x3e8
github.com/RSE-Cambridge/data-acc/internal/pkg/dacctl/workflow_impl.sessionFacade.CreateSession(0xce3440, 0xc000176d00, 0xcdf780, 0xc000179560, 0xce0e80, 0xc00017c750, 0xcccf60, 0x126c0c8,
 0x7ffff8268d30, 0x2, ...)
        /home/circleci/data-acc/internal/pkg/dacctl/workflow_impl/session.go:107 +0x1b4
github.com/RSE-Cambridge/data-acc/internal/pkg/dacctl/actions_impl.(*dacctlActions).CreatePerJobBuffer(0xc000179580, 0xcdcac0, 0xc0000e71e0, 0xc000179580, 0x0)
        /home/circleci/data-acc/internal/pkg/dacctl/actions_impl/job.go:93 +0x6f5
main.setup(0xc0000e71e0, 0x0, 0x0)
        /home/circleci/data-acc/cmd/dacctl/actions.go:92 +0xa3
github.com/urfave/cli.HandleAction(0xade2a0, 0xc15140, 0xc0000e71e0, 0xc0000e71e0, 0x0)
        /go/pkg/mod/github.com/urfave/[email protected]/app.go:514 +0xbe
github.com/urfave/cli.Command.Run(0xbe38a2, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc0a8db, 0x47, 0x0, ...)
        /go/pkg/mod/github.com/urfave/[email protected]/command.go:171 +0x4d2
github.com/urfave/cli.(*App).Run(0xc0000dc540, 0xc0000b4000, 0xe, 0xf, 0x0, 0x0)
        /go/pkg/mod/github.com/urfave/[email protected]/app.go:265 +0x733
main.runCli(0xc0000b4000, 0xf, 0xf, 0xbe21f8, 0x1)
        /home/circleci/data-acc/cmd/dacctl/main.go:172 +0x1255
main.main()
        /home/circleci/data-acc/cmd/dacctl/main.go:194 +0x1f1
)

At this point everything looks normal, with the buffers now showing 0 FreeSpace as expected:

Name=datawarp DefaultPool=default Granularity=1600GiB TotalSpace=24000GiB FreeSpace=0 UsedSpace=24000GiB
  Flags=EnablePersistent,PrivateData
  StageInTimeout=3600 StageOutTimeout=3600 ValidateTimeout=5 OtherTimeout=3600
  GetSysState=/usr/local/bin/dacctl
  GetSysStatus=/usr/local/bin/dacctl
  Allocated Buffers:
    Name=small CreateTime=2019-10-02T14:24:55 Pool=default Size=3200GiB State=allocated UserID=test2(1002)
    Name=full CreateTime=2019-10-02T14:23:14 Pool=default Size=20800GiB State=allocated UserID=test2(1002)
  Per User Buffer Use:
    UserID=test2(1002) Used=24000GiB

Burst buffer path environment variables are inconsistent

$DW_JOB_STRIPED includes the "global" directory in the path.
$DW_PERSISTENT_jobname does not, so users must specify $DW_PERSISTENT_jobname/global when using persistent buffers.

Can we make the behaviour of these consistent?

Have a configurable stage-in/out prefix

It would be useful to provide a prefix to stage in and out requests so that we can have storage mounted in a configurable location.

Paths that the user requests would be resolved first (with realpath or similar) to strip ../ and expand ~ and $HOME, and then prefixed with the configurable path.
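A minimal sketch of the proposed resolution, assuming a hypothetical resolveStagePath helper (a real implementation would also expand ~ and $HOME before joining, as the issue suggests):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveStagePath joins a user-supplied path onto a configured prefix.
// Cleaning the user path against "/" first neutralises any leading
// "../" components, and the final check rejects anything that still
// escapes the prefix (illustrative helper only).
func resolveStagePath(prefix, userPath string) (string, error) {
	p := filepath.Join(prefix, filepath.Clean("/"+userPath))
	if !strings.HasPrefix(p, filepath.Clean(prefix)+string(filepath.Separator)) {
		return "", fmt.Errorf("path escapes prefix: %s", userPath)
	}
	return p, nil
}

func main() {
	// "../etc/passwd" cannot climb out of the configured prefix.
	p, _ := resolveStagePath("/stage", "../etc/passwd")
	fmt.Println(p)
}
```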

dacd misses a new volume

Sometimes dacd never transitions to the BricksAllocated state, i.e. it doesn't notice its new volume appear.

This has happened twice. We do at least now delete things correctly if you do a teardown via scancel.

Setting to retain ansible inventory for all buffers created

It's useful to have the option to retain all Ansible playbooks created, for later debugging/review. Currently the method captures them along with the .venv and binaries, which can fill the filesystem:

Using ansible tempdir: /tmp/fsPlQCMRnH_830051621

Perhaps call it DAC_RETAIN_INVENTORY

Persistent buffers can be given the wrong group permissions

It looks like a gid is assumed to be the same as a uid and so my persistent buffers have group rwx given to the wrong group. This could also lead to the chown failing if the gid doesn't exist.

Per-job buffers are unaffected. You can see below that the gid is, and should be, different from the uid, but on the persistent buffer it incorrectly matches the uid.

Example user:

$ id
uid=17121 gid=17124

Buffers:

/dac/2307_job:
total 4
drwxrwx--- 2 17121 17124 4096 Aug  7 12:19 global

/dac/2307_persistent_jsteel:
total 4
drwxrwx--- 2 17121 17121 4096 Aug  7 10:17 global

Private buffers get created with 777 permissions

Running stat $DW_JOB_PRIVATE inside a job that requests a private buffer shows that the permissions are set to 777. It seems that shortly afterwards these are changed to 700, but I think we should use "install" to create the directory with 700 permissions from the start, as they are not fixed before the user could start writing data there (e.g. my stat command runs and notices the issue).

Requesting fractional amounts of data gives unexpected results

On a per-job burst buffer, requesting for example 2.8TB (or any fractional value) causes the per-job buffer not to be created and therefore not mounted for the running job.

On a persistent burst buffer, requesting for example 2.8TB (or any fractional value) always gives 1.4TB.

Persistent burst buffer environment variable cannot be used for stage in/out

Specifying something like:

#DW stage_out source=$DW_PERSISTENT_STRIPED_mybuffer/myfile destination=[...snip...]

attempts to copy /myfile, so it looks like that environment variable is not set. The variable can be used within the job successfully, just not in stage in or out definitions.

Note that I can successfully use the $DW_JOB_STRIPED environment variable for per-job buffers in the stage in and out definitions so I was expecting the persistent one to also work in the same way.

Cannot stage data in to/out of private buffers

On the DAC nodes, the /mnt/dac/$JOBID_job_private symlink does not get created; I guess this is because it links to a compute hostname.

I guess one would want to copy to private buffers to give every compute node the same data but its own copy. I would expect copy-in to go to /mnt/dac/$JOBID_job/private/*/ and copy-out to create a directory tree, e.g. copying out $DW_JOB_PRIVATE/file would produce:

private/node1/file
private/node2/file

But how feasible is this or do we not support copying into and out of private buffers?

Feature: create ansible working dir for buffer

We should be able to create a working ansible directory to do ad hoc maintenance / debugging on any buffer.

Once added, we can remove the /tmp/fs_* directories on failures. This stops /tmp filling up the local disk.

Lustre can fail to mount the buffer on a compute node

We are seeing an issue occasionally where a job will hit an error "Burst buffer pre_run error". I see this in the dacd journal:

Jan 09 13:38:07 dac-e-19 dacd[57485]: SSH to: cpu-e-335 with command: mount -t lustre -o flock,nodev,nosuid dac-e-19-opa@o2ib1:/RTsjlttY /mnt/dac/6038_job
Jan 09 13:38:08 dac-e-19 dacd[57485]: Error in remote ssh run: 'mount -t lustre -o flock,nodev,nosuid dac-e-19-opa@o2ib1:/RTsjlttY /mnt/dac/6038_job' error: exit status 16
Jan 09 13:38:08 dac-e-19 dacd[57485]: Warning: Permanently added 'cpu-e-335,10.43.1.79' (ECDSA) to the list of known hosts.
                                      mount.lustre: mount dac-e-19-opa@o2ib1:/RTsjlttY at /mnt/dac/6038_job failed: Device or resource busy
                                      Is the backend filesystem mounted?
                                      Check /etc/mtab and /proc/mounts

This might be due to the compute node mounting other file systems at the same time (at least, that is how I can replicate the issue). In any case, should we add a retry, or put in a locking mechanism here, to avoid this bombing out?
