aws-s3-zipper's Introduction

Amazon S3 Zipping tool (aws-s3-zipper)

What does it do?

1. Zips S3 files

Takes an Amazon S3 bucket folder and zips it to a:

  • Stream
  • Local File
  • Local File Fragments (zip multiple files broken up by max number of files or size)
  • S3 File (i.e. uploads the zip back to S3)
  • S3 File Fragments (upload multiple zip files broken up by max number of files or size)

2. Differential zipping

It also allows you to do differential zips. You can save the key of the last file you zipped and then zip files that have been uploaded after the last zip.
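
For example, you can persist the key of the last zipped file between runs (a minimal sketch using the zipper from the Setup section below; the state file is illustrative, any persistence mechanism works):

var fs = require('fs');

var STATE_FILE = './last-zipped-key.json'; // illustrative file name
var lastKey = fs.existsSync(STATE_FILE)
    ? JSON.parse(fs.readFileSync(STATE_FILE, 'utf8')).lastKey
    : null; // first run: zip everything

zipper.zipToFile({
    s3FolderName: 'myBucketFolderName'
    , startKey: lastKey // only files listed after this key are zipped
    , zipFileName: './myLocalFile.zip'
}, function (err, result) {
    if (err) return console.error(err);
    var lastFile = result.zippedFiles[result.zippedFiles.length - 1];
    if (lastFile) // remember where this run stopped for the next run
        fs.writeFileSync(STATE_FILE, JSON.stringify({ lastKey: lastFile.Key }));
});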

3. Fragmented Zips

If a zip file has the potential of getting too big, you can provide limits to break up the compression into multiple zip files. You can limit based on file count or total size (pre-zip).

4. Filter Files to zip

You can filter out files you don't want zipped based on any criteria you need.

How do I use it?

Setup

var S3Zipper = require('aws-s3-zipper');

var config = {
    accessKeyId: "XXXX",
    secretAccessKey: "XXXX",
    region: "us-west-2",
    bucket: 'XXX'
};
var zipper = new S3Zipper(config);

Filter out Files

zipper.filterOutFiles = function(file){
    if(file.Key.indexOf('.tmp') >= 0) // filter out temp files
        return null;
    else
        return file;
};

Zip to local file

zipper.zipToFile ({
        s3FolderName: 'myBucketFolderName'
        , startKey: 'keyOfLastFileIZipped' // optional; may be null
        , zipFileName: './myLocalFile.zip'
        , recursive: true
    }
    ,function(err,result){
        if(err)
            console.error(err);
        else{
            var lastFile = result.zippedFiles[result.zippedFiles.length-1];
            if(lastFile)
                console.log('last key ', lastFile.Key); // next time start from here
        }
});

Pipe zip data to stream (using Express.js)

app.all('/', function (request, response) {
    response.set('content-type', 'application/zip') // optional
    zipper.streamZipDataTo({
        pipe: response
        , folderName: 'myBucketFolderName'
        , startKey: 'keyOfLastFileIZipped' // optional; may be null
        , recursive: true
        }
        ,function (err, result) {
            if(err)
                console.error(err);
            else{
                console.log(result)
            }
        })
})

Zip fragments to the local file system with a filename pattern and a maximum file count

zipper.zipToFileFragments ({
        s3FolderName: 'myBucketFolderName'
        , startKey: null
        , zipFileName: './myLocalFile.zip'
        , maxFileCount: 5
        , maxFileSize: 1024*1024
    }, function(err, results){
        if(err)
            console.error(err);
        else{
            if(results.length > 0) {
                var result = results[results.length - 1];
                var lastFile = result.zippedFiles[result.zippedFiles.length - 1];
                if (lastFile)
                    console.log('last key ', lastFile.Key); // next time start from here
            }
        }
    });

Zip to S3 file

// if no path is given for the S3 zip file, it will be placed in the same S3 folder
zipper.zipToS3File ({
        s3FolderName: 'myBucketFolderName'
        , startKey: 'keyOfLastFileIZipped' // optional
        , s3ZipFileName: 'myS3File.zip'
        , tmpDir: "/tmp" // optional, defaults to node_modules/aws-s3-zipper
    },function(err,result){
        if(err)
            console.error(err);
        else{
            var lastFile = result.zippedFiles[result.zippedFiles.length-1];
            if(lastFile)
                console.log('last key ', lastFile.Key); // next time start from here
        }
});

Zip fragments to S3

zipper.zipToS3FileFragments({
    s3FolderName: 'myBucketFolderName'
    , startKey: 'keyOfLastFileIZipped' // optional
    , s3ZipFileName: 'myS3File.zip'
    , maxFileCount: 5
    , maxFileSize: 1024*1024
    , tmpDir: "/tmp" // optional, defaults to node_modules/aws-s3-zipper
    },function(err, results){
    if(err)
        console.error(err);
    else if(results.length > 0) {
        var result = results[results.length - 1];
        var lastFile = result.zippedFiles[result.zippedFiles.length - 1];
        if (lastFile)
            console.log('last key ', lastFile.Key); // next time start from here
    }
});

The Details

init

Either from the constructor or from the init(config) function, you can pass along the AWS config object:

{
    accessKeyId: [Your access id],
    secretAccessKey: [your access key],
    region: [the region of your S3 bucket],
    bucket: [your bucket name],
    endpoint: [optional, for use with S3-compatible services]
}
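
The same object can also be handed to init after construction, for example to re-point an existing zipper at another bucket (a sketch; the credential and bucket values are placeholders):

var zipper = new S3Zipper(config); // config as documented above

// later: swap credentials, region, or bucket on the same instance
zipper.init({
    accessKeyId: 'XXXX',
    secretAccessKey: 'XXXX',
    region: 'eu-west-1',
    bucket: 'myOtherBucket'
});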

If using temporary credentials

{
    accessKeyId: [Your access id],
    secretAccessKey: [your access key],
    sessionToken: [your session token],
    region: [the region of your S3 bucket],
    bucket: [your bucket name],
    endpoint: [optional, for use with S3-compatible services]
}
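
One way to obtain such temporary credentials is via STS (a sketch assuming the aws-sdk v2 STS client; the 900-second duration is illustrative):

var AWS = require('aws-sdk');

new AWS.STS().getSessionToken({ DurationSeconds: 900 }, function (err, data) {
    if (err) return console.error(err);
    var zipper = new S3Zipper({
        accessKeyId: data.Credentials.AccessKeyId,
        secretAccessKey: data.Credentials.SecretAccessKey,
        sessionToken: data.Credentials.SessionToken,
        region: 'us-west-2',
        bucket: 'XXX'
    });
    // use zipper as usual
});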

filterOutFiles(file)

Override this function when you want to filter out certain files. The file param passed to you follows the AWS file format:

  • file
// as of when this document was written
{
  Key: [file key], // this is what you use to keep track of where you left off
  ETag: [file tag],
  LastModified: [I'm sure you get it],
  Owner: {},
  Size: [in bytes],
  StorageClass: [type of storage]
}
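
Any of these fields can drive the filter. A sketch that skips empty files and anything older than a cutoff (the 24-hour window is illustrative):

var cutoff = Date.now() - 24 * 60 * 60 * 1000; // illustrative: last 24 hours only

zipper.filterOutFiles = function (file) {
    if (file.Size === 0) // skip empty files
        return null;
    if (new Date(file.LastModified).getTime() < cutoff) // skip older files
        return null;
    return file;
};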

getFiles: function(params,callback)

Get a list of files in the bucket folder

  • params object
    • folderName: the name of the folder in the bucket
    • startKey: optional. return files listed after this file key
    • recursive: bool optional. whether to zip nested folders or not
  • callback(err,result): the function you want called when the list returns
    • err: error object if it exists
    • result:
      • files: array of files found
      • totalFilesScanned: total number of files scanned, including files filtered out by the filterOutFiles function
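
A minimal sketch of listing a folder without zipping it (the folder name is a placeholder):

zipper.getFiles({
    folderName: 'myBucketFolderName'
    , startKey: null
    , recursive: true
}, function (err, result) {
    if (err) return console.error(err);
    console.log('found ' + result.files.length + ' files, scanned ' + result.totalFilesScanned);
});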

streamZipDataTo: function (params, callback)

Use this when you want a stream to pipe raw zip data to, for example to stream the zip file directly to an HTTP response.

  • params object
    • pipe: the pipe to which you want the stream to feed
    • folderName: the name of the bucket folder you want to stream
    • startKey: optional. start zipping after this file key
    • recursive: bool optional. whether to zip nested folders or not
  • callback(err,result): call this function when done
    • err: the error object if any
    • result: the resulting archiver zip object with the attached property 'manifest', which is an array of the files it zipped
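
A sketch of reading the manifest, assuming its entries carry the same Key field as the file objects above (the local write stream is just an example pipe target):

var fs = require('fs');

zipper.streamZipDataTo({
    pipe: fs.createWriteStream('./myLocalFile.zip')
    , folderName: 'myBucketFolderName'
    , recursive: true
}, function (err, result) {
    if (err) return console.error(err);
    result.manifest.forEach(function (file) {
        console.log('zipped', file.Key); // each entry is a zipped file
    });
});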

zipToS3File: function (params ,callback)

Zip files in an S3 folder and place the zip file back on S3.

  • params object
    • s3FolderName: the name of the bucket folder you want to stream
    • startKey: optional. start zipping after this file key
    • s3ZipFileName: the name of the new S3 zip file, including its path. If no path is given, it will default to the same S3 folder
    • recursive: bool optional. whether to zip nested folders or not
  • callback(err,result): call this function when done
    • err: the error object if any
    • result: the resulting archiver zip object with the attached property 'manifest', which is an array of the files it zipped

zipToS3FileFragments: function (params , callback)

  • params object
    • s3FolderName: the name of the bucket folder you want to stream
    • startKey: optional. start zipping after this file key
    • s3ZipFileName: the pattern for the names of the S3 zip files to be uploaded. Fragments will have an underscore and index appended to the file name, e.g. ["allImages_1.zip","allImages_2.zip","allImages_3.zip"]
    • maxFileCount: Optional. Maximum number of files to zip in a single fragment.
    • maxFileSize: Optional. Maximum bytes to fit into a single zip fragment. Note: if a file is found larger than the limit, a separate fragment will be created just for it.
    • recursive: bool optional. whether to zip nested folders or not
  • callback(err,results): call this function when done
    • err: the error object if any
    • results: the array of results

zipToFile: function (params ,callback)

Zip files to a local zip file.

  • params object
    • s3FolderName: the name of the bucket folder you want to stream
    • startKey: optional. start zipping after this file key
    • zipFileName: the name of the new local zip file including its path.
    • recursive: bool optional. whether to zip nested folders or not
  • callback(err,result): call this function when done
    • err: the error object if any
    • result: the resulting archiver zip object with the attached property 'manifest', which is an array of the files it zipped

zipToFileFragments: function (params,callback)

  • params object
    • s3FolderName: the name of the bucket folder you want to stream
    • startKey: optional. start zipping after this file key
    • zipFileName: the pattern for the names of the zip files to be created. Fragments will have an underscore and index appended to the file name, e.g. ["allImages_1.zip","allImages_2.zip","allImages_3.zip"]
    • maxFileCount: Optional. Maximum number of files to zip in a single fragment.
    • maxFileSize: Optional. Maximum bytes to fit into a single zip fragment. Note: if a file is found larger than the limit, a separate fragment will be created just for it.
    • recursive: bool optional. whether to zip nested folders or not
  • callback(err,results): call this function when done
    • err: the error object if any
    • results: the array of results

aws-s3-zipper's People

Contributors

andrewsomething, arturojain, bradleyfowler123, danielhindi, dependabot[bot], ntnbst, relly, rufushonour, scottmont, tmjoen, ygalbel

aws-s3-zipper's Issues

Breaking Unexpectedly

I have a situation where I need to zip a large number of files on S3, but it breaks unexpectedly without giving any error. Is there a way I can find out why it is breaking? It dies unexpectedly. I am running this on an EC2 t2.medium instance.

Does this stream to Fragments for download?

This might just be me not understanding the documentation, but which one of the functions can I use to stream lots of files into multiple zip files to download from S3, just like how Google Drive does it when you download multiple files?

I have tried zipToFileFragments, but it just puts zip files on my server and the client never receives a file. So is this the right function? Or does the stream only give one large file?

I was hoping that zipToFileFragments was just the Fragment version of "streamZipDataTo"

I just need some clarification/help to get this working

should it get credentials from AWS.config?

If AWS.config credentials are set, it should use them instead of asking for them again.

I'm doing this as a fix:

AWS.config.getCredentials(function (err) {
  if (!err) {
    new S3zip({
      accessKeyId: AWS.config.credentials.accessKeyId,
      secretAccessKey: AWS.config.credentials.secretAccessKey,
      region: AWS.config.region,
      bucket: < bucket >
    })
  }
})

zipToFileFragments: maxFileCount not producing expected results?

Having read through the readme, I'm looking to utilise the "Zip fragments to the local file system with a filename pattern and a maximum file count" process.

As per the directions, I'm using the maxFileCount parameter but I'm not seeing it produce the expected results.

E.g., I have the following snippet:

zipper.zipToFileFragments({
    s3FolderName: 'folder1'
    , startKey: null
    , zipFileName: './myLocalFile.zip'
    , maxFileCount: 1
}, function(err, results){
    if(err)
        console.error(err);
    else{
        if(results.length > 0) {
            var result = results[results.length - 1];
            var lastFile = result.zippedFiles[result.zippedFiles.length - 1];
            if (lastFile)
                console.log('last key ', lastFile.Key); // next time start from here
        }
    }
});

My S3 bucket folder contains 3 files, and I wanted to test this out by creating 3 zips locally, with one file in each zip; however, I always seem to get just one zip with all the files in it. I've tested this in other S3 buckets with more files and altered the maxFileCount value, but every run puts all files in one zip. Based on the readme, I'd expect this to produce multiple zips, each containing the number of files specified above.

Could anyone please advise?
Thanks in advance!

Create zip from a list of files rather than a directory

Is it possible to create the zip from a list of files rather than a directory?
I want to include files from different directories on s3 in the zip, scanning the whole bucket and excluding everything except my list seems inefficient.

Is it possible to do that with this script?
thanks

Recursive key doesn't work

I'm trying this with "zipper.streamZipDataTo()"

My current folder structure on S3 is as uploads/users/data/userId/files
So even after passing recursive: false, the out response file has nested folders from users i.e. folder structure of the response file is users/data/userId/files

NOTE: I've tried passing both true and false for recursive; in both cases the output is the same, i.e. nested folders, while what I expect is to have the files directly when you open the zip.

Versions:
aws-s3-zipper: 1.3.2
Node.js: 12.13.0
OS: Windows 10

example for streamZipDataTo

Hello,

it would be very nice to have an example for this function, as I don't quite get how to use it.
I want to zip a folder from S3 and stream it back to S3 without writing to disk. Any suggestions on how to accomplish this? What kind of object does the pipe param expect?

Thanks in advance,
Simon

Are accessKeyId and secretAccessKey required to get a zipper object?

We are using IAM roles to access the S3 bucket, so I am passing only bucket and region in the config object. When the above values are passed, we get the zipper object as {}. I would like to check whether accessKeyId and secretAccessKey are mandatory or whether they can be bypassed.

zipToS3File with recursive flag throwing error

Hello There,
var params_zip = {
    s3FolderName: req.params.sr_id,
    s3ZipFileName: req.params.sr_id + '.zip',
    recursive: true
};
//console.log("zipper params test: ", params_zip);
zipper.zipToS3File(params_zip
    , function(err, result){
        //console.log('zipper error: ', err);
        //console.log('zipper result: ', result);
        if(err){
            console.error(err);
            callback(true, false);
        }
        else{
            var lastFile = result.zippedFiles[result.zippedFiles.length - 1];
            if(lastFile){
                console.log('last key ', lastFile.Key); // next time start from here
            }
            callback(false, true);
        }
    });

Please help me understand the issue, or if there is a solution, please provide it.

Root folder possible?

Is it possible to zip all files from the root of an S3 bucket, with no starting folder? I've tried s3FolderName: "", s3FolderName: ".", and s3FolderName: "/".

Authorisation mechanism not supported

Error:
InvalidRequest: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

All regions support V4, but US-Standard and many (though not all) other regions also support the older scheme, Signature Version 2 ("V2"). This module currently uses the V2 method. See Stack Overflow for more details.

non-recursive?

Hi there,

Just wondering if this is meant to be non-recursive. I've got a "folder" in a bucket with several "sub-folders" (the quotes are to indicate that yes, I realize there are just keys and files, but no real folders).

The structure is something like:

bucket
    base folder
        user folder
            <various sub folders with sub-folders and files underneath>

I'm calling the zip on "base folder/user folder" and because there aren't any files here, merely folders, I'm getting "no files zipped. nothing to upload".

If I drop a file in the "user folder" the zipToS3File call zips up that file, but doesn't find any of the files/folders living under the other folders.

I'm going to see if I can make a patch to make this work (I've actually done this by hand with the s3 sdk, but I want to see if your stuff is faster) but anything you can contribute would be much appreciated.

Cheers, and thanks for the npm module. Nice stuff, I think ;)

Corrupted zip file result in zipToS3File

Hi Daniel,

I was getting corrupted zip files when using [email protected] with archiver version 0.21.*. The S3 bucket had around 3,500 images at about 1.8 KB each. I did not experience this on buckets with a small number of images.

zipper.filterOutFiles = function(file){
    if(file.Key.indexOf('.zip') >= 0) // filter out zip files
        return null;
    else
        return file;
};

zipper.zipToS3File({s3FolderName: `test`, s3ZipFileName: `test/export.zip`}, (err, result) => {
    if(err)
        reject(err);
    else{
        console.log(`Done: ${result}`);
    }
});

After upgrading to the latest version [email protected] (this is not on npm) and updating archiver to 3.1.1, I was no longer getting corrupted zip files.

Can we please upgrade the archiver version to 3.1.1? Forgive my ignorance, I am not sure if or what this will break in aws-s3-zipper, as this is a three-major-version upgrade.

https://github.com/DanielHindi/aws-s3-zipper/blob/master/package.json#L7

RAM not released if downloading is cancelled, and files corrupted after complete download using streamZipDataTo

I'm using the latest version of aws-s3-zipper to download a complete S3 folder. When the download completes, the files are corrupted, and if the user cancels the download before completion, it holds RAM and does not release it.

Steps:

1. Create a simple Node.js Express server with a download API.
2. Hit /download/{gameId}, where gameId is the folder in the S3 bucket.
3. It will start the download.

here is the code:

app.get("/download/:gameId", async (req, res, next) => {
  try {
    const { gameId } = req.params;
    const params = {
      Bucket: config.bucket.name,
      Prefix: `cloudpokernight/${gameId}`,
    };
    const command = new ListObjectsV2Command(params);
    const data = await s3.send(command);

    if (!data || !data.Contents || data.Contents.length === 0) {
      return res.status(404).json({ code: 404, error: 'Folder not found in S3' });
    }
    res.set("content-type", "application/zip");
    res.set("Content-Disposition", `attachment; filename=${gameId}.zip`);
    zipper.streamZipDataTo(
      {
        pipe: res,
        folderName: params.Prefix,
        startKey: null,
        recursive: true,
      },
      function (err, result) {
        if (err) console.error(err);
        else {
          console.log(result);
        }
      },
    );
  } catch (err) {
    next(err);
  }
});

Fix endpoint & add s3PathStyle

Hi,

There was a PR to add endpoint configuration: #23, but it was partially removed after this one: #36. The README still shows that there is a configuration for that, but there is not.

PR to add endpoint configuration and s3ForcePathStyle: #52

Not an issue: Tumblr is using this in production

Figured I would leave a quick response to let you know that Tumblr (Yahoo is the parent company) is seemingly using this as a way to manage backups to help export blogs.

Just something I found interesting this morning.

Zip creation of large folder on S3 using aws-s3-zipper and Node js

I'm using the aws-s3-zipper module Link - https://www.npmjs.com/package/aws-s3-zipper

I'm zipping a large folder (containing 20 GB of images) on S3.

var S3Zipper = require('aws-s3-zipper');
var config = {}; // AWS config details
var zipper = new S3Zipper(config);

zipper.zipToS3FileFragments({
    s3FolderName: foldername
    , s3ZipFileName: foldername + '.zip'
    //, maxFileCount: 50
    , maxFileSize: 1024 * 1024 * 1024 // 1 GB for each fragment
    , recursive: false
}, function (err, results) {
    // code on file zipped successfully
});

The problem is that s3-zipper is creating zips which include the already-zipped files.

For example:
Suppose my bucket folder contains 5,000 images totalling 20 GB.
I have provided a 1 GB size limit for each fragment in s3-zipper,
so as per this I'm expecting 20 zips of 1 GB each (see the code sample above).

But it creates around 40 or more zips of around 1 GB each.

I am new to AWS, please help me. Thanks!

AccessDenied: Access Denied

I am getting the following error trying to use this module:
Debug: internal, implementation, error AccessDenied: Access Denied
    at Request.extractError (<somepath>/aws-s3-zipper/node_modules/aws-sdk/lib/services/s3.js:334:35)
    at Request.callListeners (<somepath>/aws-s3-zipper/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
    at Request.emit (<somepath>/aws-s3-zipper/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (<somepath>/aws-s3-zipper/node_modules/aws-sdk/lib/request.js:596:14)
    at Request.transition (<somepath>/aws-s3-zipper/node_modules/aws-sdk/lib/request.js:21:10)

Using the same credentials, I am able to list the objects in the prefix with aws-sdk 2.56.

In this code piece:

var client = s3.createClient(options);

var realParams = {
    s3Params: bucketParams,
    recursive: params.recursive
};

var emitter = client.listObjects(realParams);

Looks like the aws-sdk version being used here is 2.2. So, if I try to use a similar JSON format for listObjects with s3Params, it says:

  • MissingRequiredParameter: Missing required key 'Bucket' in params
  • UnexpectedParameter: Unexpected key 's3Params' found in params

I found that the emitter is not getting the data or end event.
Any help is appreciated.

zips in s3 bucket are corrupted

Hello Daniel, I'm using your tool to zip images stored inside an S3 bucket, with your code running as part of an AWS Lambda, and I'm facing the problem that for larger image sets the resulting zip is corrupted.
I set the Lambda memory limit to the maximum allowed by Amazon, 1536 MB, and still have this issue, even though the CloudWatch log shows that the Lambda call used 300-500 MB and reports successful completion; the zip is created, though corrupted. When I repeat the same with no more than 4-5 images of 4-5 MB each, it creates a healthy zip.

Any suggestion is highly appreciated.

Thank you for your great tool!

Piping streamZipDataTo directly to Express response results in corrupt file

I've got the following code:

app.all('/', function (req, res) {
    const options = {
        pipe: res,
        folderName: `/my/S3/folder`,
        recursive: true,
    };

    res.set('Content-Type', 'application/zip, application/octet-stream');
    res.setHeader('Content-disposition', 'attachment; filename=myZip.zip');

    return q.ninvoke(zipper, 'streamZipDataTo', options)
});

The response is received and interpreted as a zip file successfully. However, the file cannot be opened, as if it were corrupted.

I've tested the same code using zipToFile, and the zip file saved locally can be opened and contains the right data. Furthermore, the streamed file is significantly smaller than the locally stored alternative. It's almost as if the response is ended too early?

The zip file size I've been testing with is just under 5 MB. Not sure if the stream is chunked but not cycled to the end? I'm just speculating here; I haven't had the chance to clone and test locally, but I will if I come around to it in the near future.

In the mean time, do you have any ideas as to what the issue could be?

No folder structure inside Zip file

Hi,

I have the following structure inside my S3 bucket: my-bucket/2018/02/06/...files.json

When the zip file is generated, it creates the folder structure inside the zip: 2018/02/06/..files.json

Is there an option to zip only the files, without the folder structure?

myZip.zip

  • file1.json
  • file2.json

instead of:

  • 2018/
    - 02/
      - 06/
        - file1.json
        - file2.json

cannot run with Node.js 12.x environment

We have a demo Lambda which invokes aws-s3-zipper, running against Node.js 6.10, and it's working.
Since Node.js 6.10 is end of life, I exported the demo Lambda to another environment with Node.js 12.x or Node.js 10.0; both of them return the error below.
return message:
{
"errorType": "Runtime.ExitError",
"errorMessage": "RequestId: 05ebdd8f-3860-47e1-9ccb-c88e08b6bcf0 Error: Runtime exited with error: signal: aborted (core dumped)"
}

detail log:
START RequestId: 05ebdd8f-3860-47e1-9ccb-c88e08b6bcf0 Version: $LATEST
/var/lang/bin/node[8]: ../src/node_contextify.cc:649:static void node::contextify::ContextifyScript::New(const v8::FunctionCallbackInfo<v8::Value>&): Assertion `args[1]->IsString()' failed.
1: 0x55788121a0f3 node::Abort() [/var/lang/bin/node]
2: 0x55788121a18c [/var/lang/bin/node]
3: 0x55788120baf6 node::contextify::ContextifyScript::New(v8::FunctionCallbackInfo<v8::Value> const&) [/var/lang/bin/node]
4: 0x55788141cf9d [/var/lang/bin/node]
5: 0x55788141dd8c v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/var/lang/bin/node]
6: 0x557881c81619 [/var/lang/bin/node]
END RequestId: 05ebdd8f-3860-47e1-9ccb-c88e08b6bcf0
REPORT RequestId: 05ebdd8f-3860-47e1-9ccb-c88e08b6bcf0 Duration: 5073.67 ms Billed Duration: 5100 ms Memory Size: 128 MB Max Memory Used: 112 MB Init Duration: 120.55 ms

Breaking in AWS instance

I am getting an unexpected failure: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. This happens only on an Amazon AWS instance; it works fine locally. Weird, please help. I am using the region eu-central-1 and providing accessKeyId and secretAccessKey. The same keys and region work fine on my local machine.

Hitting this error: add a declaration (.d.ts) file containing `declare module 'aws-s3-zipper'`?

Hi!

I am hitting this error:

Could not find a declaration file for module 'aws-s3-zipper'. '...node_modules/aws-s3-zipper/index.js' implicitly has an 'any' type.

  Try `npm install @types/aws-s3-zipper` if it exists or add a new declaration (.d.ts) file containing `declare module 'aws-s3-zipper';`

then I ran:

yarn add --dev @types/aws-s3-zipper

...

error An unexpected error occurred: "https://registry.yarnpkg.com/@types%2faws-s3-zipper: Not found".

I was wondering... Do we need to add something to the package.json file in order to prevent this error?
