
q-io's Issues

Reader.forEach() does not support thisp

The signature of Reader.forEach() is not consistent with the same method on the BufferStream class: the latter supports a second "thisp" argument, the former does not.
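
For illustration, a minimal sketch of the difference (the file path is a placeholder, and I'm assuming the thisp argument would behave like Array.prototype.forEach's):

var FS = require("q-io/fs");

var collector = {
    chunks: [],
    collect: function (chunk) {
        this.chunks.push(chunk);
    }
};

FS.open("some-file.txt")
.then(function (reader) {
    // Desired, mirroring BufferStream.forEach(callback, thisp):
    //     return reader.forEach(collector.collect, collector);
    // Current workaround, since Reader.forEach() accepts no thisp:
    return reader.forEach(collector.collect.bind(collector));
})
.then(function () {
    console.log(collector.chunks.length + " chunks read");
});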

How can I copy a stream using promises?

I have a readable stream that I would normally pipe into a writable stream.

readable.pipe(writable);

How can I achieve the same "copying" of streams using q-io?
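
For what it's worth, the pattern I have in mind is a chunk-by-chunk copy (a sketch, assuming both ends come from q-io, e.g. FS.open, and expose forEach()/write()/close(); the file names are placeholders):

var Q = require("q");
var FS = require("q-io/fs");

Q.all([
    FS.open("source.txt"),               // reader
    FS.open("target.txt", {flags: "w"})  // assuming a "w" flag yields a writer
])
.spread(function (reader, writer) {
    // Copy chunk by chunk; assuming forEach waits on the returned write promise.
    return reader.forEach(function (chunk) {
        return writer.write(chunk);
    })
    .then(function () {
        return writer.close();
    });
})
.then(function () {
    console.log("copy finished");
});

(For plain files, FS.copy(source, target) appears to do this already.)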

lastModified() generates an unhandled rejected promise warning

lastModified() generates an unhandled rejected promise warning on program exit, even though the call has a .fail() handler. Example to reproduce:

var QFS = require('q-io/fs');

QFS.lastModified('/nonexistent')
.fail(function(err) {
    console.log('error caught!');
});

Add listTreeStream()

Provide a streaming version of listTree() that returns a stream of paths to be consumed on demand via a Reader. Implement in v2.
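
A hypothetical usage sketch (the API name and shape are not final):

var FS = require("q-io/fs");

FS.listTreeStream("stuff")
.forEach(function (path) {
    console.log(path); // each path arrives as the traversal proceeds
})
.then(function () {
    console.log("done");
});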

listTree performance

I noticed that listTree() was slower than I expected for large numbers of files, so I ran some benchmarks to narrow down the slowdown. It looks like there is an opportunity to close a performance gap in either Q or Q-IO: the benchmark code slows down very noticeably when Node-style callbacks are replaced with Q promises, so that may be a good place to investigate optimization. HTH! More info and/or pull requests to follow, I hope.

Times are in seconds; lower Real time is better.

Test                                        Paths returned    Real     User     Sys
require('q-io/fs').listTree('stuff')        96737             40.392   40.373   2.918
require('glob')('stuff/**', { dot:true })   96737             12.873   12.955   2.431
syncListTree('stuff')                       96737              3.302    2.203   1.103
asyncFindTree('stuff')                      96737              3.175    3.290   2.563
asyncFindTreeQstat('stuff')                 96737             18.138   18.064   2.769
asyncFindTreeQstatQreaddir('stuff')         96737             21.262   21.482   2.688
asyncFindTreeQ('stuff')                     96737             27.629   27.736   3.027
find | stat                                 96737              1.776    0.619   2.962

Edit: I updated a wrong measurement for require('q-io/fs').listTree('stuff'). The correct measurement is ~40 seconds (not 35 seconds as I had written).

The vanilla sync and async times are very good. The shell find | stat result is also provided as a baseline reference outside of Node.js. I've also included the glob() tool, which looks like it could benefit from optimization as well. Our goal is to get Q and/or Q-IO almost as fast as vanilla async.

The benchmark programs are:

require('q-io/fs').listTree('stuff')
.then(function(paths) {
  console.log('count =', paths.length);
});


require('glob')('stuff/**', { dot:true }, function(err, paths) {
  console.log('count =', paths.length);
});


var FS = require('fs');
var path = require('path');
function syncListTree(basePath) {
  var stack = [basePath];
  var result = [];
  while (stack.length) {
    var basePath = stack.pop();
    var stat = FS.statSync(basePath);
    if (stat.isDirectory()) {
      var list = FS.readdirSync(basePath);
      stack = stack.concat(list.map(function(x) { return path.join(basePath, x); }));
    }
    result.push(basePath);
  }
  return result;
}
console.log('count =', syncListTree('stuff').length);


var path = require('path');
var FS = require('fs');
function asyncFindTree(basePath, cb) {
  FS.stat(basePath, function(error, stat) {
    if (stat.isDirectory()) {
      FS.readdir(basePath, function(error, children) {
        children = children.map(function(child) { return path.join(basePath, child); });
        var count = children.length;
        var paths_ = [[basePath]];
        function checkCompletion() {
          if (count == 0) {
            var result = Array.prototype.concat.apply([], paths_);
            cb(null, result);
          }
        }
        checkCompletion(); // <-- for the case of children.length == 0
        children.forEach(function(child) {
          asyncFindTree(child, function(err, paths) {
            --count;
            paths_.push(paths);
            checkCompletion();
          });
        });
      });
    } else {
      cb(null, [basePath]);
    }
  });
}
asyncFindTree('stuff', function(error, paths) {
  console.log('count =', paths.length);
});


var Q = require('q');
var path = require('path');
var FS = require('fs');
function asyncFindTreeQstat(basePath, cb) {
  return Q.nfcall(FS.stat, basePath)
  .then(function(stat) {
    if (stat.isDirectory()) {
      FS.readdir(basePath, function(error, children) {
        children = children.map(function(child) { return path.join(basePath, child); });
        var count = children.length;
        var paths_ = [[basePath]];
        function checkCompletion() {
          if (count == 0) {
            var result = Array.prototype.concat.apply([], paths_);
            cb(null, result);
          }
        }
        checkCompletion(); // <-- for the case of children.length == 0
        children.forEach(function(child) {
          asyncFindTreeQstat(child, function(err, paths) {
            --count;
            paths_.push(paths);
            checkCompletion();
          });
        });
      });
    } else {
      cb(null, [basePath]);
    }
  });
}
asyncFindTreeQstat('stuff', function(error, paths) {
  console.log('count =', paths.length);
});


var Q = require('q');
var path = require('path');
var FS = require('fs');
function asyncFindTreeQstatQreaddir(basePath, cb) {
  return Q.nfcall(FS.stat, basePath)
  .then(function(stat) {
    if (stat.isDirectory()) {
      return Q.nfcall(FS.readdir, basePath)
      .then(function(children) {
        children = children.map(function(child) { return path.join(basePath, child); });
        var count = children.length;
        var paths_ = [[basePath]];
        function checkCompletion() {
          if (count == 0) {
            var result = Array.prototype.concat.apply([], paths_);
            cb(null, result);
          }
        }
        checkCompletion(); // <-- for the case of children.length == 0
        children.forEach(function(child) {
          asyncFindTreeQstatQreaddir(child, function(err, paths) {
            --count;
            paths_.push(paths);
            checkCompletion();
          });
        });
      });
    } else {
      cb(null, [basePath]);
    }
  });
}
asyncFindTreeQstatQreaddir('stuff', function(error, paths) {
  console.log('count =', paths.length);
});


var Q = require('q');
var path = require('path');
var FS = require('fs');
function asyncFindTreeQ(basePath) {
  return Q.nfcall(FS.stat, basePath)
  .then(function(stat) {
    if (stat.isDirectory()) {
      return Q.nfcall(FS.readdir, basePath)
      .then(function(children) {
        children = children.map(function(child) { return path.join(basePath, child); });
        return Q.all(children.map(function(child) {
          return asyncFindTreeQ(child)
        }))
        .then(function(paths_) {
          return Array.prototype.concat.apply([basePath], paths_);
        });
      });
    } else {
      return [basePath];
    }
  });
}
asyncFindTreeQ('stuff')
.then(function(paths) {
  console.log('count =', paths.length);
});

time find -L stuff | xargs stat --format '%Y :%y %n' | wc --lines

Progress on file reader

The file reader returned by FS.open should have an observable progress measurement. This would involve a Node.js fstat to get the size and assign it to the stream’s length.

Via #39
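
A rough sketch of the idea using what exists today, assuming the stat object exposes size (it wraps Node's fs.Stats) and that a "b" flag yields binary chunks; the file name is a placeholder:

var FS = require("q-io/fs");

FS.stat("big-file.bin")
.then(function (stat) {
    var total = stat.size;
    var seen = 0;
    return FS.open("big-file.bin", {flags: "rb"})
    .then(function (reader) {
        return reader.forEach(function (chunk) {
            seen += chunk.length;
            console.log(Math.round(100 * seen / total) + "% read");
        });
    });
});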

Specify Q as a peer dependency

Instead of specifying Q as a regular dependency, use a peer dependency:

  "peerDependencies": {
    "q": ">=0.9.7 <1.1"
  },

then installing Q-IO won't install another copy of Q; it will reuse the parent project's copy of Q.

This will keep promise versions consistent across Q and Q-IO, avoiding the gotcha in kriskowal/q#519.

move(src,tar) does not work between network shares

You have to manually replace
move(src, tar)
with
copyTree(src, tar)
removeTree(src)
if you intend to use it across different network shares.
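
In other words, something like this (a sketch; the paths are placeholders):

var FS = require("q-io/fs");
var src = "//share-a/some/dir"; // placeholder paths
var tar = "//share-b/some/dir";

FS.copyTree(src, tar)
.then(function () {
    return FS.removeTree(src);
});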

best regards and thanks

Occurrence:

  • Windows 7
  • Node 0.8

Feature: Make q-io/fs more graceful (workaround EMFILE errors)

I would like to request consideration of using graceful-fs rather than fs, in particular to avoid EMFILE errors like this one.

Here is one thought about how to implement this:

var FS = require('q-io/fs'); // <-- if user wants standard fs
var GFS = require('q-io/graceful-fs'); // <-- if user wants graceful-fs

Alternatively, would it be feasible to implement "graceful" functionality directly in q-io/fs? (by retrying operations that fail with EMFILE, for instance, like graceful-fs does)
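
A rough sketch of the retry idea (not graceful-fs itself; the delay and retry count are arbitrary):

var Q = require("q");

function retryOnEMFILE(operation, retries) {
    return Q.fcall(operation)
    .fail(function (error) {
        if (error && error.code === "EMFILE" && retries > 0) {
            // Back off briefly, then try the operation again.
            return Q.delay(100).then(function () {
                return retryOnEMFILE(operation, retries - 1);
            });
        }
        throw error;
    });
}

// Usage sketch: retryOnEMFILE(function () { return FS.open(path); }, 10)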

(Btw, thanks for this awesome library. It's terrific!)

Empty file in FS.stat or FS.read directly after FS.write

I have a weird case where an FS.write succeeds but the next FS.stat or FS.read on that file returns zero size (or empty content). When I open the file in my editor I do see the expected content.

Of course you usually wouldn't read the same file immediately after writing it, but this happens in my unit tests.

This only happens when the call to FS.stat or FS.read follows very closely behind the FS.write.

If I replace FS.stat with a Node.js wrapper, like Q.nfcall(fs.stat, file), I get the same behaviour.

If I put some other promises between the write and the reads (like a bunch of chained then()s with FS.stats on the same file), then the file checks out correctly.

If I replace those filler FS.stats with a Q.delay(100) and then check, I get the empty file again.
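
Roughly, a repro sketch of what I'm describing (the file name and delay are placeholders):

var Q = require("q");
var FS = require("q-io/fs");

FS.write("dummy.txt", "hello world")
.then(function () {
    return Q.delay(100); // filler delay instead of chained FS.stats
})
.then(function () {
    return FS.read("dummy.txt");
})
.then(function (content) {
    console.log("length:", content.length); // unexpectedly 0 here
});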

So it looks like there is something going on with the filesystem: is there some sort of stat caching behaviour going on in Q-io or Node?

This is on Node 0.10.17, on Windows Vista 64-bit and also on an Ubuntu VM using Vagrant.

Cannot serve binaries with Apps.file

When I try to serve images with Apps.file they are encoded as UTF-8 (and stop being images).

This is because FS.open is called without flags or charset (https://github.com/kriskowal/q-io/blob/master/http-apps/fs.js#L128), and the default behavior of FS.open is to use the UTF-8 charset when no flags are set (https://github.com/kriskowal/q-io/blob/master/fs.js#L79), which in turn makes the reader join the results with .join("") (https://github.com/kriskowal/q-io/blob/master/reader.js#L47).

Image serving worked before, but the default behavior of FS.open was changed a while ago (by mistake?) without any comment: fbb2177#L5R80.

This could be fixed either by reverting the default behavior of FS.open or by adding a binary flag to its use in Apps.file.
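
For illustration, a sketch of what binary-mode opening would look like, assuming FS.open honours a "b" flag as fs.js suggests ("image.png" is a placeholder):

var FS = require("q-io/fs");

FS.open("image.png", {flags: "rb"})
.then(function (reader) {
    return reader.read(); // should yield a Buffer instead of joined utf-8 strings
})
.then(function (content) {
    console.log(Buffer.isBuffer(content)); // expected: true
});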

rerooted FS missing some functions

I'm not sure if I'm doing it wrong (an example in the docs would be great), but when I simply reroot() an FS, the rerooted version is missing some functions, e.g.:

var FS = require('q-io/fs');

function checkFuncs(obj) {
    checkFunc(obj, 'makeDirectory');
    checkFunc(obj, 'makeTree');
}
function checkFunc(obj, name) {
    console.log(name + ": " + (obj[name]?"exists":"missing"));
}

console.log("Main FS:");
checkFuncs(FS);

FS.reroot('/')
.then(function(fs) {
    console.log("Rerooted FS:");
    checkFuncs(fs);
});

produces:

$ node rerooted_test.js 
Main FS:
makeDirectory: exists
makeTree: exists
Rerooted FS:
makeDirectory: missing
makeTree: exists

This causes problems because calling makeTree() on a rerooted FS fails: it eventually calls this.makeDirectory() internally, which isn't a function.

So am I doing something wrong or is this just a bug?

Undefined agent causes error in Node's HTTP module

HTTP read() function fails when given string URLs:

 require('q-io/http').read('http://example.com').done()

TypeError: Cannot read property 'defaultPort' of undefined
at new ClientRequest (_http_client.js:50:54)
at Agent.request (_http_agent.js:301:10)
at Object.exports.request (http.js:52:22)
at ./node_modules/q-io/http.js:295:29
at _fulfilled (./node_modules/q/q.js:798:54)
at self.promiseDispatch.done (./node_modules/q/q.js:827:30)
at Promise.promise.promiseDispatch (./node_modules/q/q.js:760:13)
at ./node_modules/q/q.js:821:14
at flush (./node_modules/q/q.js:108:17)
at process._tickCallback (node.js:598:11)

This may be because Node doesn't handle an undefined agent:

 require('http').request({agent:undefined})

TypeError: Cannot read property 'defaultPort' of undefined
at new ClientRequest (_http_client.js:50:54)
at Agent.request (_http_agent.js:301:10)
at Object.exports.request (http.js:52:22)
at repl:1:17

Node v0.11.9 on OS X (via homebrew), q-io 1.10.6.

Reader does not include `node`

The docs say that the underlying Node.js stream is available on the Reader and Writer objects as the node attribute, but they don't have it.

It would be really nice if the Reader and Writer could be drop-in supersets of the Node 0.10 streams. I think it would only mean renaming read() to slurp() and exposing the underlying Stream methods as well as the JSGI forEach() and Q .next() methods.

(Actually, perhaps the JSGI forEach should be a decorator added by the JSGI framework when it encounters a stream object)

Proxy-fs

I want to be able to instantiate a proxy FS where the proxy loads and caches all lower-level FS data as methods are called on the proxy object.

I then want to serialize the proxy data to a file and later instantiate a mock-fs object with it.

The idea is that I can pass a virtual FS object to a library, record all FS data needed while the library executes, and on a second run feed the cached FS to the library instead of a real one.

How would this fit into the q-io ecosystem?

Incorporate iconv

In v2, I would like to either move away from accepting charset options on HTTP requests and file openers, or expand the range of accepted charsets using iconv. Regardless, the internal and external interfaces for encoding, decoding, and transcoding should be exposed as intermediate streams, establishing a pipe or pipeThrough convention.

Need FS.walk(path, cb)

We need an FS.walk(path, cb) that permits a file system traversal to stream its results. It will need to use Q.fcall specifically to send a message to a remote promise.
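
A hypothetical usage sketch (the callback signature is an assumption):

var FS = require("q-io/fs");

FS.walk("stuff", function (path) {
    console.log(path); // called for each entry as the traversal proceeds
})
.then(function () {
    console.log("walk complete");
});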

MockFS.write doesn't truncate before writing.

Here's a modification to the existing test case which reveals the bug:

    it("should write a file to a mock filesystem", function () {
        return FS.mock(FS.join(__dirname, "fixture"))
        .then(function (mock) {

            return Q.fcall(function () {
                return mock.write("hello.txt", "Goodbye!\n");
            })
            .then(function () {
                return mock.read("hello.txt");
            })
            .then(function (contents) {
                expect(contents).toBe("Goodbye!\n");
            });
        });
    });

The result:

> jasmine-node spec

.....................................F........................................

Failures:

  1) write should write a file to a mock filesystem
   Message:
     Expected 'Hello, World!
Goodbye!
' to be 'Goodbye!
'.

FS.move and FS.rename

We currently expose the low-level system call rename(2) as move. This does not move across file systems. The current implementation should be renamed to rename, and a move command should be created that composes copyTree and removeTree, perhaps only when rename fails because of a device boundary.

Re montagejs/minit#42
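
A sketch of the proposed composition, written against the proposed names (rename here is today's move), and assuming the rejection preserves Node's error.code:

var FS = require("q-io/fs");

function move(source, target) {
    return FS.rename(source, target)
    .fail(function (error) {
        // Fall back to copy + remove when crossing a device boundary.
        if (error && error.code === "EXDEV") {
            return FS.copyTree(source, target)
            .then(function () {
                return FS.removeTree(source);
            });
        }
        throw error;
    });
}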

q-io/fs's write replaces the content, but q-io/fs-mock's write appends

var fs = require('q-io/fs');

fs.write('dummy', '123').then(function() {
    return fs.write('dummy', '456')
}).then(function() {
    return fs.read('dummy');
}).then(function(content) {
    console.log('content', content);
    // '456'
});


var MockFs = require('q-io/fs-mock');
fs = MockFs({
    'dummy2': '123'
});
fs.write('dummy2', '456').then(function() {
    return fs.write('dummy2', '789');
}).then(function() {
    return fs.read('dummy2');
}).then(function(content) {
    console.log('content mock', content);
    // '123456789'
});

Solve request body copy cancellation problems

In HTTP server requests, we wrap the response.body with Reader, then copy it to the NodeWriter(nodeRequest) to transfer it to the Node.js stack. This caused problems with hanging connections and processes (particularly our own test processes) when the user did not consume or cancel the request.body.

I made an attempt to address the issue by canceling the pipe, but this was foolish, since it broke transfers that exceeded one chunk.

In v2, request.body will have to be explicitly cancelled if it is unneeded, and the tests need to accommodate this. The implicit cancel needs to be removed and a new Q-IO published. This impacts ASAP.

HTTP does not handle non-utf8 charsets

I encountered some issues trying to use q-io/http as a client of sites using the latin1 character set:

  1. The documentation of the charset option is ambiguous in the context of http client. It could mean
    a. either the character set in which the response body will be returned (possibly after iconv) or
    b. the character set used by the server as the response (this seems to be implicitly the interpretation used by q-io http).

    I would certainly wish (a) would be the case, because I want to have the input I get in a consistent character set. Furthermore (b) is problematic because, in general, I have no way of knowing what character set a server will return until I see the content-type response header.

  2. In my tests, q-io silently fails when a character set other than utf-8 is passed as an option. There may be a deeper issue with the NodeReader.

Make HTTP requests cancelable

There is not presently a way to abort an outstanding HTTP request. This would require a mechanism for canceling a promise, which is not yet supported by Q.

Create Collections documentation micro-site

It should be easier to look up what collections implement a method and what methods are implemented by a collection. Create a micro-site that makes it easy to browse in both dimensions.

proposed change to copyTree() to allow pre-existing target directories

The CommonJS/Filesystem/A spec is vague about various behaviors (unless there's standard expected behavior I'm not aware of), but having copyTree() fail because makeDirectory() fails on an existing target directory seems unfortunate and inconsistent with the behavior at the file level, which simply overwrites existing target files (as long as they're writable).

I propose changing copyTree() to allow for preexisting directories. A less desirable (IMO) alternative would be to leave copyTree() as is and add this as, say, "mergeTree()".

  exports.copyTree = function(source, target) {
    var self = this;
    return Q.when(self.stat(source), function(stat) {
      if (stat.isFile()) {
        return self.copy(source, target);
      } else if (stat.isDirectory()) {
        return self.exists(target).then(function(dstExists) {
          var copySubTree = Q.when(self.list(source), function(list) {
            return Q.all(list.map(function(child) {
              return self.copyTree(self.join(source, child), self.join(target, child));
            }));
          });
          if (dstExists) {
            return copySubTree;
          } else {
            return Q.when(self.makeDirectory(target), function() {
              return copySubTree;
            });
          }
        });
      } else if (stat.isSymbolicLink()) {
        return self.symbolicCopy(source, target);
      }
    });
  }; 

Confused about json apps

There are HandleJsonResponse(s), Json and json http apps, and it's unclear how they should be used.

this.GET('foo').json().app(gimmeAnObject) stringifies and OKs the response but doesn't add the JSON MIME type. handleJsonResponse stringifies and adds the MIME type but doesn't OK.

Tell me how it should be done and I'll fix the doc.

delete with files with wildcard

Hello,
Is there a way to delete files with wildcards using q-io?
I tried the remove function but it does not seem to support wildcards.
Did I miss something?
Thanks
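
A possible workaround sketch in the meantime, assuming listTree accepts a guard function (the directory and pattern are placeholders):

var Q = require("q");
var FS = require("q-io/fs");

FS.listTree(".", function (path, stat) {
    // Keep only files matching the "wildcard" pattern.
    return stat.isFile() && /\.log$/.test(path);
})
.then(function (paths) {
    return Q.all(paths.map(function (path) {
        return FS.remove(path);
    }));
});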

HTTP POST w/ body troubles

I can do a POST without a body, e.g. basic auth. When I add a body I run into lots of problems (the POST stalls, doesn't get sent, or gets sent but doesn't make sense).

var api_login_req = {
            url: urls.api.session.post,
            headers: _.pick(req.headers, 'authorization'),
            method: "POST",
            body:"osdijfoisdjfosdf"
        };

        return HTTP.
        request(api_login_req).

The body in the above code causes the request to stall; it doesn't get sent at all. When I comment out the body everything is OK.

Looking in the q-io/http code I see that the body needs a forEach. However, I'm not sure whether Q.when(req.body...) would transform it into something with a forEach. Anyway, I changed my body to this:

var api_login_req = {
            url: urls.api.session.post,
            headers: _.pick(req.headers, 'authorization'),
            method: "POST",
            body:["osdijfoisdjfosdf"]
        };

        return HTTP.
        request(api_login_req).

And my POST gets sent. However, on the receiving end (using Connect with bodyParser) my req.body is empty.

It would be great to have an example of how to use this function.
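
For reference, this is roughly what I expected to work (my assumptions: the body must be a collection with a forEach, and the receiving end needs explicit content-type and content-length headers; the URL is a placeholder):

var HTTP = require("q-io/http");

var payload = "osdijfoisdjfosdf";

HTTP.request({
    url: "http://localhost:3000/api/session",
    method: "POST",
    headers: {
        "content-type": "text/plain",
        "content-length": String(Buffer.byteLength(payload))
    },
    body: [payload] // anything with a forEach
})
.then(function (response) {
    console.log("status:", response.status);
});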

Recommendations for hitting file limit on copyTree?

I'm not seeing a way to manage a directory copy with copyTree() where the number of open files may exceed the OS limit.

If I'm missing something, please let me know.

Is there a way to limit the number of copies happening at once?

req.headers.host does not include the port number when using q-io/http

Hey guys,

I have a problem with q-io/http: I'm sending a new request from within the requested resource, but req.headers.host turns out to be 'localhost' instead of 'localhost:3000'. The request does get sent to the correct URL, but the Express request object does not contain the correct headers when it arrives there.

For reference, the behavior works as expected with require('request'), so I'm switching to that for now unless I can find a fix myself.

Document that there is no main module

Per #6, there is an expectation that q-io exports a main module. Update the documentation to make it clearer that there is no main module, and to list the names of the public modules in the Q-IO namespace: q-io/fs, q-io/http, and q-io/http-apps.

odd behavior with copyTree

followup to #75

I put up a repository here that replicates behavior I'm seeing and can't track down for the life of me. Every time I try to copy the directory in the repository via copyTree, the resulting directory contains only a subset of the files, and the promise never seems to settle into either the fail or the success case.

This is probably something obvious and I'm just overtired, but it's killing me.

You can run the test with node test.js which will just do the copy. Running sh test.sh will remove the targetDir, run the test, and then compare the resulting directories.

My output:

$ sh test.sh
Copying from /Users/jsoverson/development/src/qiotest/sourcedir to /Users/jsoverson/development/src/qiotest/targetDir
Only in sourceDir/js/vendor/ace/build/src: mode-properties.js
Only in sourceDir/js/vendor/ace/build/src: mode-python.js
Only in sourceDir/js/vendor/ace/build/src: mode-r.js
[... cut ...]
Only in sourceDir/js/vendor/ace/build/src: worker-lua.js
Only in sourceDir/js/vendor/ace/build/src: worker-php.js
Only in sourceDir/js/vendor/ace/build/src: worker-xquery.js
files in sourceDir : 265
files in targetDir : 198

If you move or remove enough files so that there are 198 or fewer in sourceDir, the operation succeeds every time.

Tested and saw this same unexpected behavior on:
MBA w/ OSX 10.9.1 - node 0.10.15
MBP w/ OSX 10.8.5 - node 0.10.17

Tested and saw normal, expected behavior on:
Ubuntu 13.10 - node 0.10.15

MockFs.move inconsistency

Given an existing directory myDirectory,
Doing

fs.move("myDirectory", "myDirectory")

will throw "Can't move over existing entry myDirectory"
whereas doing

MockFs.move("myDirectory", "myDirectory")

will succeed.

I'm unsure which one is correct. The fs behavior is strictly correct and might catch potential errors earlier but the mock behavior is much more convenient!

I tried simply removing the equality checks at lines 271 and 303, but that caused some test failures for symlink handling.

Canonical should accept non-existing paths

The reroot implementation assumes that canonical will expand any symbolic links along a path, but will just normalize any trailing terms that do not exist.

Canonical will need to be reimplemented in terms of lstat and realpath.

Alternatively, reroot may need to be reimplemented in terms of the existing canonical. reroot needs to be wary of relative paths that would escape the jail.

Http server very slow

Hi,

I just got into the whole promise/generator issue and tried q-io/http like so:

var http = require("q-io/http");
var Apps = require("q-io/http-apps");

http.Server(function(req, res) {
return Apps.ok("Hello");
}).listen(8080);

Now I did a very small and simple http benchmark using wrk and boom against:

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(8080, '127.0.0.1');
console.log('Server running at http://127.0.0.1:8080/');

While I know this is not even remotely a serious test, I did notice a real difference.

Using the callback-hell version I got around 11,200 req/s, and using q-io/http about 350 req/s. That's roughly 30× slower.

Now, the difference can have multiple causes. First, the q-io/http version involves a lot more JavaScript, and JavaScript is slow. I guess the callback version is so fast because there is almost no JavaScript involved, so it's basically a libuv benchmark.
Or it is some internal q-io work that takes the time.

Maybe you could help me solve this dilemma ;)

open() and copy() misbehave if the destination file is readonly

This seems to be happening because Writer(), in writer.js, is returning Q(self) instead of a deferred 'begin' promise, as Reader() does.

If I amend Writer as below, using the open event, then it works correctly for both writable and non-writable files (the unit tests pass). However, as far as I am aware HTTP streams do not emit the open event, so I'm not sure if this "fix" is the correct approach - happy to look at some other approach if anyone can offer hints.

(When using the open event in Reader(), the HTTP unit tests fail, which confirms that HTTP read streams do not emit the open event. Given that the unit tests pass with the change above, I assume writing to HTTP streams is either not exercised by the unit tests or does not invoke Writer().)


function Writer(_stream, charset) {
    var self = Object.create(Writer.prototype);

    if (charset && _stream.setEncoding) // TODO complain about inconsistency
        _stream.setEncoding(charset);

    var begin = Q.defer();
    var drained = Q.defer();

    _stream.on("error", function (reason) {
        begin.reject(reason);
    });

    _stream.on("open", function () {
        begin.resolve(self);
    });

    _stream.on("drain", function () {
        drained.resolve();
        drained = Q.defer();
    });

    // ... (rest of code) ...

    return begin.promise;
    //return Q(self); // todo returns the begin.promise
}

http.request() support socket pooling configuration?

Is there a way to control socket pooling via http.request()?

Will it just inherit Node.js's global http agent config? For example:
require('http').globalAgent.maxSockets = 10;

Somewhat related question - are there any "official" code examples of using require("q-io/http").request?

Thanks for this and Q, BTW. Awesome libs.
