regional-australia-bank / adr-gateway

A proven Data Recipient Gateway (affectionately known as Dr G) that helps you get up and running and interacting with the Australian Consumer Data Right ecosystem today. Dr G enables Data Recipients to quickly deploy their software products and participate in the CDR without needing to develop the complexities of boilerplate data recipient interactions.

License: MIT License

TypeScript 99.05% JavaScript 0.38% Handlebars 0.57%
cdr aus-cdr open-banking adr-gateway cdr-gateway consumer-data-right

adr-gateway's Issues

Services do not recover if started when database is unreachable

Summary

If the database is not up when the dependent service starts (e.g. adr-backend, mock-dh-server), then that service will not recover, even when the database comes back up.

To reproduce

To reproduce with Docker:

  1. Start all services except database
  2. Attempt new consent creation or DCR. Inspect logs to confirm that database is unreachable
  3. Start the database
  4. Reattempt new consent or DCR. Inspect logs for state

Expected behaviour:

Database connection has succeeded and new consent/DCR is complete

Actual behaviour:

Same "database unreachable" errors as before.

Discussion

I regard this as an important issue, even if it can be worked around by changing the load order. It also raises the question of whether the database connection will recover if the database has a momentary outage while the dependent services are already running.

I will regard this problem as solved when:

  • Unhandled promise rejections cause the affected Docker container to exit
  • Docker containers restart on the error condition above
  • It can be shown that dependent services recover after a momentary outage of the database.
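As a sketch of the first two acceptance criteria (hedged: the exact wiring in adr-gateway may differ), a process-level handler can turn unhandled rejections into a non-zero exit, which a Docker `restart: on-failure` policy then turns into a restart:

```typescript
// Sketch only: exit on unhandled promise rejections so that an orchestrator
// (e.g. Docker with `restart: on-failure`) can restart the container.
// The log message and exit code are illustrative choices.
process.on("unhandledRejection", (reason) => {
  console.error("Unhandled promise rejection, exiting so the container can restart:", reason);
  process.exit(1);
});
```

Recovery after a momentary outage then falls out of the restart loop, provided startup retries the database connection.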

Consent IDs are in sequence. Make them UUIDs.

The main motivation for this is security.

This will ultimately result in a breaking change.

Probably we will accept UUIDs in place of numeric IDs and handle them appropriately via type detection. The numerical index will be deprecated and phased out in the next major version.
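A sketch of the proposed type detection, assuming consent ids arrive as strings from the HTTP layer (names here are illustrative, not the actual adr-gateway API):

```typescript
// Sketch: accept either a legacy numeric index or a UUID, so existing
// integrations keep working during the deprecation window.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

type ConsentRef =
  | { kind: "uuid"; id: string }
  | { kind: "numeric"; id: number };

function parseConsentId(raw: string): ConsentRef {
  if (UUID_RE.test(raw)) return { kind: "uuid", id: raw };
  if (/^\d+$/.test(raw)) return { kind: "numeric", id: parseInt(raw, 10) };
  throw new Error(`Unrecognised consent id: ${raw}`);
}
```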

Does not supply optimal accept header to the Data Holder status endpoint

Supplied:

Accept: application/json, text/plain, */*

Expected:

Accept: application/json

Specification says:

If specified, the media type must be set to application/json, unless otherwise specified in the resource end point standard. If set to an unacceptable value the holder must respond with a 406 Not Acceptable. If not specified, or a wildcard (*/*) is provided, the default media type is application/json.

For maximum interoperability, we will set the Accept header to exactly application/json.
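A minimal sketch of the fix, assuming an axios-style client where per-request headers override the defaults (the function name is illustrative):

```typescript
// Sketch: pin the Accept header for Data Holder status requests instead of
// relying on the HTTP client default (axios, for example, sends
// "application/json, text/plain, */*" unless told otherwise).
function statusRequestHeaders(): Record<string, string> {
  return { Accept: "application/json" };
}
```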

Housekeeper overwrites DH client ID after explicit dynamic client registration

Hi,

I have noticed that the housekeeper for the DynamicClientRegistration triggers a new registration request which overwrites a previous explicit registration request, because the DataHolderRegistrationManager cache does not get updated by the /idp/register call. Is this by design?

The problem with this is that the clientId returned by the /idp/register API call can't be used in subsequent maintenance requests.

To reproduce:

  1. Get Client Token from CDR Register (my own helper API)

  2. Get Software Statement Assertion /v1/banking/data-recipients/brands/:dataRecipientBrandId/software-products/:softwareProductId/ssa

  3. Create Dynamic Client Registration request DCR (my own helper API)

  4. Register Software at Data Holder using DCR /idp/register -> returns clientId

-> doesn't update the DataHolderRegistrationManager cache, and no db entry is created (sandbox adr.sqlite)

Housekeeper gets triggered at interval:

-> checks the DataHolderRegistrationManager cache and can't find the registered productID

-> generates a new DCR, which creates a new clientID and sets the status of the record above to DELETED

-> updates the DataHolderRegistrationManager cache and db

Could you please advise if this works as expected? Thank you very much.

Best, Nils
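The expected behaviour described above can be sketched as follows (all names are illustrative; this is not the actual DataHolderRegistrationManager interface): a successful explicit /idp/register call should be recorded in the same cache the housekeeper consults, so the housekeeper does not re-register.

```typescript
// Sketch: record explicit registrations where the housekeeper will look for
// them. Keys and shapes are illustrative.
const registrationCache = new Map<string, string>(); // productId -> clientId

function onExplicitRegistration(productId: string, clientId: string): void {
  registrationCache.set(productId, clientId);
}

function housekeeperNeedsRegistration(productId: string): boolean {
  return !registrationCache.has(productId);
}
```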

Test data for transaction list - negative amounts

Hi,

When retrieving the test transactions, the generated negative amounts are incorrectly converted to positive amounts (via Math.abs):

let amount = Math.abs(transactionAmounts[r]).toFixed(2);
let type:"TRANSFER_INCOMING"|"TRANSFER_OUTGOING" = transactionAmounts[r] >= 0 ? "TRANSFER_INCOMING" : "TRANSFER_OUTGOING";

As per the standard, negative amounts should be returned to reflect outgoing transactions:

"amount | AmountString | mandatory | none | The value of the transaction. Negative values mean money was outgoing from the account"

Can be changed to:

let amount = transactionAmounts[r].toFixed(2);

Thanks, Nils
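The corrected behaviour can be sketched end to end: keep the sign on the amount, as the standard requires, while still deriving the transfer direction from it (the function name is illustrative):

```typescript
// Sketch of the corrected test-data generation: negative stays negative
// (the standard uses negative values for outgoing money), and the direction
// is still derived from the sign.
function describeTransaction(raw: number): {
  amount: string;
  type: "TRANSFER_INCOMING" | "TRANSFER_OUTGOING";
} {
  return {
    amount: raw.toFixed(2), // no Math.abs
    type: raw >= 0 ? "TRANSFER_INCOMING" : "TRANSFER_OUTGOING",
  };
}
```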

Healing strategy is too aggressive

The connectivity framework will retry resource endpoints when a non-2xx response is received.

However, this is a problem in a few ways:

  • There is no control over how many retries are attempted
  • Without any control, and due to the nature of the connectivity neuron evaluation, dozens of retries are possible. This stems from the quick job that was made of calculating connectivity pathway length, and could be construed as abuse by a data holder.
  • Due to the large number of requests that can result, the healing process can take up to a minute.

The connectivity framework needs to be overhauled to reduce complexity.
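As one part of that overhaul, a retry wrapper with an explicit attempt cap could bound the request count (a sketch; names are illustrative, and the real framework would presumably read the cap from configuration):

```typescript
// Sketch: cap retries so healing cannot issue an unbounded number of
// requests to a data holder.
async function withRetryCap<T>(
  fn: () => Promise<T>,
  maxAttempts: number,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure; retry until the cap
    }
  }
  throw lastError;
}
```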

Prolonged register outage can cause unnecessary downtime due to cache expiry

The dependency graph specifies that the maxAge for the data holder brands cache is 4 hours.

If there is an outage at the register after the cache expires, then subsequent requests to /cdr/data-holders will fail in turn. This blocks new consents from being established, a kind of customer impact that appears unnecessary when the underlying failure is, for instance, a transient 500 internal server error at the register.

The outage itself could be addressed by:

  • lifting maxAge to a value beyond any expected downtime
  • removing maxAge entirely, which entails other design challenges, such as how changes would propagate at all
  • implementing some new softFailWithCache mechanism so that we can operate on the expired cache
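A sketch of the third option, a softFailWithCache behaviour: serve the stale entry when a refresh fails, rather than failing the request (cache shape and names are illustrative):

```typescript
// Sketch: fresh entries are served from cache; expired entries trigger a
// refetch, but a refetch failure falls back to the stale value.
interface Entry<T> { value: T; fetchedAt: number; }

function getWithSoftFail<T>(
  cache: Map<string, Entry<T>>,
  key: string,
  maxAgeMs: number,
  fetcher: () => T,
  now: number = Date.now(),
): T {
  const hit = cache.get(key);
  if (hit && now - hit.fetchedAt <= maxAgeMs) return hit.value; // still fresh
  try {
    const value = fetcher();
    cache.set(key, { value, fetchedAt: now });
    return value;
  } catch (err) {
    if (hit) return hit.value; // soft fail: stale but usable
    throw err; // nothing cached at all, genuine failure
  }
}
```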

Add client_id, response_type and scope form parameters to PAR request

Some OPs (OpenID Providers) may require these parameters based on their interpretation of RFC 6749 and OIDC Core:

So that the request is a valid OAuth 2.0 Authorization Request, values for the response_type and client_id parameters MUST be included using the OAuth 2.0 request syntax, since they are REQUIRED by OAuth 2.0. The values for these parameters MUST match those in the Request Object, if present.

Even if a scope parameter is present in the Request Object value, a scope parameter MUST always be passed using the OAuth 2.0 request syntax containing the openid scope value to indicate to the underlying OAuth 2.0 logic that this is an OpenID Connect request.

Acceptance criteria

  1. Change the mock data holder to require these 3 form parameters and to verify that they match the corresponding mandatory parameters in the request object
  2. Update the PAR connector node logic to include these form parameters and pass the new test case
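The PAR body under the second criterion can be sketched as follows (the response_type and placeholder values are assumptions; whatever is sent must match the signed request object):

```typescript
// Sketch: duplicate the mandatory OAuth 2.0 parameters as top-level form
// fields alongside the signed request object, per the quoted OIDC Core text.
function buildParBody(clientId: string, signedRequestObject: string): URLSearchParams {
  const body = new URLSearchParams();
  body.set("client_id", clientId);
  body.set("response_type", "code id_token"); // must match the request object
  body.set("scope", "openid");                // openid must always be present
  body.set("request", signedRequestObject);
  return body;
}
```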

Carefully remove "jwt-bearer" grant-type to close standards gap while maintaining interoperability

For continued alignment with the standards, there is a need to remove the pushed grant_type during DCR.

o.grant_types.push("urn:ietf:params:oauth:grant-type:jwt-bearer") // TODO remove after release 1.1.1 https://github.com/cdr-register/register/issues/54#issuecomment-597368382

The commentary here, https://github.com/cdr-register/register/issues/54#issuecomment-597368382, indicates that there is some build impact for Data Holders to support the removal. Hence the perceived risk that removing this line may break existing connectivity with data holders.

On the other hand, one conformance testing tool requires that the value urn:ietf:params:oauth:grant-type:jwt-bearer is not passed during DCR. This is despite the fact that it is supported as per version 1.2.2 of the standards (current at time of writing).

To resolve the conformance testing issue, we either need to implement the change, or have the conformance tool accept the supported value.
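One way to implement the change while maintaining interoperability is to make the legacy grant type opt-in via configuration, so it can be dropped for conformance runs without breaking data holders that still expect it (a sketch; the flag and base grant list are illustrative):

```typescript
// Sketch: the jwt-bearer grant type is only pushed when explicitly enabled.
function registrationGrantTypes(includeJwtBearer: boolean): string[] {
  const grants = ["authorization_code", "refresh_token"];
  if (includeJwtBearer) {
    grants.push("urn:ietf:params:oauth:grant-type:jwt-bearer");
  }
  return grants;
}
```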

GET /cdr/data-holders shows inactive DHs

The expectation for v1.x is that only active DHs are returned.

The next major release may support more advanced filtering and more detailed responses, allowing filtering by industry and UI granularity based on DH status.
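The v1.x expectation can be sketched as a simple filter over the register response (the brand shape and the "ACTIVE" status value are assumptions about the register API):

```typescript
// Sketch: filter the register response down to active data holder brands
// before returning it from /cdr/data-holders.
interface DataHolderBrand { brandName: string; status: string; }

function activeBrands(brands: DataHolderBrand[]): DataHolderBrand[] {
  return brands.filter((b) => b.status === "ACTIVE");
}
```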

Healing requests to the Register are uncontrolled

If a request is made to the gateway for a resource at a data holder, and that resource predictably returns an unrecoverable non-2xx response (i.e. a 403 or 404, as opposed to a 422), the healing process will eventually trigger a call to GetDataHolderBrands on the deduction that this metadata could be wrong.

The problem is that this is rarely the case, and if it happens often enough it may be considered abuse by the register.

We need to be able to control how frequently the register is called.

The solution to #3 (reworking the connectivity module) should therefore allow a maximum frequency to be enforced, and ideally configured.
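Such a control could be sketched as a gate that enforces a minimum interval between register calls, with an injectable clock for testing (names and the interval are illustrative):

```typescript
// Sketch: a call gate with a configurable minimum interval between
// GetDataHolderBrands requests to the register.
function makeRegisterCallGate(minIntervalMs: number, now: () => number = Date.now) {
  let lastCall = -Infinity;
  return function mayCallRegister(): boolean {
    const t = now();
    if (t - lastCall < minIntervalMs) return false; // too soon, skip the call
    lastCall = t;
    return true;
  };
}
```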

UpdateRegistration fails for second software product

I can go through the entire consent flow OK with software product 1, no probs. But then with software product 2, the mock DH throws an error, because Dr G tries to update the registration using the SSA of software product 1 instead of 2. Tracing this back, the stale SSA comes from cache: the cache id logic is not correctly incorporating the software product id, so the caches for both SSAs have the same name, 'SoftwareStatementAssertion_undefined'. This is because the parameter defined in the YML file for the SSA doesn't actually exist; it is defined as 'SoftwareProductKey', whereas the object only contains 'SoftwareProductId'.
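A sketch of the fix described above: derive the cache id from the parameter that actually exists on the object, and fail loudly if it is missing so two software products can never share a cache entry (the shapes here are illustrative):

```typescript
// Sketch: SSA cache ids must include the software product id.
function ssaCacheId(params: { SoftwareProductId?: string }): string {
  if (!params.SoftwareProductId) {
    throw new Error("SoftwareProductId is required for the SSA cache id");
  }
  return `SoftwareStatementAssertion_${params.SoftwareProductId}`;
}
```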

Error on Fresh Docker Build

Hi

I'm just checking out this repo for the first time. However, when I run npm run docker:build on a freshly checked-out project, I get the following error.

Do you have any thoughts on how to get around this?

Step 1/10 : FROM node:14 AS typescript-build
14: Pulling from library/node
76b8ef87096f: Pull complete
2e2bafe8a0f4: Pull complete
b53ce1fd2746: Pull complete
84a8c1bd5887: Pull complete
7a803dc0b40f: Pull complete
b800e94e7303: Pull complete
8e9f42962912: Pull complete
cc1c1f0d8c86: Pull complete
a42c31ab44dd: Pull complete
Digest: sha256:8eb45f4677c813ad08cef8522254640aa6a1800e75a9c213a0a651f6f3564189
Status: Downloaded newer image for node:14
 ---> d6602e31594f
Step 2/10 : ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
 ---> Running in e7cc6468fa76
Removing intermediate container e7cc6468fa76
 ---> 12c02e814c4b
Step 3/10 : COPY /build.sh build.sh
 ---> 85bb5c588b3f
Step 4/10 : COPY .work /adr-gateway
 ---> f588e32a0ea1
Step 5/10 : RUN chmod +x build.sh && ./build.sh
 ---> Running in fc78a30382bc
Installing packages...
npm WARN deprecated [email protected]: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated [email protected]: Debug versions >=3.2.0 <3.2.7 || >=4 <4.3.1 have a low-severity ReDos regression when used in a Node.js environment. It is recommended you upgrade to 3.2.7 or 4.3.1. (https://github.com/visionmedia/debug/issues/797)
npm WARN deprecated [email protected]: Legacy versions of mkdirp are no longer supported. Please update to mkdirp 1.x. (Note that the API surface has changed to use Promises in 1.x.)
npm WARN deprecated [email protected]: Please upgrade to @mapbox/node-pre-gyp: the non-scoped node-pre-gyp package is deprecated and only the @mapbox scoped package will recieve updates in the future
npm WARN deprecated [email protected]: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated [email protected]: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
npm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated [email protected]: this library is no longer supported
npm WARN deprecated [email protected]: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated [email protected]: https://github.com/lydell/resolve-url#deprecated

> [email protected] install /adr-gateway/node_modules/sqlite3
> node-pre-gyp install --fallback-to-build

node-pre-gyp WARN Using request for node-pre-gyp https download
[sqlite3] Success: "/adr-gateway/node_modules/sqlite3/lib/binding/napi-v3-linux-x64/node_sqlite3.node" is installed via remote

> [email protected] postinstall /adr-gateway/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"

Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library!

The project needs your help! Please consider supporting of core-js on Open Collective or Patreon:
> https://opencollective.com/core-js
> https://www.patreon.com/zloirock

Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -)

npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@~2.3.1 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/cpx/node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN [email protected] No repository field.
npm WARN [email protected] No license field.

added 1042 packages from 1261 contributors and audited 1050 packages in 24.742s

59 packages are looking for funding
  run `npm fund` for details

found 2 low severity vulnerabilities
  run `npm audit fix` to fix them, or `npm audit` for details
Typescript version:
Version 3.7.7
Building...

> [email protected] build /adr-gateway
> npx rimraf dist && npm run build-templates && npx tsc && npx cpx package.json dist


> [email protected] build-templates /adr-gateway
> node ./src/Common/Connectivity/Dependencies.generator.js

/adr-gateway/node_modules/js-yaml/index.js:10
    throw new Error('Function yaml.' + from + ' is removed in js-yaml 4. ' +
    ^

Error: Function yaml.safeLoad is removed in js-yaml 4. Use yaml.load instead, which is now safe by default.
    at Object.safeLoad (/adr-gateway/node_modules/js-yaml/index.js:10:11)
    at Object.<anonymous> (/adr-gateway/src/Common/Connectivity/Dependencies.generator.js:9:25)
    at Module._compile (internal/modules/cjs/loader.js:1063:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1092:10)
    at Module.load (internal/modules/cjs/loader.js:928:32)
    at Function.Module._load (internal/modules/cjs/loader.js:769:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
    at internal/main/run_main_module.js:17:47
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build-templates: `node ./src/Common/Connectivity/Dependencies.generator.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build-templates script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2021-04-29T06_19_44_908Z-debug.log
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build: `npx rimraf dist && npm run build-templates && npx tsc && npx cpx package.json dist`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2021-04-29T06_19_44_927Z-debug.log
The command '/bin/sh -c chmod +x build.sh && ./build.sh' returned a non-zero code: 1
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build:docker: `npm run copy-proj && docker build --build-arg HTTP_PROXY --build-arg HTTPS_PROXY examples/deployment/docker -t adr-gateway`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build:docker script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/ubuntu/.npm/_logs/2021-04-29T06_19_46_898Z-debug.log

Cheers
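For reference, the failure in the trace above is the js-yaml 4 API change: safeLoad was removed, and load is now safe by default. A hedged sketch of the fix (the exact line contents in Dependencies.generator.js are assumptions from the stack trace):

```
// In src/Common/Connectivity/Dependencies.generator.js, something like:
//   const spec = yaml.safeLoad(...)   // breaks on js-yaml 4
// becomes:
//   const spec = yaml.load(...)       // safe by default in js-yaml 4
// Alternatively, pin js-yaml@^3 in package.json until the call site is updated.
```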

Consider migrating away from the use of "refresh_token_expires_at"

Currently Dr. G is using the refresh_token_expires_at claim to capture the expiry date of the refresh token. The alternative is to use the exp value from the token introspection endpoint.

While refresh_token_expires_at has the advantage of being provided in the id_token response (negating the need for a call to the introspection endpoint), it has the disadvantage of being contentious from a standards point of view. In the future there may be Data Holder implementations that take issue with the request for refresh_token_expires_at.

An even simpler alternative is to not check the introspection endpoint either, and simply assume that the refresh token expires after 28 days or at the end of the sharing duration (whichever is earlier).
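The simpler alternative can be sketched as a one-liner (the 28-day figure comes from the issue text; names are illustrative):

```typescript
// Sketch: assumed refresh token expiry without any introspection call.
const TWENTY_EIGHT_DAYS_MS = 28 * 24 * 60 * 60 * 1000;

function assumedRefreshTokenExpiry(issuedAtMs: number, sharingEndMs: number): number {
  // Whichever comes first: 28 days from issue, or the end of sharing.
  return Math.min(issuedAtMs + TWENTY_EIGHT_DAYS_MS, sharingEndMs);
}
```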

Getting 500 Internal Server Error when invoking 'New Consent at specified Data Holder' postman request

Hi,

I booted up the docker images and started invoking the postman tests by importing the postman collection, as documented in README.md.

I am getting a 500 Internal Server Error when invoking the New Consent at specified Data Holder request.

[screenshot: 500 Internal Server Error response in Postman]

Are there any specific headers that should be provided?

There are no logs capturing what went wrong in the docker-compose logs, apart from what I see below:

docker-compose -f ./examples/deployment/docker/docker-compose.yml logs -f

db_1                | 2020-09-18 22:56:22.049 UTC [1] LOG:  starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1                | 2020-09-18 22:56:22.050 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1                | 2020-09-18 22:56:22.050 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1                | 2020-09-18 22:56:22.139 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1                | 2020-09-18 22:56:22.387 UTC [65] LOG:  database system was shut down at 2020-09-18 22:56:21 UTC
db_1                | 2020-09-18 22:56:22.520 UTC [1] LOG:  database system is ready to accept connections
docker_adr-db-migrate_1 exited with code 0
adr-frontend_1      | (node:1) UnhandledPromiseRejectionWarning: Error: connect EHOSTUNREACH 172.18.0.12:5432
adr-frontend_1      |     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1145:16)
adr-frontend_1      | (Use `node --trace-warnings ...` to show where the warning was created)
adr-frontend_1      | (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
adr-frontend_1      | (node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
mock-dh_1           | (node:1) UnhandledPromiseRejectionWarning: Error: getaddrinfo ENOTFOUND db
mock-dh_1           |     at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
mock-dh_1           | (Use `node --trace-warnings ...` to show where the warning was created)
mock-dh_1           | (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
mock-dh_1           | (node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
adr-backend_1       | (node:1) UnhandledPromiseRejectionWarning: Error: connect EHOSTUNREACH 172.18.0.12:5432
adr-backend_1       |     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1145:16)
adr-backend_1       | (Use `node --trace-warnings ...` to show where the warning was created)
adr-backend_1       | (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
adr-backend_1       | (node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
adr-backend_1       | (node:1) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 1)
adr-housekeeper_1   | (node:1) UnhandledPromiseRejectionWarning: Error: connect EHOSTUNREACH 172.18.0.12:5432
adr-housekeeper_1   |     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1145:16)
adr-housekeeper_1   | (Use `node --trace-warnings ...` to show where the warning was created)
adr-housekeeper_1   | (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
adr-housekeeper_1   | (node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
adr-housekeeper_1   | (node:1) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 1)
mock-register_1     | WARNING: x-forwarded-proto header not detected for an https issuer, you must configure your ssl offloading proxy and the provider, see documentation for more details: https://github.com/panva/node-oidc-provider/tree/v6.29.3/docs/README.md#trusting-tls-offloading-proxies

Thanks,
Madan N

Use of DR brand GUID for register authentication is deprecated. Need to use product GUID instead.

Per Client Authentication documentation (https://cdr-register.github.io/register/#client-authentication), the use of brand GUID for authentication with the register will become deprecated in the future. Dr G needs to accommodate this by using the GUID of the Software Product in context instead.

Most of the time this will be OK, as most register calls will be in the context of a consent, which is tied to a Software Product GUID. But some calls will not be in the scope of a particular Software Product (e.g. getting Data Holder metadata in preparation for DCR). In order for there to be a Software Product in scope at all times, all calls to Dr G should include a Software Product GUID at minimum. Ideally, this would be enforced via authentication with Dr G.

Upcoming changes for production readiness

We are planning to merge changes within the next week for our production readiness. There are multiple changes, and most are included in the list below:

  1. Configuration is moved away from configuration files and is performed exclusively in environment variables. This is to enable simple deployment with docker. Some conveniences will be afforded for JWKS and MTLS configuration points.
  2. Logging by default is to console only (not to log files)
  3. JWKS configuration can now be by URL. This way, JWKS generation can also be offloaded to a separate service (e.g. a dedicated software product configuration endpoint).
  4. Software product configuration will no longer happen in Dr. G directly, but at a dedicated endpoint. This is to support scenarios where infrastructure needs to support multiple software products. Specifically, a Software Product service exists which takes care of the Product Id binding to the register, and default OAuth2 claim bindings. The ADR Gateway service takes care of JWKS, MTLS and the BrandId and LegalEntityId bindings to the register. As a result, the configuration interfaces have been significantly simplified.
  5. Convict is being used for startup configuration verification.
  6. ADR Gateway will ship with a sandbox configuration that works out of the box with docker-compose run.
  7. There will be better support for operating Dr. G behind a corporate proxy.
  8. A housekeeper exists to automatically notify Data Holders of consents, update Data Holder metadata from the register, and automatically register software products with all data holders in the ecosystem.
  9. Defaults for userinfo and id_token claims can now be configured
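Item 1 can be sketched as follows; the real implementation validates with convict (item 5), but the shape is the same: read, default, and validate environment variables at startup (the variable names here are illustrative, not the actual adr-gateway configuration points):

```typescript
// Sketch: environment-variable driven configuration with defaults and
// fail-fast validation at startup.
interface GatewayConfig { port: number; databaseUrl: string; }

function loadConfig(env: Record<string, string | undefined> = process.env): GatewayConfig {
  const port = parseInt(env.ADR_GATEWAY_PORT ?? "8101", 10);
  const databaseUrl = env.ADR_DATABASE_URL ?? "";
  if (Number.isNaN(port)) throw new Error("ADR_GATEWAY_PORT must be a number");
  if (!databaseUrl) throw new Error("ADR_DATABASE_URL is required");
  return { port, databaseUrl };
}
```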

Feedback is welcome, no matter how contentious.

I view this coming release as a "final alpha", which we will refine in a "beta" mode with complete functionality towards the end of June, after which we would consider this project to be at a full version 1 state. At that point, we want to be less authoritative and more consultative with changes.

Response code for requesting data for expired consent should be 422, not 500

If you request data (e.g. /cdr/consents/:id/accounts) for an expired consent (e.g. a one-time consent where the access token is expired), the response code is 500. However, this is not correct: it is not an internal server error, but a problem with the request that cannot be fulfilled.

Therefore I suggest that we handle this with a 422 Unprocessable Entity.

We should also include an end-to-end test case. This can easily be achieved by using the E2E consent generation and manually updating the locally stored expiry date of the token.
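The suggested mapping can be sketched as follows (the error class and function names are illustrative, not the actual adr-gateway types):

```typescript
// Sketch: map a consent-state problem to 422 instead of letting it
// bubble up as a generic 500.
class ConsentExpiredError extends Error {}

function statusForError(err: unknown): number {
  if (err instanceof ConsentExpiredError) return 422; // Unprocessable Entity
  return 500; // genuine internal server error
}
```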
