
pgdump-aws-lambda


An AWS Lambda function that runs pg_dump and streams the output to S3.

It can be configured to run periodically using CloudWatch events.

Quick start

  1. Create an AWS lambda function:

    • Author from scratch
    • Runtime: Node.js 20.x
    • Architecture: x86_64
  2. tab "Code" -> "Upload from" -> ".zip file":

    • Upload (pgdump-aws-lambda.zip)
    • tab "Configuration" -> "General Configuration" -> "Edit"
      • Timeout: 15 minutes
      • Edit the role and attach the policy "AmazonS3FullAccess"
    • Save
  3. Give your lambda permissions to write to S3:

    • tab "Configuration" -> "Permissions"
    • click the existing Execution role
    • "Add permissions" -> "Attach policies"
    • select "AmazonS3FullAccess" and click "Add Permissions"
  4. Test

    • Create new test event, e.g.:
    {
        "PGDATABASE": "dbname",
        "PGUSER": "postgres",
        "PGPASSWORD": "password",
        "PGHOST": "host",
        "S3_BUCKET": "db-backups",
        "ROOT": "hourly-backups"
    }
    • Test and check the output
  5. Create a CloudWatch rule:

    • Event Source: Schedule -> Fixed rate of 1 hour
    • Targets: Lambda Function (the one created in step #1)
    • Configure input -> Constant (JSON text) and paste your config (as per previous step); a CLI sketch of the same setup follows this list
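
If you prefer the AWS CLI, the same schedule can be created with commands like the sketch below; the rule name, region, account ID, and ARNs are placeholders, and the Input JSON is the event from step 4:

# create an hourly schedule rule
aws events put-rule \
    --name pgdump-hourly \
    --schedule-expression "rate(1 hour)"

# point the rule at the Lambda function, passing the backup config as constant input
aws events put-targets \
    --rule pgdump-hourly \
    --targets '[{"Id":"pgdump","Arn":"arn:aws:lambda:eu-west-1:123456789012:function:pgdump-aws-lambda","Input":"{\"PGDATABASE\":\"dbname\",\"PGUSER\":\"postgres\",\"PGPASSWORD\":\"password\",\"PGHOST\":\"host\",\"S3_BUCKET\":\"db-backups\",\"ROOT\":\"hourly-backups\"}"}]'

# allow CloudWatch Events / EventBridge to invoke the function
aws lambda add-permission \
    --function-name pgdump-aws-lambda \
    --statement-id pgdump-hourly \
    --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:eu-west-1:123456789012:rule/pgdump-hourly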

File Naming

This function will store your backup with the following S3 key:

s3://${S3_BUCKET}/${ROOT}/YYYY-MM-DD/YYYY-MM-DD_HH-mm-ss.backup
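
For example, with the S3_BUCKET and ROOT values from the Quick start event, a backup taken at the (hypothetical) time 2024-01-15 03:00:00 would be stored at:

s3://db-backups/hourly-backups/2024-01-15/2024-01-15_03-00-00.backup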

AWS Firewall

  • If you run the Lambda function outside a VPC, you must enable public access to your database instance; a Lambda function that is not attached to a VPC reaches your database over the public internet.
  • If you run the Lambda function inside a VPC, you must allow access from the Lambda security group to your database instance. You must also either add a NAT gateway (chargeable) to your VPC so the Lambda can reach S3 over the internet, or add an S3 VPC endpoint (free) and allow traffic to the appropriate S3 prefix list (see the example below).
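
For the in-VPC case, an S3 gateway endpoint can be created with the AWS CLI; the VPC ID, route table ID, and region below are placeholders:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.eu-west-1.s3 \
    --route-table-ids rtb-0123456789abcdef0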

Encryption

You can add an encryption key to your event, e.g.

{
    "PGDATABASE": "dbname",
    "PGUSER": "postgres",
    "PGPASSWORD": "password",
    "PGHOST": "host",
    "S3_BUCKET": "db-backups",
    "ROOT": "hourly-backups",
    "ENCRYPT_KEY": "c0d71d7ae094bdde1ef60db8503079ce615e71644133dc22e9686dc7216de8d0"
}

The key should be exactly 64 hex characters (32 bytes).
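
One way to generate a suitable key, assuming OpenSSL is available locally:

openssl rand -hex 32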

When this key is present the function will do streaming encryption directly from pg_dump -> S3.

It uses the aes-256-cbc encryption algorithm with a random IV for each backup file. The IV is stored alongside the backup in a separate file with the .iv extension.

You can decrypt such a backup with the following bash command:

openssl enc -aes-256-cbc -d \
-in ./encrypted.backup \
-out ./decrypted.backup \
-K c0d71d7ae094bdde1ef60db8503079ce615e71644133dc22e9686dc7216de8d0 \
-iv $(< ./encrypted.backup.iv)
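
Before running the command above, both the encrypted backup and its .iv file need to be downloaded from S3; for example (the object keys here are illustrative):

aws s3 cp s3://db-backups/hourly-backups/2024-01-15/2024-01-15_03-00-00.backup ./encrypted.backup
aws s3 cp s3://db-backups/hourly-backups/2024-01-15/2024-01-15_03-00-00.backup.iv ./encrypted.backup.iv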

S3 Upload Part Size

If you experience Lambda timeouts while uploading file parts to S3, you can try increasing the part size of each file chunk (you might also need to increase the Lambda's resources). For instance, on a 2 GB file the default part size of 5 MB results in roughly 400 parts; pushing all of these parts exceeded the 15-minute Lambda timeout, and increasing the part size to 1 GB reduced the transmit time to about 3 minutes.

{
    "S3_PART_SIZE": 1073741824
}
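
As a rough worked example (assuming the default part size is 5 MiB, i.e. 5242880 bytes): a 2 GiB dump splits into ceil(2147483648 / 5242880) = 410 parts, while the 1 GiB value above (1073741824 bytes) covers the same file in just 2 parts.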

IAM-based Postgres authentication

Your context may require that you use IAM-based authentication to log into the Postgres service. Support for this can be enabled by making your CloudWatch event look like this:

{
    "PGDATABASE": "dbname",
    "PGUSER": "postgres",
    "PGHOST": "host",
    "S3_BUCKET": "db-backups",
    "ROOT": "hourly-backups",
    "USE_IAM_AUTH": true
}

If you supply USE_IAM_AUTH with a value of true, the PGPASSWORD var may be omitted in the CloudWatch event. If you still provide it, it will be ignored.
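
For IAM authentication to work, the Lambda's execution role typically also needs permission to request an RDS auth token for the target database user, and that user must be granted the rds_iam role in Postgres. A minimal policy statement sketch; the region, account ID, DB resource ID, and user name are placeholders:

{
    "Effect": "Allow",
    "Action": "rds-db:connect",
    "Resource": "arn:aws:rds-db:eu-west-1:123456789012:dbuser:db-ABCDEFGHIJKLMNOP/postgres"
}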

SecretsManager-based Postgres authentication

If you prefer not to send DB details/credentials in the event parameters, you can store them in SecretsManager and just provide the SecretId; the function will then fetch your DB details/credentials from the secret value.

NOTE: the execution role for the Lambda function must have access to GetSecretValue for the given secret.
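
A minimal policy statement for that looks like the sketch below; the region, account ID, and the random suffix that Secrets Manager appends to the secret name are placeholders:

{
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:my/secret/id-AbCdEf"
}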

Support for this can be enabled by setting SECRETS_MANAGER_SECRET_ID, so your CloudWatch event looks like this:

{
    "SECRETS_MANAGER_SECRET_ID": "my/secret/id",
    "S3_BUCKET": "db-backups",
    "ROOT": "hourly-backups"
}

If you supply SECRETS_MANAGER_SECRET_ID, you can omit the PG* keys; they will be fetched from your SecretsManager secret value instead, using the following mapping:

Secret Value    PG-Key
username        PGUSER
password        PGPASSWORD
dbname          PGDATABASE
host            PGHOST
port            PGPORT

You can override any of the PG* keys in your event, as event parameters take precedence over secret values.
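
For example, a secret whose value is shaped like this (all values are placeholders) maps onto the PG* settings above:

{
    "username": "postgres",
    "password": "password",
    "dbname": "dbname",
    "host": "host",
    "port": 5432
}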

Multiple databases

If you'd like to export multiple databases in a single event, you can add a comma-separated list of database names to the PGDATABASE setting. The results are returned as a list.

{
    "PGDATABASE": "dbname1,dbname2,dbname3",
    "PGUSER": "postgres",
    "PGPASSWORD": "password",
    "PGHOST": "host",
    "S3_BUCKET": "db-backups",
    "ROOT": "hourly-backups"
}

NOTE: The 15 minute timeout for lambda still applies.

Developer

Bundling a new pg_dump binary

  1. Launch an EC2 instance with the Amazon Linux 2023 AMI (ami-0649bea3443ede307)
  2. Connect via SSH and:
# install packages required for building
sudo dnf install make automake gcc gcc-c++ readline-devel zlib-devel openssl-devel libicu-devel
# build and install postgres from source
wget https://ftp.postgresql.org/pub/source/v16.3/postgresql-16.3.tar.gz
tar zxf postgresql-16.3.tar.gz
cd postgresql-16.3
./configure --with-ssl=openssl
make
sudo make install
exit

  3. Download the binaries to your local working copy:

mkdir bin/postgres-16.3
scp ec2-user@your-ec2-server:/usr/local/pgsql/bin/pg_dump ./bin/postgres-16.3/pg_dump
scp ec2-user@your-ec2-server:/usr/local/pgsql/lib/libpq.so.5 ./bin/postgres-16.3/libpq.so.5
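# note (assumption): restore the execute bit on the binary; Lambda cannot spawn it
# from the zip otherwise (see the "spawn EACCES" issue below)
chmod +x ./bin/postgres-16.3/pg_dump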
  4. To use the new postgres binary, pass PGDUMP_PATH in the event:
{
    "PGDUMP_PATH": "bin/postgres-16.3"
}

Creating a new function zip

npm run makezip
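
To update an existing function with the new zip from the CLI (the function name here is a placeholder), something like:

aws lambda update-function-code \
    --function-name pgdump-aws-lambda \
    --zip-file fileb://pgdump-aws-lambda.zip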

Contributing

Please submit issues and PRs.


pgdump-aws-lambda's Issues

Access Denied when running the lambda

START RequestId: ab6ee9a3-2ce8-460e-94c6-18a13c4f15c4 Version: $LATEST
2022-06-03T17:08:18.467Z ab6ee9a3-2ce8-460e-94c6-18a13c4f15c4 ERROR AccessDenied: Access Denied
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:710:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:686:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)

How to deal with backups that take more than 5 mins?

Hello!

I'm in need of backing up RDS to S3 and your repo seems like the perfect solution. However, I'm not sure the backups will run within 5 minutes given the size of the db. Is there a way to configure the Lambda to run with more resources (I'm not sure about the internals of Lambda) so it'll always finish in 5 minutes?

Thanks.

Cannot find Handler

I'm trying to run this zip using index.handler as the handler in the Lambda function definition, but it won't find the handler. This is the output I get.

"errorType": "Runtime.HandlerNotFound", "errorMessage": "index.handler is undefined or not exported", "trace": [ "Runtime.HandlerNotFound: index.handler is undefined or not exported", " at Object.module.exports.load (/var/runtime/UserFunction.js:144:11)", " at Object.<anonymous> (/var/runtime/index.js:43:30)", " at Module._compile (internal/modules/cjs/loader.js:1133:30)", " at Object.Module._extensions..js (internal/modules/cjs/loader.js:1153:10)", " at Module.load (internal/modules/cjs/loader.js:977:32)", " at Function.Module._load (internal/modules/cjs/loader.js:877:14)", " at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12)", " at internal/main/run_main_module.js:18:47" ]

And my config

[screenshot of the Lambda configuration]

Info on rebuilding a pg_dump for newer pg versions?

Do you have any info on how to rebuild a new pg_dump, for example for pg version 9.6.2, which recently came out?

Setting -static in CFLAGS for src/bin/pg_dump in the postgresql source doesn't seem to generate a useful binary (something about getting home directories and password files not being possible).

libldap_r-2.4.so.2: cannot open shared object file

Hi,

Thanks for the project, seems to be quite handy. I tried using it but when testing the lambda, I get the following error. Do you know if any extra .so files should be added to the bin/postgres-9.6.2 folder?

"errorType": "Error",
"errorMessage": "pg_dump process failed: bin/postgres-9.6.2/pg_dump: error while loading shared libraries: libldap_r-2.4.so.2: cannot open shared object file: No such file or directory\n",
"trace": [
"Error: pg_dump process failed: bin/postgres-9.6.2/pg_dump: error while loading shared libraries: libldap_r-2.4.so.2: cannot open shared object file: No such file or directory",

Keep getting "pg_dump process failed:" error

Hello,

I'm getting an error while testing the lambda:

Event:

{
    "PGDATABASE": "db_test",
    "PGUSER": "postgres",
    "PGPASSWORD": "12345",
    "PGHOST": "database-test-1.cluster-abcde.us-west-1.rds.amazonaws.com",
    "S3_BUCKET" : "bucket1",
    "ROOT": "hourly-backups"
}

Error log:

START RequestId: 123 Version: $LATEST
2022-04-12T19:39:06.383Z	123	ERROR	Error: pg_dump process failed: pg_dump: error: connection to database "db_test" failed: timeout expired

    at ChildProcess.<anonymous> (/var/task/lib/pgdump.js:54:21)
    at ChildProcess.emit (events.js:400:28)
    at ChildProcess.emit (domain.js:475:12)
    at maybeClose (internal/child_process.js:1058:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:293:5)
2022-04-12T19:39:06.383Z	123	ERROR	Invoke Error 	{"errorType":"Error","errorMessage":"pg_dump process failed: pg_dump: error: connection to database \"db_test\" failed: timeout expired\n","stack":["Error: pg_dump process failed: pg_dump: error: connection to database \"db_test\" failed: timeout expired","","    at ChildProcess.<anonymous> (/var/task/lib/pgdump.js:54:21)","    at ChildProcess.emit (events.js:400:28)","    at ChildProcess.emit (domain.js:475:12)","    at maybeClose (internal/child_process.js:1058:16)","    at Process.ChildProcess._handle.onexit (internal/child_process.js:293:5)"]}
REPORT RequestId: 123	Duration: 15624.90 ms	Billed Duration: 15625 ms	Memory Size: 128 MB	Max Memory Used: 84 MB	Init Duration: 562.19 ms	

These are the logs I'm getting.

The credentials are correct and I can successfully connect to my db from my local machine.
Could it be because my PostgreSQL version is 10.14?

[Question] Which binaries did you retrieve?

Hi there !

Thanks a lot for your repository, it has helped me a lot over the last few years.

I now need to add the psql and pg_restore binaries for PostgreSQL 13.
I tried to recreate what you did, taking the binaries and libs from an Amazon Linux 2 instance where I installed PostgreSQL 13, but I'm not able to make them work because the shared library libpq.so.5 (or the -private equivalent) is never loaded on the Lambda (I've tried a lot of things over the last 3 days).

My thought is that my binaries are not the correct ones, even though I took them from an Amazon Linux 2 instance, and they work in a Docker version of a Lambda...

Could you please tell me which binaries / libs you took?

Request for Node.js 20 Compatibility Update

With support for Node.js 16 in AWS Lambda ending on June 12, 2024, and its upstream EOL having been reached on September 11, 2023, I kindly request an update or migration guide for transitioning to Node.js 20.

Thank you for your attention to this matter!

can we do the same with 'psql' cli tool?

I was googling for a quick way to just run 'psql' with my custom SQL against PostgreSQL inside Lambda, and Google directed me to you, but it seems the solution is not here directly. My guess is that you could increase the usability of your project by supporting this scenario. Regards!

How to create the pg_dump for Python

Are there any insights on how I should create the same bin for a Python env? Can I reuse the bin folder from here?

Besides that, I got the error below upon execution:

{
  "errorType": "Runtime.HandlerNotFound",
  "errorMessage": "index.handler is undefined or not exported",
  "trace": [
    "Runtime.HandlerNotFound: index.handler is undefined or not exported",
    "    at Object.module.exports.load (/var/runtime/UserFunction.js:144:11)",
    "    at Object.<anonymous> (/var/runtime/index.js:43:30)",
    "    at Module._compile (internal/modules/cjs/loader.js:955:30)",
    "    at Object.Module._extensions..js (internal/modules/cjs/loader.js:991:10)",
    "    at Module.load (internal/modules/cjs/loader.js:811:32)",
    "    at Function.Module._load (internal/modules/cjs/loader.js:723:14)",
    "    at Function.Module.runMain (internal/modules/cjs/loader.js:1043:10)",
    "    at internal/main/run_main_module.js:17:11"
  ]
}

Great! New functionalities!

Thanks a lot!
Just some things you could add

  1. S3 region to be configurable too. I had to change the code.
  2. Add pg_restore (ask for the path of the .backup file that would be in S3).

Thanks again.

Change file name please

Can you remove the "ss" (seconds) from the file name, please? Also, the dump file path in the README differs from the one in util.js.

Release to npm?

Hi,

Thanks for a great project.

Some of the changes we'd like have recently been merged - is it possible for a release to be made to npm?

Many thanks

Supporting calling via SQS events with backup config in SQS message

Hi,

I'm trying to set up a use of pgdump-aws-lambda where it would be triggered by an SQS queue, with the params for the backup run in the queue message.

However, the event that pgdump-aws-lambda sees has the details it would need under Records - e.g. event.Records[0].body containing JSON.

I was thinking of raising a PR so that, if the event being processed has Records, it will, if there is exactly one record, attempt to deserialize that record's Body from JSON and put the values into the event... so that you can queue an SQS message with the PGDATABASE, PGUSER etc to be used, and pgdump would be triggered by SQS and understand how to get that information from the event.

I would assume that you'd configure the triggering with a batch size of 1, rather than adding a lot more complexity by having one run of the lambda try to back up multiple DBs in one execution. I think that would both make things more complex and, in general, be a bad idea, as you'd be more likely to run into execution-time issues; better to just trigger it multiple times concurrently.

Does this sound like a reasonable idea?

Unable to edit files in lib/ in AWS Console?

This is from a fresh upload of the repository to a new Lambda function using the Node.js 14.x runtime.

I'm not sure if this is a bug related to AWS, or if there's something else strange going on with the file. So far I've been able to edit other files in the repository.

Anything in the lib folder causes an error to pop up, similar to this:
Failed to write to ' maximum time allowed to connect to postgres before a timeout occurs PGCONNECT_TIMEOUT: 15, USE_IAM_AUTH: false } '. Cannot read properties of undefined (reading 'some').

Bundling pg_restore binary as well as pg_dump?

Opinion: would you consider bundling pg_restore as well as pg_dump binaries? (Maybe starting only for 14+15 to save the effort of going back through each previous version to build them?)

Use case: I'm using pgdump-aws-lambda to automate backups of several databases from an Aurora DB cluster - but we want to be able to have a lambda to sync a live DB to a dev/QA DB cluster (for cases where there's an issue that only shows up when dealing with particular data in a live instance)... for that, we need to stream the most recent backup of that DB from S3 into pg_restore.

Obviously I can just manually bundle pg_restore into the lambda, but for the backup lambda it's nice to be able to just depend on pgdump-aws-lambda from npm rather than have to manually bundle stuff - so if pg_restore were part of it, that would be most helpful.

#13 asked for this too, and was closed as completed but it seems it didn't happen.

If I were to raise a PR adding pg_restore binaries for at least 14.x and 15.x, would you be likely to accept it, or do you think that having pg_restore included is out of scope?

Update Node.js to a newer version

Node.js should be updated to a newer version.

According to the AWS notification:

Your AWS Account currently has Node.js v0.10 functions that were active since March 13 2017. All Node.js v0.10 functions in your account (active or otherwise) must be migrated to Node.js v4.3 or they will cease to operate after April 30th, 2017. Your Node.js v0.10 functions will continue to work until this date as-is and support all features including making updates to code and configuration. This is to keep your production workloads running as usual and to provide time to execute the migration. AWS Lambda is deprecating the Node.js v0.10 runtime, and invocations on Node.js v0.10 functions will fail after this date.

IAM authentication

Hello,
Thanks for your work, it works fine and very helpful !
2 things I would like to bring to your attention:

  • support multiple database backups, but I think this request has already been submitted. I've worked around this by creating one CloudWatch event per database.
  • more information regarding IAM authentication? I could have the application connect to RDS using a temporary token, but how does the app generate new tokens?

Thanks a lot and sorry for not contributing more as my skill level in Node.js is close to 0...

Nicolas,

Error: pg_dump gave us an unexpected response

I ran into your lambda today and managed to deploy it. Everything seems to work fine under normal circumstances, but I encounter an error when I try to change the output format (I tried -Fp and --format=plain).
When using the PGDUMP_ARGS to change the output format, I get the following exception:

"Error: pg_dump gave us an unexpected response".
I've looked through the code and this seems to be intended behavior, since the lambda defaults to -Fc and -Z 1 as params, so you can check for PGDMP as a start token and throw on everything else.
If I understand the code correctly, it might be possible to check for "pg_dump:" as a token and flag the output as an error instead, since pg_dump seems to prefix its messages to the user with it, and otherwise assume pg_dump's output is useful. pg_dump also seems to set a sane exit code, which might be useful.

error dump version 12.4

Hi, I wonder, have you ever tried dumping with a newer version (12.4)? I tried to update the lib to 12.4 and it returned this error:

"{"errorType":"Error","errorMessage":"spawn /var/task/bin/postgres-12.4/pg_dump EACCES","code":"EACCES","errno":"EACCES","syscall":"spawn /var/task/bin/postgres-12.4/pg_dump","path":"/var/task/bin/postgres-12.4/pg_dump","spawnargs":["-Fc","-Z1"],"stack":["Error: spawn /var/task/bin/postgres-12.4/pg_dump EACCES"," at Process.ChildProcess._handle.onexit (internal/child_process.js:267:19)"," at onErrorNT (internal/child_process.js:469:16)"," at processTicksAndRejections (internal/process/task_queues.js:84:21)"]}"

Feature request - please add support for multiple databases to be backed up, from same PGHOST

Hi,

First of all, thanks for your work and sharing this app with the community, it is great!
It would be nice if we could back up multiple databases from the same PGHOST.
Make PGDATABASE accept a CSV list of database names... or * if you wanted to back up all databases on that PGHOST.
Loop through each database in the list, and back them up individually, one after another.
Would that be difficult to implement?

thanks a lot,
Marius

spawn EACCES

I'm using the binary set at https://github.com/jameshy/pgdump-aws-lambda/releases/download/v1.1.4/pgdump-aws-lambda.zip with node 6.10

As far as I can tell, I have full permissions for the role I'm using to execute.

The error:

{ Error: spawn EACCES
    at exports._errnoException (util.js:1018:11)
    at ChildProcess.spawn (internal/child_process.js:319:11)
    at exports.spawn (child_process.js:378:9)
    at spawnPgDump (/var/task/lib/pgdump.js:17:12)
    at Promise (/var/task/lib/pgdump.js:28:25)
    at pgdumpWrapper (/var/task/lib/pgdump.js:23:12)
    at module.exports (/var/task/lib/handler.js:29:27) code: 'EACCES', errno: 'EACCES', syscall: 'spawn' }

The pg_dump file doesn't have the correct permissions in the zip file

The lambda doesn't work with postgres >= 11.6

Invoke Error 	{"errorType":"Error","errorMessage":"pg_dump process failed: pg_dump: server version: 12.5; pg_dump version: 11.6\npg_dump: aborting because of server version mismatch\n","stack":["Error: pg_dump process failed: pg_dump: server version: 12.5; pg_dump version: 11.6","pg_dump: aborting because of server version mismatch","","    at ChildProcess.<anonymous> (/var/task/lib/pgdump.js:54:21)","    at ChildProcess.emit (events.js:315:20)","    at ChildProcess.EventEmitter.emit (domain.js:467:12)","    at maybeClose (internal/child_process.js:1048:16)","    at Process.ChildProcess._handle.onexit (internal/child_process.js:288:5)"]}

SSL Support Issue in v2.0.0

After updating from version 1.5.1 to 2.0.0, I encountered an issue with SSL support when RDS requires SSL (setting the parameter rds.force_ssl to 1).

Error Encountered:
pg_dump: error: connection to server at "xxxxx.eu-central-1.rds.amazonaws.com" (10.0.32.99), port 5432 failed: FATAL: no pg_hba.conf entry for host "10.0.16.79", user "xxxx", database "xxxx", no encryption

After verifying that there were no network or security group issues, I identified the root cause as the no encryption error.

Attempted Solutions:

  1. Setting the Environment Variable PGSSLMODE to require:
    const env = { ...config, LD_LIBRARY_PATH: config.PGDUMP_PATH, PGSSLMODE: 'require' };
    Result:
    This approach failed with the following error, indicating that the pg_dump version used does not support SSL:
    pg_dump: error: sslmode value "require" invalid when SSL support is not compiled in

  2. Temporary Solution:
    Setting the parameter rds.force_ssl to 0 fixed the issue.

Request:

  • Please provide guidance on how to resolve the issue with SSL support in version 2.0.0.
  • If the package does not currently support SSL, please consider adding this functionality or documenting a workaround.

Thank you!

Backing up large databases

The code works perfectly for databases up to 5 GB. However, when I tried it with 10 GB, the resulting object in S3 was unexpectedly small, which means it did not work properly. Also, streaming pg_dump seems to use a lot of memory, almost as much as the size of the database.

Are there known issues with large databases?
