
procrastinate's Introduction

Procrastinate: PostgreSQL-based Task Queue for Python


Procrastinate is looking for additional maintainers!

Procrastinate is an open-source Python 3.8+ distributed task processing library, leveraging PostgreSQL to store task definitions, manage locks and dispatch tasks. It can be used within both sync and async code, has Django integration, and is easy to use with ASGI frameworks. It supports periodic tasks, retries, arbitrary task locks etc.

In other words, from your main code, you call specific functions (tasks) in a special way and instead of being run on the spot, they're scheduled to be run elsewhere, now or in the future.

Here's an example (if you want to run the code yourself, head to Quickstart):

# mycode.py
import procrastinate

# Make an app in your code
app = procrastinate.App(connector=procrastinate.SyncPsycopgConnector())

# Then define tasks
@app.task(queue="sums")
def sum(a, b):
    with open("myfile", "w") as f:
        f.write(str(a + b))

with app.open():
    # Launch a job
    sum.defer(a=3, b=5)

# Somewhere in your program, run a worker (actually, it's usually a
# different program than the one deferring jobs for execution)
app.run_worker(queues=["sums"])

The worker will run the job, which will create a text file named myfile with the result of the sum 3 + 5 (that's 8).

Similarly, from the command line:

export PROCRASTINATE_APP="mycode.app"

# Launch a job
procrastinate defer mycode.sum '{"a": 3, "b": 5}'

# Run a worker
procrastinate worker -q sums

Lastly, you can use Procrastinate asynchronously too (actually, it's the recommended way to use it):

import asyncio

import procrastinate

# Make an app in your code
app = procrastinate.App(connector=procrastinate.PsycopgConnector())

# Define tasks using coroutine functions
@app.task(queue="sums")
async def sum(a, b):
    await asyncio.sleep(a + b)

async with app.open_async():
    # Launch a job
    await sum.defer_async(a=3, b=5)

    # Somewhere in your program, run a worker (actually, it's often a
    # different program than the one deferring jobs for execution)
    await app.run_worker_async(queues=["sums"])

There are quite a few interesting features that Procrastinate adds to the mix. You can head to the Quickstart section for a general tour or to the How-To sections for specific features. The Discussion section should hopefully answer your questions. Otherwise, feel free to open an issue.

Note to my future self: add a quick note here on why this project is named "Procrastinate" ;) .

Where to go from here

The complete docs are probably the best place to learn about the project.

If you encounter a bug, or want to get in touch, you're always welcome to open a ticket.

procrastinate's People

Contributors

abe-winter, adibsaad, agateblue, aleksandr-shtaub, ashleyheath, bracketjohn, charlesaracil-ulti, corbott, daindwarf, dependabot[bot], ducdetronquito, eliotberriot, ewjoachim, ignaciocabeza, indrat, k4nar, katlyn, medihack, onlyann, pmourlanne, pre-commit-ci[bot], renovate-bot, renovate[bot], sbillion, sophie-ulti, stinovlas, thomasperrot, ticosax, tomdottom, turicas


procrastinate's Issues

Add a "procrastinate_events" table logging job attempts

Linked to #6

When a job is re-tried, its started_at, scheduled_at (and one day ended_at?) are overwritten. We should make sure not to lose this data, for example by storing it in a "cabbage_events" table.
This table could be populated when a job is created / started / ended. As we see it, it should contain the following columns:

  • A foreign key to a job
  • A type of event: schedule, started, ended (more?)
  • A datetime with timezone field, containing what is today in cabbage_job.scheduled_at / cabbage_job.started_at
  • The current job.attempts (make sure it makes sense for scheduled, started and ended events)

started_at and ended_at (if it exists) may be removed from the job table. scheduled_at is still relevant on the job table. Although it may be renamed into "next_attempt_scheduled_at" (for example).

ping @mgu @ewjoachim

Add an order by clause in get_task

If tasks are to be executed in order (especially when they share the same lock), then we need to always select the first one, pk-wise.

Retry strategies

We need to be able to retry a task on demand

  • SQL side: we need to know the "retry" number on a job

  • Python side: define a RetryStrategy on a task (exponential/Fibonacci/whatever, cutoff, etc., to implement in a dedicated object), and autoretry or "on_raise=retry" (see the sketch below).
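
For illustration, here's a hypothetical sketch of what a per-task retry declaration could look like (the RetryStrategy parameters shown are assumptions, not a settled API):

import procrastinate

app = procrastinate.App(connector=procrastinate.SyncPsycopgConnector())

# Hypothetical: a dedicated object describing the retry policy.
# Parameter names (max_attempts, wait) are assumptions for illustration.
retry_on_failure = procrastinate.RetryStrategy(max_attempts=5, wait=10)

@app.task(queue="sums", retry=retry_on_failure)
def flaky_sum(a, b):
    ...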

Delay / Run at time

Add a column with the scheduled date on the job table, and when selecting a job, only consider those whose scheduled date is in the past.
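
On the Python side, deferring for later could then look something like this, reusing the sum task from the README example (a sketch, assuming a configure(schedule_in=...) / configure(schedule_at=...) style API on the task):

import datetime

# Sketch: defer the job, but only make it eligible for fetching in one hour.
# The configure(schedule_in=...) API shown here is an assumption.
sum.configure(schedule_in={"hours": 1}).defer(a=3, b=5)

# Or with an absolute, timezone-aware datetime:
sum.configure(
    schedule_at=datetime.datetime(2030, 1, 1, tzinfo=datetime.timezone.utc)
).defer(a=3, b=5)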

Defer a job without knowing it

Add an app.defer() method that allows deferring a job even if you don't have the Task object (for example if the task deferring and the task running are done in 2 different codebases)

Signature would be something like:

def defer(self, task_name:str, [all the arguments from task.defer]):
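
Hypothetical usage, without importing the task:

# Hypothetical: defer by task name only; the task code lives in another codebase.
app.defer("mycode.sum", a=3, b=5)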

Get stalled tasks

Get a list of Job()s that are:

  • in a given queue or not
  • with a given name or not
  • in the doing state for more than X seconds

Migrations

For now, it's all fun and games, no one uses Procrastinate yet, and we haven't really thought about migrations, but it's probably time to set up something.

  • People might hop in and install procrastinate at any version (starting at 1.0, say)
  • Starting from there, people will need to upgrade procrastinate (including the schema)
  • Ideally, in a simple manner
  • Ideally, without impact and downtime
  • Optionally, using https://github.com/peopledoc/septentrion/ given we're developing this tool too.

One way of doing it could be:

  • Adapt the migration files to septentrion. Remove procrastinate migrate in favor of septentrion migrate.
  • Create a first migration for something trivial, just to test it.
  • Document the hell out of it, so that it's easy to understand. Don't forget that Septentrion is not documented nearly as well as 1. it should be and 2. procrastinate is.
  • (make it easy for people who want to read migrations instead of executing them)

(Note that in PeopleDoc, we actually don't need this part at all, because our fantastic DBA team will take care of the migrations for us, so this is strictly an effort to make procrastinate more usable by other people)

Dockerize the procrastinate development environment

To be able to quickly set up a development environment, we could wrap the procrastinate demo app in its own container in the docker-compose.yml. This would avoid the use of all the exports in CONTRIBUTING.rst and maybe some of the installs from README.rst.

The new image should have the necessary variables to connect to the database and use the demo app, and its entrypoint should be the procrastinate command. Of course, it should bind-mount the procrastinate directory as a volume and use a develop install, so that code changes can be instantly checked on the demo app.

Logging

For maximum logging extensibility and effectiveness, here's what we could do:

  • Every message is bundled with all the context we can think of as extras, including a unique code for this log message (e.g. "action": "start_job")
  • All logs have a simple associated message, that's "good enough" for the job, without being too much
  • All loggers defined directly as logging.getLogger("cabbage")
  • And we should document how one could implement a log filter, and plug it on cabbage to:
    • Change the log level if they want
    • Change the associated message if they want, including using template variables (loaded from extras)
    • Delete a specific message
    • Implement a specific handler so that these logs are actually not sent to normal logging but, say, written to a file (and then disable bubbling)
    • ...

This way, cabbage logs may be adapted to anyone's needs.
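
As an illustration of the filter idea, a minimal sketch using only the standard library (the "action" extra and the "start_job" code follow the convention proposed above):

import logging

class DropStartJobLogs(logging.Filter):
    # Example: silence one specific message, identified by its unique code.
    def filter(self, record):
        return getattr(record, "action", None) != "start_job"

logging.getLogger("cabbage").addFilter(DropStartJobLogs())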

Wrap errors in aiopg connector

Currently, database errors are raised from psycopg2 or aiopg. However, we want to use our own errors, so that we depend less on those libraries.

We need to catch all psycopg2/aiopg errors at the source so that we use our own custom-defined errors, with a from argument to not lose the base error.

This can probably be done inside a decorator that we apply on all aiopg_connector functions.
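
A sketch of what such a decorator could look like (ConnectorException is a stand-in name for the custom error class):

import functools

import psycopg2

class ConnectorException(Exception):
    # Stand-in for the custom error class; aiopg raises psycopg2 exceptions.
    pass

def wrap_exceptions(coro):
    @functools.wraps(coro)
    async def wrapper(*args, **kwargs):
        try:
            return await coro(*args, **kwargs)
        except psycopg2.Error as exc:
            # Keep the original error as the cause ("from" argument).
            raise ConnectorException("Database error") from exc
    return wrapper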

Better explain task names in the docs

This issue suggests improving the docs with respect to task names. How are task names used? Should people use specific task names? Or should they rely on default names? We should probably add a section to the "Discussions" chapter to discuss task names.

The integration tests don't pass with Python 3.8

The test_wait_for_jobs_timeout integration test produces an error with Python 3.8:

    async def wait_for_jobs(connection: aiopg.Connection, socket_timeout: float):
        try:
            await asyncio.wait_for(connection.notifies.get(), timeout=socket_timeout)
>       except asyncio.futures.TimeoutError:
E       AttributeError: module 'asyncio.futures' has no attribute 'TimeoutError'

TimeoutError is no longer in asyncio.futures. And the docs say to use asyncio.TimeoutError anyway, which should work with at least Python 3.5, 3.6, 3.7 and 3.8.
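
The change itself is a one-liner in the except clause:

    try:
        await asyncio.wait_for(connection.notifies.get(), timeout=socket_timeout)
    except asyncio.TimeoutError:  # instead of asyncio.futures.TimeoutError
        pass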

PR to come.

CLI entrypoint

[python -m] cabbage [--app dotted.path.to.app] [-v[v[v]]] worker [QUEUE_NAME[ ...]]
[python -m] cabbage [--app dotted.path.to.app] [-v[v[v]]] migrate
CABBAGE_APP=dotted.path.to.app cabbage ...

This will help integration.

Add an editorconfig

To facilitate editing following the black format, we could add an .editorconfig to the repository, so that code editors will be able to help with formatting from the start.

https://editorconfig.org

DeprecationWarning with Python 3.8

While running the tests on Python 3.8 we get a DeprecationWarning:

.../procrastinate/lib/python3.8/site-packages/aiopg/connection.py:90: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal
 in Python 3.10.

We get this warning several times.

Fix Quickstart

Errors in quickstart:

  • Depending on whether the script is launched directly or loaded by procrastinate, the main module is named __main__ or tutorial, and procrastinate gets lost
  • Also, when using procrastinate CLI, tutorial.py is not in the pythonpath
  • Missing from time import sleep

Finish the doc

Write all the missing parts.

Split how-tos into several pages (maybe join them in the final build, but it would be easier to work with smaller files)

Custom json encoder/decoder

Let users define how they want job payloads to be encoded/decoded, especially if they want to instantiate / format special types.
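
For example (a sketch; the json_dumps / json_loads hook names on the connector are assumptions for illustration):

import datetime
import functools
import json

import procrastinate

class JobPayloadEncoder(json.JSONEncoder):
    # Serialize datetimes as ISO 8601 strings instead of failing.
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        return super().default(obj)

# Hypothetical hooks: let the connector use custom (de)serializers.
app = procrastinate.App(
    connector=procrastinate.SyncPsycopgConnector(
        json_dumps=functools.partial(json.dumps, cls=JobPayloadEncoder),
        json_loads=json.loads,
    )
)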

Ping @hsmett who was up for contributing :)

Implement an administration prompt with cmd

Python's standard lib has https://docs.python.org/3/library/cmd.html , which will help us easily implement an interactive prompt.

We could probably use the following features:

  • List queues (and number of queued & processing jobs per queue, with stats like processing rate, average wait time etc.)
  • List tasks (and ... same stats)
  • List jobs & search for a specific job (by task, queue, lock, id, maybe args)
  • List stalled jobs and error jobs

(it would be nice if those pages could stay on and autorefresh)
(also, use colors ?)

  • Display a job, and provide specific actions: discard it, retry it (now or later), manually set it to success or failure (well it's all about changing the state of the job)
  • Batch action ?

And finally, the cmd prompt should just be an interface, and the real actions should be implemented independently in an admin module, because we'll probably have exactly the same needs for the API and/or the web admin panel.

pre-register a maintenance task that will remove old tasks in a configurable way

It's up to the user to run the task periodically, but we could provide a task to ease this. Something along the lines of:

procrastinate defer procrastinate.builtin_tasks.remove_old_jobs max_hours=72 remove_error=false

It's kinda explicit what it does: delete all finished jobs and optionally also error jobs. When removing error jobs, it produces a log that contains all their information, so that log exploitation could maybe recover them if need be.

I'm not sure whether it's more relevant to use days or hours but hours feels like it's easier for both use cases.

Note: I've used a syntax for cli defer that has not been implemented yet but I'm considering.

Automate release

What I'd really like is:

  • A merged PR should automatically add a line to a draft release in GitHub
  • Tags created in GitHub should trigger the corresponding version to be released on PyPI

(in this schema, both changelog and version releasing are expected not to require any code change)

Locks have a race condition

Ok, we have, I think, our first real bug.

TL;DR

If I launch procrastinate_fetch_job quickly twice, I can get 2 jobs that share the same lock.

Proof

$ psql -c 'SELECT * FROM procrastinate_jobs;'
 id | queue_name |           task_name            | lock |   args    | status | scheduled_at | started_at | attempts
----+------------+--------------------------------+------+-----------+--------+--------------+------------+----------
  1 | sleep      | procrastinate_demo.tasks.sleep | yay  | {"i": 20} | todo   |              |            |        0
  2 | sleep      | procrastinate_demo.tasks.sleep | yay  | {"i": 20} | todo   |              |            |        0
(2 rows)

$ psql -c 'SELECT procrastinate_fetch_job(NULL);' & psql -c 'SELECT procrastinate_fetch_job(NULL);'&
[1] 22792
[2] 22793
                                       procrastinate_fetch_job
-----------------------------------------------------------------------------------------------------
 (1,sleep,procrastinate_demo.tasks.sleep,yay,"{""i"": 20}",doing,,"2019-10-27 20:14:12.230384+00",0)
(1 row)

[2]  + 22793 done       psql -c 'SELECT procrastinate_fetch_job(NULL);'
                                       procrastinate_fetch_job
-----------------------------------------------------------------------------------------------------
 (2,sleep,procrastinate_demo.tasks.sleep,yay,"{""i"": 20}",doing,,"2019-10-27 20:14:12.233998+00",0)
(1 row)

[1]  + 22792 done       psql -c 'SELECT procrastinate_fetch_job(NULL);'

The second call should have returned (,,,,,,,,).

For the sake of it, if I put the 2 calls in the same transaction, I DO have the expected result:

$ psql -c 'SELECT * FROM procrastinate_jobs;'
 id | queue_name |           task_name            | lock |   args    | status | scheduled_at | started_at | attempts
----+------------+--------------------------------+------+-----------+--------+--------------+------------+----------
  1 | sleep      | procrastinate_demo.tasks.sleep | yay  | {"i": 20} | todo   |              |            |        0
  2 | sleep      | procrastinate_demo.tasks.sleep | yay  | {"i": 20} | todo   |              |            |        0
(2 rows)

$ psql -c 'SELECT procrastinate_fetch_job(NULL), procrastinate_fetch_job(NULL);'
                                      procrastinate_fetch_job                                       | procrastinate_fetch_job
----------------------------------------------------------------------------------------------------+-------------------------
 (1,sleep,procrastinate_demo.tasks.sleep,yay,"{""i"": 20}",doing,,"2019-10-27 20:17:46.28228+00",0) | (,,,,,,,,)
(1 row)

Leads

  • When we call FOR UPDATE OF procrastinate_jobs, should we somehow also lock procrastinate_job_locks? But there's no row to lock yet...
  • Should there be a UNIQUE constraint on procrastinate_job_locks.object? I guess it would help but if we add just that, we'll get a crash, and still not the expected behaviour. Or should we also add an ON CONFLICT DO NOTHING statement? The procedure would probably need quite a refactor, because in this case it needs to not do `UPDATE procrastinate_jobs SET status = 'doing'` and return nothing.
  • Is it because the procrastinate_job_locks is UNLOGGED?

Retry depending on the exception

  1. The retry strategy should be provided the exception that triggers the retry
  2. A builtin mechanism should allow retrying only if the exception is of a given type or among a few types (see the sketch below).
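
A hypothetical sketch of point 2 (the retry_exceptions parameter name is an assumption for illustration):

import procrastinate

app = procrastinate.App(connector=procrastinate.SyncPsycopgConnector())

# Hypothetical: only retry when the task failed with one of these exception types.
@app.task(
    retry=procrastinate.RetryStrategy(
        max_attempts=3,
        retry_exceptions={ConnectionError, TimeoutError},
    )
)
def fetch_remote_resource(url):
    ...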

Documentation

  • For devs
  • For sysadmins / support teams

Ideally following Daniele Procida's 4 part doc template.

Testing tools

For Cabbage to be used in a real codebase, we'll need a few tools that will make testability easier:

  • A pluggable in-memory list of created tasks
  • A way to launch these tasks synchronously
  • A way to launch a single task

I'm thinking something along the lines of:

from cabbage.testing import TestTaskManager
tm = TestTaskManager()

something = []

@tm.task(queue="yay")
def t(task_run, a):
    something.append(a)

t.defer(a=1)

# A pluggable in-memory list of created tasks
assert tm.scheduled == [{"name": "t", "args": {"a": 1}}]

# A way to launch these tasks synchronously
tm.run_scheduled()

assert something == [1]

# A way to launch a single task
t.run(a=2)

assert something == [1, 2]

(For now, no idea how we'd substitute the TaskManager with the TestTaskManager in a real case, but we can think of something)

Add an app.monitor() method

This should:

  • Connect to the database and launch a simple query (SELECT TRUE)
  • Check the existence (and version?) of the table procrastinate_jobs
  • Count the number of jobs in each status

Return a dict describing all this.

def app.monitor(self) -> Dict[str, Any]:
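
A rough, hypothetical sketch of the implementation (execute_query is a stand-in helper, not an existing method):

from typing import Any, Dict

def monitor(self) -> Dict[str, Any]:
    # Check connectivity with a trivial query.
    self.job_store.execute_query("SELECT TRUE")
    # Count jobs per status (assumes the procrastinate_jobs table exists).
    rows = self.job_store.execute_query(
        "SELECT status, count(*) AS count FROM procrastinate_jobs GROUP BY status"
    )
    return {
        "connection": True,
        "jobs_by_status": {row["status"]: row["count"] for row in rows},
    }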

Execute asynchronous tasks in parallel

This is the third and last step in asynchronous compatibility.

  • #9 is for deferring jobs asynchronously
  • #105 is for accepting a task to be a coroutine
  • This ticket is for executing asynchronous tasks in parallel

For now, our tasks can be asynchronous, but there's still only one task executed at a time. We need a way to launch a pool of workers in the same event loop.
We need to check precisely where the right place is to place our pool. I don't know yet if we want 1 DB connection per process or per worker, and the potential impact on transactions. We'll probably need to setup a connection pool etc.

Note: we'll need some kind of ContextVar for the log context in Worker, because if we start parallelizing tasks, contexts will be ALL OVER THE PLACE. But ContextVar is py>=3.7 and we're supporting 3.6.

What if a worker that is set up to run parallel async tasks receives a non-async task? Should it run it (and block all the other tasks running at the same time) or not run it (but this breaks our model of never taking a task in the DB if we can't run it)? To be determined before implementation.

Also we'll need to switch to a pool of pg connections, because one single connection cannot hold concurrent queries (we could have a specific parameter for concurrent pg connections vs concurrent workers)
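
As a rough illustration of the asyncio pattern (not procrastinate code), running several job coroutines in one event loop with bounded concurrency could look like this:

import asyncio

async def run_jobs_concurrently(jobs, max_concurrency=10):
    # Bound concurrency so we never run more than max_concurrency jobs
    # (and thus hold that many DB connections) at the same time.
    semaphore = asyncio.Semaphore(max_concurrency)

    async def run_one(job):
        async with semaphore:
            await job()  # each job is an argument-less coroutine function here

    await asyncio.gather(*(run_one(job) for job in jobs))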

Interrupt the "select" call on signal handle

The idea is to register a pipe, then in the select call in postgres.py, listen to the pipe too, and in the signal handler, write to this pipe. This way, we'd leave immediately without waiting for the timeout.

I know we can't do everything in a signal handler, but if we can write to a pipe, then it may help us to stop faster (and make the "timeout" argument really much less important).

https://stackoverflow.com/a/4661284
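
A minimal sketch of the self-pipe trick described above (standard library only, independent from procrastinate's actual code):

import os
import select
import signal

# The read end is watched by select(); the write end is written to from the
# signal handler to wake up the select() call immediately.
wakeup_r, wakeup_w = os.pipe()
os.set_blocking(wakeup_w, False)

def handle_sigterm(signum, frame):
    os.write(wakeup_w, b"x")  # writing to a pipe is fine in a signal handler

signal.signal(signal.SIGTERM, handle_sigterm)

def wait_for_notify(pg_socket_fd, timeout):
    # Returns as soon as either the PostgreSQL socket or the wake-up pipe is
    # readable, instead of always waiting for the full timeout.
    readable, _, _ = select.select([pg_socket_fd, wakeup_r], [], [], timeout)
    return wakeup_r not in readable  # False means we were asked to stop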

Complete/fix mypy integration

This is a 2-fold ticket:

If this ticket is successful, then in a new Python project using mypy, with procrastinate installed from PyPI, the following lines should behave as commented:

app = procrastinate.App(...)
app.run_worker(["hello"])  # should pass mypy
app.run_worker("hello")  # should fail mypy

Never-awaited coroutine RuntimeWarning in the unit tests

The execution of the unit tests currently leads to two RuntimeWarning: coroutine 'xxxxxx' was never awaited warnings:

====================================================================================== warnings summary =======================================================================================
tests/unit/test_testing.py::test_listen_for_jobs_run                                                                                                                                           
  /home/elemoine/src/procrastinate/tests/unit/test_testing.py:259: RuntimeWarning: coroutine 'BaseJobStore.listen_for_jobs' was never awaited                                                  
    job_store.listen_for_jobs(queues=["a", "b"])                                                                                                                                               
                                                                                                                                                                                               
tests/unit/test_testing.py::test_wait_for_jobs                                                                                                                                                 
  /home/elemoine/src/procrastinate/tests/unit/test_testing.py:264: RuntimeWarning: coroutine 'InMemoryJobStore.wait_for_jobs' was never awaited                                                
    job_store.wait_for_jobs()

It would be good to fix that.

PR to come.

Disable the "Warning task name different from python path" upon request

It would be nice to provide a simple way to disable the warning that triggers every time you define a task with a name that is not its Python import path. Of course, using a logging filter, people can already do that, but it lacks a bit of user friendliness. I'm thinking about a boolean parameter in App.__init__()?

https://github.com/peopledoc/procrastinate/blob/81fe265a7a1865db77f93302cd82e8dcf9b22421/procrastinate/tasks.py#L83-L93

Log the result of the task

It could be interesting to accept that tasks can return anything, and that the return value will be added to the logs.

This way, the logs would naturally be enriched without having to add logging to the tasks. Of course, a task could always return None if there's nothing smart to say.

I can imagine this being a nice help for ops more often than not.

Fix coverage

Coverage is reported under 100%, but when launched locally, coverage is 100%. Something is wrong in codecov.

How to discover tasks ?

When we launch a worker, it will receive jobs from the database. We have to link them to tasks, but 100% of the codebase may not have been imported, which means some tasks may not have been registered yet.

It seems there are 3 solutions:

  • Provide the worker with a list of imports to do before starting, à la Celery
  • Start the worker on a given module and only gather the tasks from this module. If we want tasks implemented elsewhere, we have to import them in this module (à la Dramatiq, if I'm not mistaken). This module may or may not contain the task manager itself, and this may or may not replace the "registration" part of the task decorator. I think this can only work if the task manager lives in cabbage instead of in the user's module, which I'm not a huge fan of, or with metaclasses, which I'd rather avoid.
  • Don't register tasks, but have a task name be a Python importable path, and load tasks lazily when we receive them (sketched below). This means we can't do a list of tasks, which might appear limiting in the long run.
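
A sketch of how the lazy loading in the third option could work, using only importlib (load_task_from_path is a hypothetical helper name):

import importlib

def load_task_from_path(task_name):
    # "mycode.sum" -> import module "mycode", then fetch attribute "sum".
    module_path, _, attribute = task_name.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attribute)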

Ping @mgu @Evelf @sdispater

Add an exponential backoff

For now, the only retry strategy we've implemented is the constant backoff, but an exponential one makes sense too.
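
The wait time would typically grow as a power of the attempt number, something like:

# Sketch: exponential backoff, the wait (in seconds) doubles with each attempt.
def exponential_wait(attempts, base=2, multiplier=1):
    return multiplier * base ** attempts  # 1, 2, 4, 8, ... for base=2, multiplier=1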

Improve cli defer syntax

Current syntax is

procrastinate defer dotted.path.to.task '{"json": "arguments"}'

I'm thinking:

procrastinate defer dotted.path.to.task json=arguments

Task arguments and defer parameters would be recognized through the fact that defer parameters are flags with --, - not being a valid Python identifier (though it could be passed as **kwargs, but...).

But then how to do typing ?
I'm not sure the solutions we used for the similar problem in https://github.com/peopledoc/vault-cli apply here. And I'd like to avoid yaml here, because it's nowhere else in the project.
Interestingly, ansible has the same problem with -e and it's a mess.

  • I'd like explicit strings, like json="arguments" (so all values right of the equals sign would be passed to json.loads()), but double quotes are going to be eaten by bash.
  • But we could document that it's necessary to wrap double quotes in single quotes.
  • We could use the task type annotations but it really feels dangerous.
    We could type variables (scrapy/scrapy#356 (comment)) (read the rest of the ticket)
  • Or we could stay with json.
  • Finally, we could have multiple ways, as long as it doesn't complicate the code too much. JSON for complex cases, simple key-value pairs for the rest (but like, I can live without lists, but bools and ints should be first-class citizens) (see the sketch below)
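
For illustration, a minimal parsing sketch for key=value arguments with a JSON fallback (parse_defer_arguments is a hypothetical helper, not the implemented CLI):

import json

def parse_defer_arguments(pairs):
    # Each item looks like 'a=3', 'flag=true' or 'name=john'. Values are parsed
    # as JSON when possible and kept as strings otherwise, so ints and bools are
    # first-class citizens while bare strings still work.
    arguments = {}
    for pair in pairs:
        key, _, raw_value = pair.partition("=")
        try:
            arguments[key] = json.loads(raw_value)
        except json.JSONDecodeError:
            arguments[key] = raw_value
    return arguments

# parse_defer_arguments(["a=3", "b=true", "name=john"])
# -> {"a": 3, "b": True, "name": "john"}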

Interesting reads:

Any idea ?

get rid of get_global_connection

  • Link the connection object to the TaskManager,
  • Make the TaskManager extensible (inheritance or, preferably, composition) so that someone could implement their own method of connecting to the DB (maybe from Django), with a default implementation that would take no argument and read from env vars, like currently
  • Standardize whether functions receive the task manager, the connection or the cursor as parameter. Find the right one and use it consistently where applicable.
  • Use consistent connection options and cursor classes.

If tasks are coroutines, await them.

This is the second step in asynchronous compatibility.

  • #9 is for deferring jobs asynchronously
  • This ticket is for accepting a task to be a coroutine
  • #106 is for executing asynchronous tasks in parallel

Add more details to default log messages

In #4, we implemented "simple" message logs. They're so simple that they actually lack precision. The details are in the structured part, but it's not printed by default. Someone analyzing logs in a structured way will have everything they need, but someone just using the default log formatter will be in the dark.

I think it would be nice if the default logger could log the extra information too
OR
The default message could be made dynamic (through f-strings) and include the extra info.

Rename a few things

curs -> cursor
conn -> connection
task_worker.py -> worker.py
worker() -> run_worker() (or something else ?)
task run -> job

Add a very simple cron mechanism

I'm being told the project would be easier to use if we provided a simple cron-like process. It might be interesting to look into it.

@EliotBerriot, you mentioned you were maybe interested in trying a contribution?
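
A sketch of what a simple cron-like declaration could look like (the @app.periodic decorator and the timestamp argument are assumptions for illustration):

import procrastinate

app = procrastinate.App(connector=procrastinate.PsycopgConnector())

# Hypothetical: defer this task every 5 minutes; the worker passes the scheduled
# timestamp so that each run stays identifiable even when deferred late.
@app.periodic(cron="*/5 * * * *")
@app.task(queue="maintenance")
async def cleanup(timestamp: int):
    ...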
