jobber's Introduction

if anime girls were real (and they all had a crush on me) i think that would be pretty sweet

my ✨BRAND✨NEW✨ "polished" project list

bigger things

smaller things

  • recaller - async/await based function retry utility

i'll list more here as i go through my repos and clean em up

jobber's People

Contributors

greenkeeperio-bot, seapunk

jobber's Issues

job runners w/ redux-saga

I really like the idea of generators, and I love how cleanly redux-saga manages failure handlers, as well as how it "cancels" the sagas.

Things to note:

  • This is a performance optimization. This should not be high-priority.
  • Cancellable promises are a WIP, so we might use those instead.
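
A minimal sketch of what a generator-driven handler with saga-style cancellation could look like; runJob and handleJob are hypothetical names and not part of jobber:

    // Drives a generator the way redux-saga drives a saga: each yielded
    // value is awaited, and cancellation runs the generator's finally blocks.
    function runJob(generatorFn, job) {
      let cancelled = false;
      const iterator = generatorFn(job);

      async function drive() {
        let input;
        while (true) {
          if (cancelled) {
            iterator.return(); // let the generator clean up, saga-style
            return { cancelled: true };
          }
          const { value, done } = iterator.next(input);
          if (done) return { result: value };
          input = await value; // each yielded value is awaited, like a saga "effect"
        }
      }

      return { done: drive(), cancel: () => { cancelled = true; } };
    }

    // Hypothetical handler: every `yield` is a point where the job can be cancelled.
    function* handleJob(job) {
      const data = yield Promise.resolve(job.data);
      yield new Promise((resolve) => setTimeout(resolve, 100)); // simulate work
      return 'processed ' + data;
    }

    const run = runJob(handleJob, { data: 'example' });
    run.done.then((outcome) => console.log(outcome)); // { result: 'processed example' }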

ID collisions

Stop worrying about ID collisions. Just use a UUID.

But regardless, I'd like to have an idea of what can happen if an ID collides.

runner registration

How should runners be registered, and how should they be managed?

Things to consider:

  • Runner fails
  • Server fails
  • Connection between runner and server fails

server event system

For now, I'll be using eventemitter3.

Will generator middleware be a better fit?

I'm also a bit paranoid about failures in the functions responsible for processing the jobs.

Manually removing jobs?

Related: #17

  • Should we let clients manually remove jobs?
  • When should a job be able to be removed?
  • If #17 is implemented, do we "remove" the job, or just mark it as no longer persistent?
  • When should a job even be able to be marked as no longer persistent?

Also, how do clients know that a job has been removed and that they will no longer receive updates on it?

Timeouts

Runners' transports have timeouts, but not the runners themselves.

In the case of the HTTP transport:

  • timeout (Number): Sets the timeout.check and timeout.update values.

...

  • timeout.check = 120000: How long to wait before timing out the request to the server that checks for jobs to process.
  • timeout.update = 120000: How long to wait before timing out the request to the server that updates the job run status.

Jobs have three timeouts:

  • timeout (Number): Sets the timeout.total value.

...

  • timeout.total = 120000: The "total" timeout: how long a job can be in the server before giving up. This is idle time + run time. If defined, it is used along with timeout.idle and timeout.run.

OR

  • timeout.idle = 120000: How long a job can be in queue before giving up.
  • timeout.run = 120000: How long a job can be running before giving up.

Server (and server transports) have one timeout:

  • timeout (Number): Sets the timeout.client value.

...

  • timeout.client = 120000: How long a client (runner or subscriber) can be unresponsive before the server cleans up resources related to it.

A timeout of 0 means no timeout.
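
To tie the above together, here is a sketch of how the shorthand number vs. nested object forms might be normalized; the field names and defaults come from this issue, but the helper functions and their behavior are assumptions, not jobber's API:

    const DEFAULT_TIMEOUT = 120000;

    // Accepts either `timeout: 60000` (shorthand) or `timeout: { check, update }`.
    function normalizeRunnerTimeouts(timeout) {
      if (typeof timeout === 'number') {
        return { check: timeout, update: timeout };
      }
      const opts = timeout || {};
      return {
        check: typeof opts.check === 'number' ? opts.check : DEFAULT_TIMEOUT,
        update: typeof opts.update === 'number' ? opts.update : DEFAULT_TIMEOUT,
      };
    }

    // A plain number sets timeout.total; otherwise idle/run may be given separately.
    function normalizeJobTimeouts(timeout) {
      if (typeof timeout === 'number') {
        return { total: timeout };
      }
      const opts = timeout || {};
      return {
        total: opts.total, // idle time + run time, if defined
        idle: typeof opts.idle === 'number' ? opts.idle : DEFAULT_TIMEOUT,
        run: typeof opts.run === 'number' ? opts.run : DEFAULT_TIMEOUT,
      };
    }

    console.log(normalizeRunnerTimeouts(60000)); // { check: 60000, update: 60000 }
    console.log(normalizeJobTimeouts({ idle: 30000, run: 90000 }));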

Job handling: No handlers?

What do we do if a job is created that there are no handlers for?

Right now, I'll default to fast-failing the job, but we might want to do something else, like waiting for a handler to appear before timing the job out.
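
A sketch of the fast-fail default, assuming a handlers map keyed by job name and simple state strings; none of these names are jobber's actual internals:

    function onJobCreated(job, handlers) {
      if (!handlers.has(job.name)) {
        // Fast-fail: don't let the job sit in the queue until it times out.
        return Object.assign({}, job, {
          state: 'FAILED',
          error: 'no handler registered for "' + job.name + '"',
        });
      }
      return Object.assign({}, job, { state: 'QUEUED' });
    }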

Optimized job cancellation

It'd be nice to have a job stop mid-process, instead of just ignoring the runner's incoming data.

Let's wait for cancellable promises to come through, and see how that will work. Ultimately, I want it to run a bit like how redux-saga does cancellations.
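
Until cancellable promises land, one possible stopgap (an assumption, not jobber's design) is a cancellation token the handler checks between async steps:

    function makeCancelToken() {
      let cancelled = false;
      return {
        cancel() { cancelled = true; },
        throwIfCancelled() {
          if (cancelled) throw new Error('job cancelled');
        },
      };
    }

    async function processJob(job, token) {
      const input = await Promise.resolve(job.data);           // first async step
      token.throwIfCancelled(); // bail out mid-process instead of finishing and being ignored
      await new Promise((resolve) => setTimeout(resolve, 50)); // simulate more work
      token.throwIfCancelled();
      return 'done: ' + input;
    }

    const token = makeCancelToken();
    processJob({ data: 'example' }, token)
      .then(console.log)
      .catch((err) => console.error(err.message));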

Server logger

Sometime later on, I'd like to have a way to persistently log everything that happens with the server, be it job creation, job status updates, etc.

This might be a separate package.

Delayed jobs

semver-minor

Adds three fields to Job objects:

  • state: Job.PENDING
  • delayed_until: the Date.getTime() timestamp at which the job should run
  • queued_at: #20

Server tasks:

  • Loads PENDING jobs at startup
  • Sets timers for the jobs to move them out of PENDING, and into QUEUED

If an error occurs in this detached state, loudly log the error and try again after 1s.
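
A sketch of the startup and timer behavior described above, assuming a hypothetical store with loadJobsByState/updateJobState and a queueJob helper:

    async function loadPendingJobs(store, queueJob) {
      const pending = await store.loadJobsByState('PENDING');
      for (const job of pending) {
        scheduleDelayedJob(store, queueJob, job);
      }
    }

    function scheduleDelayedJob(store, queueJob, job) {
      const delay = Math.max(0, job.delayed_until - Date.now());
      setTimeout(async () => {
        try {
          await store.updateJobState(job.id, 'QUEUED'); // PENDING -> QUEUED
          queueJob(job);
        } catch (err) {
          // Detached failure: loudly log and try again after 1s.
          console.error('failed to queue delayed job', job.id, err);
          setTimeout(() => scheduleDelayedJob(store, queueJob, job), 1000);
        }
      }, delay);
    }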

Priority queues

Do some research into them, and figure out how they work.

Frontend

For XBPF, I'll need a friendly frontend to look at to keep track of jobs and maybe even manage them.

The frontend will need to support multiple job servers, and if there is only one job server configured, that will be the "default" job server that is selected on visit.

job mergability

While documenting manager.create(), I'm explaining the options.mergable property.

Here's the question: When is a job mergable?

  • When name matches?
  • When both name and data match?
  • When name, data, and options match?
  • Do we allow a custom function for selecting a job to merge with?

Do we even do job merging like this? We could just expose a job search API and create handlers from those IDs. There's probably no rhyme or reason to job mergability.
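
For illustration only, here is what the matching strategies listed above might look like; findMergeTarget, sameJSON, and the strategy names are hypothetical, nothing is decided:

    function sameJSON(a, b) {
      // naive structural comparison, fine for a sketch
      return JSON.stringify(a) === JSON.stringify(b);
    }

    function findMergeTarget(existingJobs, job, strategy) {
      return existingJobs.find((other) => {
        if (typeof strategy === 'function') return strategy(other, job); // custom selector
        if (strategy === 'name') return other.name === job.name;
        if (strategy === 'name+data') {
          return other.name === job.name && sameJSON(other.data, job.data);
        }
        // 'name+data+options'
        return other.name === job.name
          && sameJSON(other.data, job.data)
          && sameJSON(other.options, job.options);
      });
    }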

Server listeners

Currently, there are no listeners, so there is just the local API.

How would we configure those listeners?

  • Which listeners? REST and WebSockets.
  • Configuration
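
Purely as an illustration of the question, one possible configuration shape (not a decided API; the listener types and option names are assumptions):

    const serverConfig = {
      listeners: [
        { type: 'rest', port: 8080, prefix: '/api' },
        { type: 'websocket', port: 8081 },
      ],
    };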

Job retry backoff

semver-minor

Adds three fields to Job:

  • state: DELAYED_ATTEMPT
  • attempt_wait_until: The time at which the next attempt will take place.
  • backoff_config: Arbitrary JSON data that configures the backoff algorithm.

If a backoff configuration exists, the retry logic no longer puts the job straight into the QUEUED state; instead, it puts it into a DELAYED_ATTEMPT state and sets the attempt_wait_until field. A timer then moves the job out of DELAYED_ATTEMPT and into QUEUED.

After a crash, the server loads DELAYED_ATTEMPT jobs at startup and re-creates the timers that will move them out of DELAYED_ATTEMPT and into QUEUED.


If an error occurs in this detached state, loudly log the error and try again after 1s.
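
A sketch of this flow, assuming an exponential backoff_config with base/factor/max fields (the formula and field names are assumptions; only the DELAYED_ATTEMPT -> QUEUED transitions come from this issue):

    const DEFAULT_BACKOFF = { base: 1000, factor: 2, max: 60000 };

    // Exponential backoff is just one possibility; backoff_config is arbitrary JSON.
    function nextAttemptDelay(backoffConfig, currentAttempt) {
      const cfg = Object.assign({}, DEFAULT_BACKOFF, backoffConfig);
      return Math.min(cfg.max, cfg.base * Math.pow(cfg.factor, currentAttempt - 1));
    }

    // Move a DELAYED_ATTEMPT job into QUEUED once attempt_wait_until passes.
    function scheduleDelayedAttempt(store, queueJob, job) {
      const delay = Math.max(0, job.attempt_wait_until - Date.now());
      setTimeout(async () => {
        try {
          await store.updateJobState(job.id, 'QUEUED'); // DELAYED_ATTEMPT -> QUEUED
          queueJob(job);
        } catch (err) {
          console.error('failed to queue delayed attempt', job.id, err);
          setTimeout(() => scheduleDelayedAttempt(store, queueJob, job), 1000);
        }
      }, delay);
    }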

Job ID generation

Ignore the babble, I'm using UUIDs now


Job IDs will be issued by an ID broker (the manager) to prevent race conditions. The job ID format would be Date.now() plus a counter; the counter resets to 0 every time the Date.now() value changes, so multiple jobs can be created in the same millisecond. Date.now() is a good safeguard in the sense that, even if we do not "store" job IDs, we do not need to keep track of collisions.
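
For reference only, a sketch of the superseded Date.now()-plus-counter scheme described above (jobber uses UUIDs instead):

    function makeIdBroker() {
      let lastMs = 0;
      let counter = 0;
      return function nextId() {
        const now = Date.now();
        if (now !== lastMs) {
          lastMs = now;
          counter = 0; // counter resets whenever the millisecond changes
        }
        return now + '-' + counter++;
      };
    }

    const nextId = makeIdBroker();
    console.log(nextId(), nextId()); // e.g. "1500000000000-0 1500000000000-1"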

Job retrying

semver-minor

Adds three fields to Job:

  • state: Job.FAILED_ATTEMPT
  • max_attempts: Number that cannot go below 1. The number of attempts the job can run after failing before giving up.
  • current_attempt: The current job attempt.

This changes the failure behavior, by putting the job into an intermediate FAILED_ATTEMPT state instead of the final FAILED state.

Once a job fails, the attempt logic runs, which either moves the job from the FAILED_ATTEMPT state to the final FAILED state, or puts it back into the QUEUED state, incrementing the current_attempt field.

If a crash happens between the two atomic operations (the intermediate failure logic and the retry logic), the server fetches all FAILED_ATTEMPT jobs when it starts back up and re-runs the retry logic; see the sketch below.
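
A sketch of the attempt logic and the startup recovery pass, with an assumed store/queueJob API:

    async function handleFailedAttempt(store, queueJob, job) {
      if (job.current_attempt >= job.max_attempts) {
        // Out of attempts: FAILED_ATTEMPT -> FAILED (final).
        await store.updateJob(job.id, { state: 'FAILED' });
        return;
      }
      // Otherwise: FAILED_ATTEMPT -> QUEUED, incrementing current_attempt.
      await store.updateJob(job.id, {
        state: 'QUEUED',
        current_attempt: job.current_attempt + 1,
      });
      queueJob(job);
    }

    // At startup, re-run the same logic for any jobs stranded mid-transition.
    async function recoverFailedAttempts(store, queueJob) {
      const stranded = await store.loadJobsByState('FAILED_ATTEMPT');
      for (const job of stranded) {
        await handleFailedAttempt(store, queueJob, job);
      }
    }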

Multiple servers

As at least a fun idea, I want to figure out how I can make jobber work with multiple servers, instead of one.

Challenge: Server is currently the source of truth. By having multiple servers, the source of truth will probably need to be moved elsewhere.

idea tracker 2: architecture nightmare boogaloo

Things to keep in mind when building this thing:

  • THE SERVER STORE IS THE SOURCE OF TRUTH AND HANDLES ATOMIC OPERATIONS
  • THE STORE IS JUST A PERSISTENCE LAYER
  • All-atomic operations where it counts
  • Server API architecture: #7
  • Fault tolerance: #9
  • Server listeners (external APIs): #10
  • How the runner "registers" its runs: #8
  • Merging of jobs (XBPF requirement): #4
  • Alternative job data formats: #3
  • Timeouts: #12
  • Job ID generation: #1

fault tolerance

Here are a few cases I need to figure out:


Scenario 1: Server is started with a persistent backend, runners and jobs are created, server abruptly crashes, and then restarts immediately after.

Scenario 1 questions:

  1. Do we recover/keep the jobs?
  2. Do we recover/keep the runners?

Scenario 2: Server is started, runners and jobs added, server abruptly crashes, but does not get restarted.

Scenario 2 questions:

  1. How do the clients handle this?
  2. What do the runners do when they cannot connect to the server, and as a result, cannot send job responses?

Completed job persistence

Have a setting that keeps a job from being deleted when it is complete and no handlers have a pending status query on the job.

Job time fields

  • created_at - Time job was added to the DB. Never empty.
  • queued_at - Time the job left its initial delay. If there was no delay, same as the time the job was created. (#19)
  • started_at - Time the job was started. This field is overwritten on every retry attempt.
  • finished_at - Time the job entered its "final" state, COMPLETE or FAILED.
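
An illustrative job record showing how these fields relate (millisecond timestamps; the rest of the shape is assumed):

    const job = {
      id: '0b9d3c1e-5f2a-4c53-9b1d-7a6e2f8c4d10',
      state: 'COMPLETE',
      created_at: 1500000000000,  // added to the DB, never empty
      queued_at: 1500000005000,   // initial delay ended (same as created_at if no delay)
      started_at: 1500000007500,  // overwritten on every retry attempt
      finished_at: 1500000009000, // entered a final state (COMPLETE or FAILED)
    };
    console.log(job.finished_at - job.created_at, 'ms from creation to final state');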
