
active_job_status's People

Contributors

arachman · cdale77 · dferrazm · dmitrypol · mparramont · zenizh

active_job_status's Issues

Make use of ActiveSupport::Cache?

You may be able to abstract out the Redis dependency by using ActiveSupport::Cache as your datastore.

This would make your gem useful to people running arbitrary ActiveJob backends (e.g. people using delayed_job with no Redis in their stack) alongside ActiveSupport::Cache, and would remove a bit of complexity to boot.
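For illustration, a hedged sketch of what such a configuration could look like, assuming the gem exposed a pluggable store (the `ActiveJobStatus.store=` setter used here exists in later versions of the gem; `MemoryStore` is just one example backend):

```ruby
# config/initializers/active_job_status.rb
# Assumption: the gem accepts any ActiveSupport::Cache implementation.
ActiveJobStatus.store = ActiveSupport::Cache::MemoryStore.new

# Or reuse the app's existing cache, whatever backend it wraps:
# ActiveJobStatus.store = Rails.cache
```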

Simple batch usage.

I have to import products from a huge xlsx file.

What I have in mind:

JOB: Products (splits the data and generates many JOB: Create product jobs)

  • after callback: JOB: Mark other products as discontinued

JOB: Create product

  • after callback: JOB: Create Prices, JOB: Create Properties

How can I make sure that all of the JOB: Create product, JOB: Create Prices, and JOB: Create Properties jobs are processed/finished before JOB: Mark other products as discontinued starts?
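One possible pattern, as a sketch under assumptions rather than a gem feature: track the fan-out jobs in a JobBatch and have the finalizer job re-enqueue itself until the batch looks complete. The job class names are hypothetical, the `.status` reader on the fetched status object is an assumption, and the grandchild Prices/Properties jobs are left out, since adding their ids to an already-created batch is exactly the open question here.

```ruby
# Sketch only: SplitJob and MarkDiscontinuedJob are hypothetical names for
# the jobs described above; error handling is omitted.
class SplitJob < ActiveJob::Base
  def perform(rows)
    job_ids = rows.map { |row| CreateProductJob.perform_later(row).job_id }
    ActiveJobStatus::JobBatch.new(batch_id: "product-import", job_ids: job_ids)
    MarkDiscontinuedJob.set(wait: 1.minute).perform_later("product-import")
  end
end

class MarkDiscontinuedJob < ActiveJob::Base
  def perform(batch_id)
    job_ids = Array(ActiveJobStatus::JobBatch.find(batch_id: batch_id))
    unless job_ids.all? { |id| ActiveJobStatus.fetch(id).status == :completed }
      # Batch not finished yet: check again later.
      return self.class.set(wait: 1.minute).perform_later(batch_id)
    end
    # ... mark the remaining products as discontinued ...
  end
end
```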

LoadError: cannot load such file -- active_job_status/redis

The README.md says to use:

require "active_job_status/redis"

…but adding this to the class where I create jobs leads to this error:

LoadError: cannot load such file -- active_job_status/redis
    from …/active_support/dependencies.rb:274:in `require'
    from …/active_support/dependencies.rb:274:in `block in require'
    from …/active_support/dependencies.rb:238:in `block in load_dependency'
    from …/active_support/dependencies.rb:647:in `new_constants_in'
    from …/active_support/dependencies.rb:238:in `load_dependency'
    from …/active_support/dependencies.rb:274:in `require'

I'm using Rails 4.2.5, Ruby 2.2.2, Sidekiq 4.0.1, Redis 3.2.2 and ActiveJobStatus 0.0.5.

I believe the problem is that the documentation describes a newer build than what has been released as a gem (version 0.0.5 dates from March 2015).

query batch completion by batch_id?

What is the suggested way to query batch completion with a given batch_id?

My use case: I have an ActiveJob class that queries many URLs, one after the other. Only after all URLs have been queried do I want to run a second ActiveJob class. Both jobs are invoked from a rake task.

My approach:

  1. enqueue all jobs of the first class and collect their job_ids
  2. create a unique batch_id
  3. create an ActiveJobStatus::JobBatch with this batch_id and the job_ids
  4. call #perform_later on the second job class, passing the batch_id
  5. in that second job class, find the batch and query whether it's completed
  6. if completed, do the work; if not, reschedule the job
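Step 5's check can be written nil-safely. A self-contained sketch of that predicate (the gem calls are stubbed with lambdas so the snippet runs standalone: `find` stands in for `ActiveJobStatus::JobBatch.find`, which returns an array of job ids or nil, and `fetch` for `ActiveJobStatus.fetch`):

```ruby
# Stubs standing in for the gem: find returns the batch's job ids (or nil
# for an unknown batch), fetch returns a job's status symbol.
find  = ->(batch_id) { { "batch-1" => %w[a b] }[batch_id] }
fetch = ->(job_id)   { { "a" => :completed, "b" => :completed }[job_id] }

def batch_completed?(batch_id, find, fetch)
  job_ids = Array(find.call(batch_id)) # Array(nil) => [], so no NoMethodError
  !job_ids.empty? && job_ids.all? { |id| fetch.call(id) == :completed }
end

batch_completed?("batch-1", find, fetch) # => true
batch_completed?("missing", find, fetch) # => false, not an exception
```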

The problems:

  1. There is ActiveJobStatus::JobBatch.find(batch_id:), but it chains #to_a onto #fetch without checking for nil.
  2. This (incidentally undocumented) method returns an array of job_ids, not a batch object. Therefore one can't use the #completed? method and has to roll a custom check.

…or do you have a different suggestion on how to ensure the execution order of multiple ActiveJob classes? Thanks!

active job 5.0.0 support?

Any chance of ActiveJob 5.0.0 support?

Bundler could not find compatible versions for gem "activejob":
  In snapshot (Gemfile.lock):
    activejob (= 5.0.0)

  In Gemfile:
    rails (~> 5.0.0) was resolved to 5.0.0, which depends on
      actionmailer (= 5.0.0) was resolved to 5.0.0, which depends on
        activejob (= 5.0.0)

    active_job_status (~> 1.1) was resolved to 1.1.0, which depends on
      activejob (~> 4.2)

Running `bundle update` will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.

batch.completed? returning false for completed jobs

batch.completed? seems to return false even when all the jobs in the batch have :completed status.

[33] pry(main)> batch.completed?
=> false

[34] pry(main)> batch.job_ids.each { |jid|
[34] pry(main)*   job_status = ActiveJobStatus.fetch(jid)
[34] pry(main)*   puts job_status.inspect
[34] pry(main)* }
#<ActiveJobStatus::JobStatus:0x007fbe03a23ca8 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe0402b150 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe039fbdc0 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe039bc1c0 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe0391efb0 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe03954db8 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe038c0d98 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe03849658 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe03838330 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe08f62728 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe0b5c3228 @status=:completed>
#<ActiveJobStatus::JobStatus:0x007fbe0b427360 @status=:completed>

Common batch params

Is there a way to store common job params?

For example, I want to import a spreadsheet with thousands of records.
I convert each row into a separate job and run them as a batch.
I store the result (error or success) of each row in a separate Redis list.
Then, when the batch is done, I go through my success and error lists and email the results to the batch owner.

Has anyone dealt with such an issue? Ideally I would store batch_owner, batch_error, and batch_success as common batch params.

I currently use my own solution but am looking into switching to this gem.
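Lacking gem support, one stopgap is to serialize the shared params as JSON under a key derived from the batch id. A stdlib-only sketch: a plain Hash stands in for the Redis/cache store, and the key scheme is made up.

```ruby
require "json"

store    = {} # stand-in for ActiveJobStatus.store / a Redis connection
batch_id = "import-42"

# Write the shared params once, when the batch is created.
store["batch_params:#{batch_id}"] = JSON.generate(
  batch_owner:   "owner@example.com",
  batch_error:   "batch_error:#{batch_id}",   # name of the error list
  batch_success: "batch_success:#{batch_id}"  # name of the success list
)

# Any job (or the final mailer) can read them back by batch id.
params = JSON.parse(store["batch_params:#{batch_id}"])
params["batch_owner"] # => "owner@example.com"
```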

Job status not expiring from cache

We are using Rails 6.1 and ActiveSupport::Cache::RedisStore as the store and recently found that status keys are not expiring.

The expires_in value is only specified when a job is enqueued, not for the other statuses: https://github.com/cdale77/active_job_status/blob/master/lib/active_job_status/job_tracker.rb#L11-L31

It appears that when the status is updated, the expiration on the key is cleared. This may be behavior that has changed at some point.

It would be better to specify an expires_in value for all status updates to ensure that keys are cleaned up.
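A minimal sketch of that fix, assuming the tracker's other writes mirror the enqueue call in the linked job_tracker.rb (the 3.days value is arbitrary):

```ruby
# Pass expires_in on every status transition, not just on enqueue, so the
# key's TTL is refreshed instead of cleared when the value is rewritten.
ActiveJobStatus.store.write(job_id, "working",   expires_in: 3.days)
ActiveJobStatus.store.write(job_id, "completed", expires_in: 3.days)
```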

We worked around this issue by configuring a default expires_in value for the store:

ActiveJobStatus.store = ActiveSupport::Cache::RedisStore.new(expires_in: 3.days)

Reproduction of the issue using a console:

Running via Spring preloader in process 77990
Loading development environment (Rails 6.1.3.2)
[1] pry(main)> job_id = SecureRandom.uuid
=> "ca1222e4-5bb3-4b02-978d-dfec4f03f108"
[2] pry(main)> ActiveJobStatus.store.write(job_id, "queued", expires_in: 3.days)
=> "OK"
[3] pry(main)> ActiveJobStatus.store.send(:read_entry, job_id)
=> #<ActiveSupport::Cache::Entry:0x00007fadd78f5818 @created_at=1621937433.6319191, @expires_in=259200.0, @value="queued", @version=nil>
[4] pry(main)> ActiveJobStatus.store.redis.ttl(job_id)
=> 259170
[5] pry(main)> ActiveJobStatus.store.write(job_id, "working") # update without expires_in
=> "OK"
[6] pry(main)> ActiveJobStatus.store.send(:read_entry, job_id)
=> #<ActiveSupport::Cache::Entry:0x00007fadd63b18f8 @created_at=1621937483.174815, @expires_in=nil, @value="working", @version=nil> # expires_in value cleared from entry
[7] pry(main)> ActiveJobStatus.store.redis.ttl(job_id)
=> -1 # querying the key directly shows no TTL in redis
[8] pry(main)> ActiveJobStatus.store.class
=> ActiveSupport::Cache::RedisCacheStore

[FEATURE REQUEST] Web Interface

It would be nice if you could offer something like Sidekiq's or Resque's web interface to provide an overview of job statuses, especially since Sidekiq's Status plugin can't work with ActiveJob jobs.

Batch callbacks

Summary

It'd be great if the batches supported callbacks.

A plausible workaround is to check if a batch is completed whenever a job finishes, but that's not very elegant.
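For what it's worth, that workaround can be wired up with ActiveJob's built-in after_perform hook. A sketch: BatchedJob and BatchCompletedJob are hypothetical names, the batch id is assumed to be the job's first argument, and the `.status` reader is an assumption.

```ruby
class BatchedJob < ActiveJob::Base
  # Runs after every successful perform of a subclass.
  after_perform do |job|
    batch_id = job.arguments.first
    job_ids  = Array(ActiveJobStatus::JobBatch.find(batch_id: batch_id))
    if job_ids.all? { |id| ActiveJobStatus.fetch(id).status == :completed }
      BatchCompletedJob.perform_later(batch_id) # hypothetical "callback" job
    end
  end
end
```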

Not detecting deleted jobs

If I have a job that raises an exception and lands in the retry queue, and I then delete it from the retry queue, its status stays at working. It should move to a failed state, and there should be a failed? method to check for that.

It seems like https://github.com/cdale77/active_job_status/pull/18/files is addressing this, although I'm not sure whether the way it catches exceptions handles the retry-queue case.
