
sidekiq-unique-jobs's Issues

If a job is deleted from the enqueued list, it's still unique and new jobs can't be added.

For example I have a job:

class TestJob
  include Sidekiq::Worker

  sidekiq_options queue: 'high',
                  unique: true,
                  unique_args: ->(args) { [ args.first ] }

  def perform(arg)
    # do something
  end
end

And calling it:

[1] pry(main)> TestJob.perform_async 1
=> "de5ff32394dbcdab2128d5ee"

Then if I go to the Sidekiq web interface and delete it from the enqueued list before it is processed, I can't add a new one with the same argument.

[2] pry(main)> TestJob.perform_async 1
=> nil

I'm pretty sure this is not the expected behaviour. Am I right?

Scheduled Unique Jobs Not Being Enqueued

Since updating Sidekiq to 2.12.1, unique jobs that I try to schedule 5 minutes in the future with:

JobClass.perform_in(5.minutes, args)

do wind up in the schedule, but then when five minutes rolls around they go away and are not enqueued or performed. It looks like there was a change to the way Sidekiq uses middleware for scheduled jobs (it now calls client middleware when scheduled jobs or retries are put on the queue).

Is it possible that the sidekiq-unique-jobs middleware is setting the payload hash key when the job is scheduled, and since the new Sidekiq version also calls client middleware when the scheduled job is then enqueued, it's not enqueued because the hash is already there?
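
If that hypothesis is right, one hedged workaround on the client side would be to only take the lock when the job is actually pushed onto a queue, not when it is first scheduled. A minimal sketch, assuming (per Sidekiq's client API) that the initial scheduled push carries an 'at' key in the item hash at client-middleware time while the later push from the poller does not; the class and helper names are illustrative, not the gem's code:

# Hypothetical client middleware guard -- illustrative only, not the gem's actual code.
class SkipLockWhenScheduling
  def call(worker_class, item, queue, redis_pool = nil)
    # 'at' is assumed to be present only on the initial perform_in/perform_at push;
    # defer uniqueness locking until the job is pushed onto a real queue.
    return yield if item['at']

    acquire_unique_lock(item) ? yield : nil
  end

  private

  # Placeholder for the real locking logic (e.g. SETNX on the payload hash).
  def acquire_unique_lock(item)
    true
  end
end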

Optimize Redis usage

Right now the uniqueness check requires three network round trips to Redis for each job pushed:

  1. watch
  2. get
  3. (multi / setex) || unwatch

Redis 2.6.12+ has new flags for set which potentially allow this operation with a single command:

conn.set(payload_hash, 1, nx: true, ex: expires_at) # value 1 or 2, depending on unlock order

You don't document a minimum Redis version, so requiring Redis 2.6+ would be your call.
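
For reference, a minimal sketch of what the single round trip could look like with redis-rb, assuming Redis >= 2.6.12; the key name and TTL below are illustrative, not the gem's internals:

require 'digest'
require 'redis'

conn = Redis.new
# Illustrative key; the real gem derives it from the worker class and unique args.
payload_hash = "sidekiq_unique:#{Digest::MD5.hexdigest('TestJob:[1]')}"

# SET with NX and EX acquires the lock and sets its expiry atomically,
# replacing the WATCH / GET / (MULTI + SETEX) / UNWATCH sequence.
if conn.set(payload_hash, 1, nx: true, ex: 30 * 60)
  puts 'lock acquired in a single round trip; safe to push the job'
else
  puts 'a job with the same unique arguments already holds the lock; skip the push'
end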

Documentation Not Clear

I'm trying to understand what this gem does, but the documentation isn't very clear on a basic level. So we're making jobs unique based on worker and arguments. What does that mean? Is only one unique job allowed in the queue at a time? (Meaning that if I put two different jobs in the queue that each match the unique_args constraint, is the second job never executed?) Or are two unique jobs allowed in the queue, but the latter always executed after the former has concluded?

Using with sidekiq delayed extensions

I'm currently using unique to guarantee I don't send multiple emails after jobs complete.

I end up having to wrap the email call in a worker like:

class UniqueMailer
  include Sidekiq::Worker

  sidekiq_options unique: true, unique_job_expiration: 30 * 60, unique_unlock_order: :never

  def perform(resource_id)
    Mailer.delay.mail(resource_id)
  end
end

It would be nice if something like the following worked:

Mailer.delay.mail(resource_id, unique: true, unique_job_expiration: 30 * 60, unique_unlock_order: :never)

or a new method

Mailer.delay.unique_mail(resource_id, unique: true, unique_job_expiration: 30 * 60, unique_unlock_order: :never)

Thoughts?

Runtime uniqueness when using :before_yield as unlock order

I have a specific use case for sidekiq uniqueness in a project I work on. We need to be able to enqueue more jobs while a job is running, but we never want two unique jobs to run concurrently. As far as I understand there is no way to do this: the :after_yield unlock order won't allow enqueuing more jobs while a worker is running, and :before_yield allows simultaneous workers. I made a fork adding a locking mechanism for each running job to fix our problem. You can find it here:
https://github.com/tsubery/sidekiq-unique-jobs/tree/runtime_uniqueness
I also added some documentation describing it.
Do you think that is something that might be interesting to other users of the gem?

Crash handling

Currently Unique Jobs does not handle crashes. So if a Sidekiq worker crashes, whatever it was doing is lost (except with Sidekiq Pro). When using a worker with unique enabled, those unique job keys persist in Redis with no way of clearing them out (short of deleting them manually).
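
Until crashes are handled, the manual cleanup mentioned above is about the only option. A rough sketch of what that could look like, assuming the sidekiq_unique key prefix seen elsewhere in these issues; adjust the pattern to your own namespace and configuration:

require 'sidekiq'

# One-off cleanup of stale uniqueness locks left behind by crashed workers.
# KEYS scans the whole keyspace, so treat this as a maintenance task, not per-request code.
Sidekiq.redis do |conn|
  stale = conn.keys('sidekiq_unique:*')
  conn.del(*stale) unless stale.empty?
end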

Server middleware removes payload hash key before expiration

Hi,

I'm using this gem to throttle duplicate jobs queued within a 24 hour window.

Unfortunately, the server-side middleware is not letting me achieve this. Once the job is processed by the server middleware, the payload_hash key is removed, whereas I expect it to just expire after my TTL. To get around this I've put in a hack to set the "unique_unlock_order" option to -1, so that the key is never deleted.

I'm a bit confused: if the purpose of the gem is to ensure unique jobs, why would the key ever be removed rather than just left to expire on its own?

I'm also a bit unclear on the use case for the server-side piece entirely, so maybe you could provide an example?

Short jobs are not unique for the given time window

I ran into a bug at work where the same job was being run multiple times (sending out emails to users). So I did some testing and came up with a minimal reproduction of the problem here: https://github.com/hqmq/sidekiq-not-unique-jobs

To run it, just bundle install and then start Sidekiq as normal in one terminal:

$ bundle exec sidekiq -r config/bootstrap.rb

Then in a second terminal run:

$ ruby test_uniqueness.rb
2014-02-26T21:50:12Z 67772 TID-znnu9k INFO: Sidekiq client with redis options {:url=>"redis://localhost:6379", :namespace=>"sk"}
Expecting the counter to = 1
counter = 9

mock_redis and the mess

We are running into an issue where the tests pass on a local machine but fail on CircleCI.
Upon closer inspection, I can see that between tests the mock_redis instance that SidekiqUniqueJobs points to needs to be cleared so that the preconditions of the specs can be guaranteed.

That said, I think the presence of mock_redis makes the gem complicated and isn't worth the hassle. In my opinion, sidekiq-unique-jobs should just use whatever Redis Sidekiq is using, even in test mode. That would lead to more predictable test behavior across environments.

clarification on unique_args

Does the args filter need to return an array?
Or can it return any unique string (or object that responds to .hash, etc)?

For example would this work?

class SomeJob
  include Sidekiq::Worker
  sidekiq_options queue: :critical, unique: true, unique_args: :args_filter

  def self.args_filter(*args)
    args.first
  end

  def perform(object_id, attempts=1)
    # ...
  end
end

Second question: does args_filter need to include the name of the job ("SomeJob" in this case)? Or is this added for you?

The docs don't make it super clear, because they show an example like this (but don't show the corresponding perform method signature, so it's unclear what these variables are actually referring to):

  def self.unique_args(name, id, options)
    [ name, options[:type] ]
  end

Thanks!

Missing info from README

I just found this project while googling how to make sure certain Sidekiq jobs are not executed multiple times. sidekiq-unique-jobs seems to do exactly that... awesome!

I think there is some info missing in the README though, specifically:

  • Are worker arguments taken into account? So if I have a HardWorker and I call HardWorker.perform_async('bob', 5) multiple times, that job should obviously only be queued once. But what if I call HardWorker.perform_async('bob', 5) and HardWorker.perform_async('jane', 10)? Are both those jobs queued? I suppose so but I'm not 100% sure.
  • Why is the expiration parameter needed? Does it mean that by default the same job cannot be enqueued again for up to 30 minutes after it was removed from the queue?

I think both these points (and possibly more) should be explained in the README.
I'm happy to prepare a pull request for it, if you answer my questions in here.

Thanks for your work on this!

undefined `configuration` when using .configure

When doing

SidekiqUniqueJobs.configure do |c|
…
end

It fails with:

/Users/mrfoto/.gem/ruby/2.0.0/gems/sidekiq-unique-jobs-3.0.9/lib/sidekiq-unique-jobs.rb:26:in `configure': undefined local variable or method `configuration' for SidekiqUniqueJobs:Module (NameError)

Will a second job be lost if the same job is already queued or scheduled?

A unique job is already queued or scheduled, and then a new job comes in. Will it be lost?

class QueueWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'test', unique: true, unique_args: :unique_args

  def self.unique_args user_id, client_id, options
    [user_id, client_id]
  end

  def perform(*args)
    sleep 10
  end
end

QueueWorker.perform_async(1, 1, {})  # No.1
QueueWorker.perform_async(2, 1, {})  # No.2
QueueWorker.perform_async(1, 1, {})  # No.3

After reading the source code, I found that if the No.1 job was previously scheduled and is now being queued, the No.3 job will be lost!


LoadError: cannot load such file -- mock_redis

I am receiving the following error when running my tests (MiniTest via rake test command):
LoadError: cannot load such file -- mock_redis

Some excerpts of how I use sidekiq-unique-jobs:

Gemfile:

# Use sidekiq for background tasks
gem 'sidekiq'

# Use sidekiq enhancement for unique jobs
gem 'sidekiq-unique-jobs'

Worker:

class MyWorker
  include Sidekiq::Worker
  sidekiq_options queue: :my_queue, unique: true, 
                  unique_job_expiration: 24 * 60 * 60

  def perform(user_id)
    # some code
  end
end

test_helper:

require 'sidekiq/testing'
Sidekiq::Testing.fake!

I'm not sure if I'm doing something wrong here or if there is an issue with the combination of Rails 4.1.4 and the most current sidekiq and sidekiq-unique-jobs gems. Can anybody help me or fix this issue?

I think it is an issue because if I remove this gem from the project then all tests succeed.
Thanks for your help! :)
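
One workaround, in line with the mock_redis discussion further down in these issues, is to declare mock_redis explicitly in the Gemfile's test group so it can be loaded alongside sidekiq/testing:

# Gemfile
group :test do
  gem 'mock_redis'
end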

undefined method `get_sidekiq_options' for "MyScheduledWorker":String

Our Sidekiq redis instance is shared among multiple services.
So workers are available in one of the repos but not in the others.

When a scheduled or retried job is being consumed by the Sidekiq poller running on another service, we need it to safely re-enqueue the task without raising an exception.

Currently Sidekiq re-enqueues jobs whose worker class is missing without issue, but apparently the unique jobs middleware is not accounting for this case.

I have worked around the issue with a monkey patch, but an official fix would be much appreciated. Thanks for the hard work!
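
For context, the workaround amounts to something like the sketch below; the method name and fallback are illustrative guesses, not the gem's API or the actual monkey patch:

require 'sidekiq'

# Hypothetical defensive lookup: the middleware may receive the worker class
# name as a string for a class that isn't defined in this service.
def sidekiq_options_for(worker_class)
  klass = worker_class.is_a?(String) ? Object.const_get(worker_class) : worker_class
  klass.get_sidekiq_options
rescue NameError
  # Worker not defined here; fall back to defaults so the scheduled/retried
  # job can be re-enqueued instead of raising.
  Sidekiq.default_worker_options
end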

Throttling jobs

I have received a lot of questions about how to throttle jobs using sidekiq-unique-jobs. After searching for ways to throttle Sidekiq jobs I ended up at sidekiq-throttler, which seems like a pretty straightforward way of doing what most people want from the uniqueness expiration.

Is throttling jobs something that should exist in the sidekiq-unique-jobs gem or should we remove all expiration completely and tell people to use sidekiq-throttler instead?

The reason for clearing jobs in the first place was that some jobs never got cleared, so no new such jobs could ever be scheduled. However, in a recent release of Sidekiq, @mperham added clearing of stale jobs after 60 minutes, meaning we could only ever keep jobs for 60 minutes or we would have to turn to another solution. I am still undecided on how to proceed here, so any suggestions you have are greatly appreciated.

What is the use case for the uniqueness window?

I'm puzzled about the uniqueness window. Can someone please illustrate an example of how it's useful?

For my app I want each job performed once, which is why I sought out this plugin. I don't see how waiting any amount of time would make it ok to allow this "same" job to get enqueued.

Also, it would be nice to have a little more background in the Readme on how uniqueness is implemented. Is there a Redis query each time (performance penalty), or are all the existing job signatures stored in memory (memory constraint)?

mock redis dependency

Hi,
after this commit, 9fdc855 related to #46, when I run my app specs I receive an error because mock_redis cannot be loaded.
I think it should be a gem dependency like it was before, because otherwise people using this gem in their tests have to require mock_redis explicitly in their Gemfile, which is something I'd rather not do.

Another option would be to allow people to avoid mock_redis entirely when using the gem in test mode, via some kind of configuration.
What are your thoughts?

ConnectionPool used incorrectly - causes deadlocks

Been trying to hunt down some mysteriously stalling dynos on our heroku app, and have traced back the source of our woes:

https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/lib/sidekiq_unique_jobs/connectors/sidekiq_redis.rb#L5

The redis connector classes should not be returning connections for use outside the #with or #redis blocks - those blocks are what guarantee exclusive access to the connection and prevent other threads from touching it while it's working. This would blow up a lot more spectacularly, except that the redis connections themselves are intended to be thread safe, so the bugs end up being a lot more subtle:

  1. The deadlocks I've been hunting down:
    https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/lib/sidekiq_unique_jobs/middleware/client/strategies/unique.rb#L49-L51 the connection the multi is started on will not necessarily be the same one setex is called on. To be thread safe, redis throws a separate mutex around multi blocks (https://github.com/redis/redis-rb/blob/master/lib/redis.rb#L2147) and around every other command as well - so it's possible to call setex on a connection that's currently locked for a multi, while that multi in turn blocks on the connection waiting for the setex to finish.
  2. Potential race conditions allowing jobs to be added multiple times - watch needs to be called on the same connection you call multi on, but since #conn is potentially a different connection from the pool every time, there's no guarantee this happens.

There might be other issues stemming from this as well. I was able to reproduce the error case we're seeing with the following script: https://gist.github.com/adstage-david/d1057fb6e4b1a676cce4
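
A minimal sketch of the pattern being asked for, assuming the redis-rb 4.x API: every command that has to share a connection (WATCH, GET, MULTI/SETEX, UNWATCH) runs inside one pool.with block, so the pool can't hand the middle of the sequence to another thread. Names are illustrative, not the gem's internals:

require 'connection_pool'
require 'redis'

POOL = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }

def acquire_unique_lock(lock_key, ttl)
  POOL.with do |conn|
    # WATCH, GET and MULTI/SETEX all run on the same checked-out connection,
    # so no other thread can interleave commands mid-sequence.
    conn.watch(lock_key) do
      if conn.get(lock_key).nil?
        conn.multi { |multi| multi.setex(lock_key, ttl, 1) }
      else
        conn.unwatch
        false
      end
    end
  end
end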

Redis not mocked in testing

When I added sidekiq-unique-jobs to my Rails app I noticed that Redis calls aren't stubbed during testing.

I have the following in my spec_helper.rb

require 'sidekiq/testing'

Jobs not being executed anymore??

Hi, upgrading the gem from 3.0.2 to 3.0.9 seems to create issues while executing some kinds of jobs.

I don't know how to build a simple test case to replicate it. It happens every time we execute one particular job, but this job is no different from the others, and it doesn't even have the unique attribute, so it should be ignored entirely.
I mean, if the issue were present with this job, then it should be present with all jobs...

The job receives an options object like { data => { match_id => "XXXXYYYYY" } }. I think something about the hashing didn't work properly: if one of these jobs was enqueued while another one was already waiting to be executed (both using perform_in), then the second was not executed. It was totally forgotten, as if it had never been added to the queue.

I downgraded to 3.0.2 and the issue is solved.

Middleware not added to chain?

I added the gem to my Gemfile and I couldn't get UniqueJobs to work. I pry'ed my code and it looks like the middleware is not automatically inserted in the chain.

I tried adding it in an initializer as well as manually in a Pry session but to no avail. Oddly enough, the Client middleware is added. Any idea what might be going on? Here's some code:

[1] pry(#<NotificationPushWorker>)> Sidekiq.server_middleware
=> #<Sidekiq::Middleware::Chain:0x007fd675435e10
 @entries=
  [#<Sidekiq::Middleware::Entry:0x007fd675435d70
    @args=[],
    @klass=Sidekiq::Middleware::Server::Logging>,
   #<Sidekiq::Middleware::Entry:0x007fd675435cd0
    @args=[],
    @klass=Sidekiq::Middleware::Server::RetryJobs>,
   #<Sidekiq::Middleware::Entry:0x007fd675435c30
    @args=[],
    @klass=Sidekiq::Middleware::Server::ActiveRecord>,
   #<Sidekiq::Middleware::Entry:0x007fd675435b90
    @args=[],
    @klass=Sidekiq::Middleware::Server::Timeout>]>
[2] pry(#<NotificationPushWorker>)> Sidekiq.configure_server do |config|
[2] pry(#<NotificationPushWorker>)*   config.server_middleware do |chain|  
[2] pry(#<NotificationPushWorker>)*     require 'sidekiq-unique-jobs/middleware/server/unique_jobs'    
[2] pry(#<NotificationPushWorker>)*     chain.add SidekiqUniqueJobs::Middleware::Server::UniqueJobs    
[2] pry(#<NotificationPushWorker>)*   end    
[2] pry(#<NotificationPushWorker>)* end  
=> nil
[3] pry(#<NotificationPushWorker>)> 
[4] pry(#<NotificationPushWorker>)> Sidekiq.server_middleware
=> #<Sidekiq::Middleware::Chain:0x007fd675435e10
 @entries=
  [#<Sidekiq::Middleware::Entry:0x007fd675435d70
    @args=[],
    @klass=Sidekiq::Middleware::Server::Logging>,
   #<Sidekiq::Middleware::Entry:0x007fd675435cd0
    @args=[],
    @klass=Sidekiq::Middleware::Server::RetryJobs>,
   #<Sidekiq::Middleware::Entry:0x007fd675435c30
    @args=[],
    @klass=Sidekiq::Middleware::Server::ActiveRecord>,
   #<Sidekiq::Middleware::Entry:0x007fd675435b90
    @args=[],
    @klass=Sidekiq::Middleware::Server::Timeout>]>
[5] pry(#<NotificationPushWorker>)> Sidekiq.client_middleware
=> #<Sidekiq::Middleware::Chain:0x007fd698cc2628
 @entries=
  [#<Sidekiq::Middleware::Entry:0x007fd698cb6328
    @args=[],
    @klass=SidekiqUniqueJobs::Middleware::Client::UniqueJobs>]>

Thanks!

Unique jobs sets Sidekiq testing to inline! mode

I am having an issue where, as soon as I enable unique jobs for a worker, Sidekiq starts to operate in inline mode, thus requiring a Redis connection. I would like to continue using it in fake mode. Is this expected behavior? Here are the versions I use:

Using sidekiq 3.3.4
Using sidekiq-unique-jobs 3.0.13

Duplicated Jobs With Nested Sidekiq Workers

I have an issue when using nested workers, where uniqueness is not enforced, which leads to duplicate jobs.

When executing RunAJobWorker multiple times it leads to duplicate LongRunningWorker instances.

I am using Sidekiq 2.6.5 and Unique-jobs 2.3.2

Here is the code that will reproduce the problem:

class RunAJobWorker
  include Sidekiq::Worker
  sidekiq_options unique: true

  def perform
    # Do some db lookup to find params for the job
    id = 10
    LongRunningWorker.perform_async(id)
  end
end

class LongRunningWorker
  include Sidekiq::Worker
  sidekiq_options unique: true

  def perform(id)
    # Find model
    # model = Model.find(id)
    # model long running task
    sleep(10)
  end
end

Test suite unclear on what happens when duplicate job is attempted

The test suite says that a duplicate job is not added, which is the desired behavior.

But what else happens? Is an error raised? Is false returned? How does one know whether the push succeeded or not? As near as I can tell, perform_async just won't return a job id:

TestJob.perform_async :arg => 1
# => "1234..."

TestJob.perform_async :arg => 1    # a duplicate!
# => nil

The test suite doesn't make it clear how to check this, since it works by looking at the queue size - definitely not the right strategy in production, where jobs are being added and removed all the time.
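
Based on the behaviour shown above, the only signal at push time appears to be the return value, so a caller can at least branch on it. A minimal sketch, not an officially documented contract:

jid = TestJob.perform_async(arg: 1)

if jid
  puts "enqueued job #{jid}"
else
  puts 'duplicate rejected: a matching unique job is already queued'
end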

Sidekiq tests failed when sidekiq-unique-jobs is used

My Sidekiq tests with Sidekiq::Testing.fake! were passing until I added sidekiq-unique-jobs and enabled it in my worker. All tests pass except lines 19 and 20. Here is my test case:

class MyWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :working, :retry => 1, :backtrace => 10
  sidekiq_options :unique => true

  sidekiq_retries_exhausted do |msg|
    Sidekiq.logger.warn "Failed #{msg['class']} with #{msg['args']}: #{msg['error_message']}"
  end

  def perform(param)
    puts param
  end
end
require "spec_helper"

describe MyWorker do

    context "as a resque worker" do
        it "reponds to #perform" do
            MyWorker.new.should respond_to(:perform)
        end
    end

    it { should be_processed_in :working }
    it { should be_retryable 1 }
    it { should be_unique }

    it "enqueue a job" do
        param = 'work'
        expect(MyWorker).to have_enqueued_jobs(0)
        MyWorker.perform_async(param)
        expect(MyWorker).to have_enqueued_jobs(1)
        expect(MyWorker).to have_enqueued_job(param)
    end

    it "performs a job" do
        MyWorker.new.perform('chocolate').should be_true
    end
end
Failures:

  1) MyWorker enqueue a job
     Failure/Error: expect(MyWorker).to have_enqueued_jobs(1)
       expected MyWorker to have 1 enqueued job but got 0
     # ./spec/workers/my_worker_spec.rb:19:in `block (2 levels) in <top (required)>'

Incorrect README re: uniqueness time?

"For jobs scheduled in the future it is possible to set for how long the job should be unique. The job will be unique for the number of seconds configured (default 30 minutes) or until the job has been completed. Thus, the job will be unique for the shorter of the two."

The SETEX doesn't care about the job finishing, and if the args stay the same, the same hash will be used to look up the lock. So if you set it to be unique for 30 minutes but it finishes in one, how would it get enqueued again?

Retries duplicates unique jobs

Hi,

The problem is: when a job fails for some reason, Sidekiq requeues it, creating many duplicates despite the fact that it's unique.

Actually the problem is described here but it's not a Sidekiq issue anymore.

Is there any way to avoid duplicating?

UPD:

I'm running Puma as the web server

Example Test using Sidekiq::Testing.inline

I have a few service objects I'd like to write integration tests for, so I'm using Sidekiq::Testing.inline! so they run synchronously. This doesn't appear to work with sidekiq-unique-jobs. Is there an example or workaround for how to get the job to execute immediately?

Scheduled workers

Was this intended to work with workers that need to be scheduled?

Support for sidekiq 3?

@mperham has recently released sidekiq 3.0, but sidekiq-unique-jobs is versioned at ~> 2.6.

What's the roadmap like to support 3.0?

Lock remains when running with Sidekiq::Testing.inline!

When running within Sidekiq::Testing.inline! mode, my jobs seemed to be forever locked. I believe the reason is that when running within inline mode, the server middleware is not run.

I am thinking there should be a way to disable uniqueness when running within inline mode.

thx!

Jobs are unlocked if they fail and are retried

I just discovered that jobs are unlocked if they fail. This happens regardless of whether there are retries left or the job dies. I'm wondering if this is the intended behavior. As I understand it (and need it), a job that will be retried should still be unique.

The code responsible for this is the following: (in /lib/sidekiq_unique_jobs/middleware/server/unique_jobs.rb)

def call(worker, item, _queue, redis_pool = nil)
  # ...
  yield
ensure
  if after_yield? || !defined? unlocked || unlocked != 1
    unlock(lock_key)
  end
end

So are there any reasons for this behavior or do I miss anything?

Update

So after some research I'm starting to understand why unlocking works the way it does. I was not aware that jobs in the schedule/retry queue are pushed to the worker queues via a client push (involving Sidekiq's client middleware). If the job is still locked in that situation, it will never be pushed into its worker queue.

To circumvent this issue it might be possible to save the jid of the job that acquired the lock instead of the value 1 or 2, so the client middleware can check the jid and let the job re-enqueue itself if they match.

What do you think of this idea?
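
A rough sketch of that idea, with hypothetical helper and key names (this is not the gem's current code):

# Store the owning jid in the lock so a retried or re-enqueued job can recognise itself.
def acquire_lock(conn, lock_key, item, ttl)
  conn.set(lock_key, item['jid'], nx: true, ex: ttl)
end

def duplicate?(conn, lock_key, item)
  owner = conn.get(lock_key)
  # Not a duplicate when nothing holds the lock, or when this very job
  # (e.g. a retry, or a scheduled job being moved onto its queue) owns it.
  !(owner.nil? || owner == item['jid'])
end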

The second job does not run, even if it has different arguments

The second job does not run, even if it has different arguments.... Why is this?

# WORKER
class FooWorker
  include Sidekiq::Worker

  sidekiq_options({
    # Mitigates race conditions. Should be set to true (enables uniqueness for async jobs)
    unique: :true,
    unique_job_expiration: 5.minutes.to_i, # Unique expiration (optional, default is 30 minutes)
    unique_args: :unique_args,

    retry: false,
    backtrace: true
  })

  def self.unique_args(foo, bar)
    [foo, bar]
  end

  def perform(*args)
    sleep(30.seconds)
  end
end


# In Rails console
FooWorker.perform_async(4, 4) # => "defa813c6ff16a6b9dba6f6a"
FooWorker.perform_async(5, 5) # nil

sidekiq-unique-jobs prevents creating duplicate jobs even with sidekiq inline test mode

Hi, sidekiq-unique-jobs doesn't allow creating duplicate jobs even if sidekiq's inline test mode is turned on.
Sidekiq has an inline test mode for testing jobs: it simply invokes the perform method instead of perform_async. I ran into a problem in my tests where sidekiq-unique-jobs doesn't allow me to create a new duplicate job even though the first job has already been performed.

What does uniqueness mean in case of this gem?

I have a simple use-case:

  1. web server receives http request and schedules sidekiq job
  2. sidekiq job upon completion send http request (e.g. completion callback)

I want my sidekiq jobs to be unique for (for example) 2 hours. That means that after pushing some job with arguments {'a'=>1,'b'=>2}, I do not want any other job with exactly the same arguments to appear in the queue and/or among the "working" jobs, regardless of the first job's state (successfully finished or failed). The actual behavior is that after the first job finishes, I can add another job with the same arguments to the queue, which is definitely not the expected behavior (or am I missing something?).

So is this a bug or a feature?

Thanks!

Change log level to info rather than warn

Attempting to create a job that isn't unique shouldn't be such an important event that it shows up as a warning in the logs. I think the ability to log the attempts is great, but the value isn't there unless you're doing the kind of diagnosis where you're already reading logs at the info or debug level. Could we switch the logging level to info?

Happy to do the PR to make the change if there's agreement.

What is the exact behavior?

Great gem, thank you for making it.

I have been looking through the code and reading up and can't figure out precisely what the behavior of this gem is regarding what it looks at when determining whether or not to keep or throw away a job. Which already existing jobs are considered? There are processed, failed, busy, enqueued, retries, scheduled and dead jobs. Which of those does this gem care about when deciding whether or not to keep the second job?

Also, if a job is part-way through/currently being processed, what is the behavior? In my case I want the second job to be kept if the first job is already started as the second job may have new information that makes the first job out of date.

Thanks again.

Not all sidekiq:sidekiq_unique keys are removed from Redis

I am seeing weird behavior in production where sidekiq:sidekiq_unique keys are not always removed after a job completes.

I am running an hourly import job that queues over 1000 jobs to fetch and process data from an API. To prevent multiple workers from processing the same job, I am using sidekiq-unique-jobs with a unique_job_expiration of 1.day.

When I run this on my development machine (OS X), everything is fine. When running in production (Linux), the uniqueness keys are not always removed. This causes import jobs not to run for a whole day.

Normally (and this is what I see on my development machine) the number of sidekiq:sidekiq_unique keys is equal to the number of currently running jobs plus the queue size. When I run the same import in production, I see over 120 sidekiq:sidekiq_unique keys that are never unlocked.

My first thought was that this was caused by some worker jobs queueing other worker jobs, but I could also reproduce it in production by performing the same worker multiple times.

At this moment I don't have any clue what the cause of this is. But maybe someone has the same issue or is able to provide debugging instructions.

Unique key inconsistency between server and client

Hi,

I encountered a problem today while trying to use this gem: when using custom uniqueness parameters, I found that the key added to Redis to enforce uniqueness isn't the same as the key later deleted by the server.
In my case, this means that once my first job is pushed to the Sidekiq queue, no more jobs can be added even after the first one is processed, since the lock key is still present in Redis.

It might be because of another middleware misbehaving, but I believe sidekiq-unique-jobs should be able to avoid this kind of deadlock.
After looking a bit at the code, I saw that the name of the key used to enforce uniqueness is added to the job's payload. Is there a reason why this key isn't then used by the server to perform the unlock, instead of trying to recompute it?
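
A rough sketch of that suggestion, using a hypothetical payload key name ('unique_hash') rather than whatever the gem actually stores:

# Hypothetical server middleware: unlock exactly the key the client stored in
# the payload instead of recomputing it from the (possibly filtered) arguments.
def call(worker, item, queue)
  yield
ensure
  lock_key = item['unique_hash'] # assumed to be written by the client middleware when locking
  Sidekiq.redis { |conn| conn.del(lock_key) } if lock_key
end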

I'd be happy to provide a pull-request if you want.

Thanks a lot in advance !

Usage of sidekiq-unique-jobs with activejob

To use the uniqueness with active job:

Sidekiq.default_worker_options = {
  'unique' => true,
  # Drop the ActiveJob-generated job_id so that two enqueues of the same
  # logical job produce the same unique key
  'unique_args' => proc do |args|
    [args.first.except('job_id')]
  end
}
SidekiqUniqueJobs.config.unique_args_enabled = true

Maybe you can update the readme for this?
