
Comments (3)

brianlmoon avatar brianlmoon commented on July 17, 2024

That is a suitable approach, but it's not what the other issue was talking about. In his case, the job was failing in an unknown state, i.e. it was likely throwing a fatal PHP error and exiting.

In your case, you are handling retries at the application level, which is fine. It's not part of the Gearman spec, but a lot of the things I do in workers are not part of the spec either; they are what I need to do to make my application work.

Are you having an issue with this approach?

from net_gearman.
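The application-level retry pattern described above can be sketched as follows. This is a minimal illustration, not Net_Gearman code: the retry count travels inside the job arguments, and `submit_job`, `QUEUE`, and `MAX_RETRIES` are hypothetical stand-ins for the real client call and configuration.

```python
MAX_RETRIES = 3  # hypothetical cap; tune per application

QUEUE = []  # stands in for the gearmand queue

def submit_job(function, args):
    """Stand-in for a Gearman background-submit call
    (e.g. a client's doBackground); illustration only."""
    QUEUE.append((function, args))

def work(args):
    """Worker body: on failure, re-enqueue the same args with an
    incremented retry count instead of relying on the daemon."""
    try:
        raise RuntimeError("simulated failure")  # pretend the real work failed
    except RuntimeError:
        retries = args.get("retries", 0)
        if retries < MAX_RETRIES:
            # re-submit the logical job with retries + 1
            submit_job("my_task", dict(args, retries=retries + 1))
        else:
            pass  # give up: log, alert, or dead-letter the job

work({"order_id": 42})
print(QUEUE)
```

Because the counter rides along in the arguments, each retry is an ordinary new background job from gearmand's point of view, which is exactly what leads to the unique-job-id caveat in the next comment.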

brianlmoon avatar brianlmoon commented on July 17, 2024

My only concern with this approach is that each retry will get a unique job id. This means that another job with the same $args parameters could get into the queue with a different retry count. You can't really avoid this, however: if you submit this job with the same unique id, it will not get worked, because this worker is already working on this job.

from net_gearman.
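The duplicate-job concern above comes down to how the unique id is derived. One way to see it: if the id were computed deterministically from the essential arguments (excluding the retry counter), every retry of the same logical job would map to the same id and gearmand would coalesce them. A hedged sketch, with `unique_id` as a hypothetical helper:

```python
import hashlib
import json

def unique_id(args):
    """Derive a deterministic unique id from the job's essential
    parameters, excluding the retry counter, so every retry of the
    same logical job maps to the same id. Hypothetical helper."""
    essential = {k: v for k, v in args.items() if k != "retries"}
    payload = json.dumps(essential, sort_keys=True)
    return hashlib.sha1(payload.encode()).hexdigest()

a = unique_id({"order_id": 42, "retries": 0})
b = unique_id({"order_id": 42, "retries": 1})
print(a == b)  # True: all retries collapse to one id
```

But as the comment notes, this cuts both ways: submitting a retry under the same unique id while the worker is still running the original means the retry coalesces with the in-flight job and never gets worked. Hence the per-retry unique ids, and the small window for duplicates with differing retry counts.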

bartclarkson avatar bartclarkson commented on July 17, 2024

Ok, cool. Yeah, it seems to work well. But I sure feel better having had someone in the Gearman game give it the old eyeball.

I wondered if you were going to say, "What you need to do is call die() right after you call ->xyz($args), to persist your changes to the arguments before causing the job to be retried." Or some kind of funky deal hinging on custom callback functions with the client. I was apprehensive about the complexities, and seriously loathe to mess with anything that my rather vanilla implementation of Gearman Manager is presently making simple (hat tip).

I think I'm safe from multiple (re)tries tied to the same essential result being pushed up to the client from multiple sources. The application produces the unique parameters a single time and simply waits for a database update (which is driven by a successful worker).

I understood the other issue's retries to be more of a global protection mechanism, and setting that on the gearmand daemon makes perfect sense. A vanilla health check watches for EC2 instances under the worker load balancer that exhibit the sorts of problems that would cause a true WORK_FAIL event.

Hope you have an awesome day. Thanks!

from net_gearman.
