
firebase-queue's Introduction

Important: Google Cloud Functions for Firebase

There may still be specific use cases for firebase-queue; however, if you're looking for a general-purpose, scalable queueing system for Firebase, building on top of Google Cloud Functions for Firebase is likely the better route.

Firebase Queue

A fault-tolerant, multi-worker, multi-stage job pipeline built on the Firebase Realtime Database.

Status

Status: Frozen

This repository is no longer under active development. No new features will be added and issues are not actively triaged. Pull Requests which fix bugs are welcome and will be reviewed on a best-effort basis.

If you maintain a fork of this repository that you believe is healthier than the official version, we may consider recommending your fork. Please open a Pull Request if you believe that is the case.

Getting Started With Firebase

Firebase Queue requires Firebase in order to sync and store data. Firebase is a suite of integrated products designed to help you develop your app, grow your user base, and earn money. You can sign up here for a free account.

Downloading Firebase Queue

You can download Firebase Queue via npm. You will also have to install Firebase separately (that is, they are peerDependencies):

$ npm install firebase firebase-queue --save

Documentation

Contributing

If you'd like to contribute to Firebase Queue, please first read through our contribution guidelines. Local setup instructions are available here.

firebase-queue's People

Contributors

abeisgoat, asciimike, christianalfoni, dreadjr, firebase-ops, ginovva320, m-tse, marcbachmann, mbleigh, mtsegoog, pdesgarets, rafalsobota, samtstern, startupandrew, tylermcginnis


firebase-queue's Issues

Routing tasks to a specific process for debugging

We have a Node.js app running on AWS with Firebase Queue listening to our Firebase. At times we want to debug things or develop new features, and it's easier to run the Node app locally to see log statements.

Is there any way to guarantee that tasks being sent to a queue will be handled by the local process of the running servers without killing the process on the servers? It would also be ok to completely re-route all queues to the local instance if doing just single queues isn't possible.

What exactly does "numWorkers" do?

I have a queue where many more tasks are being added than I expected, so I have a long backlog of tasks that is only getting bigger. That's when I found the "numWorkers" option on firebase-queue. I've now set it to 2, but I don't fully understand what it does.

I have my queue running on a hobby dyno on Heroku, but from what I know, you can't scale a hobby dyno. If I output "queue.getWorkerCount()", it still shows me 2 though. So what exactly is it that firebase-queue is scaling when I set "numWorkers" to 2 and when will I know when it has reached the limit for what my hobby dyno can take?
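For what it's worth, numWorkers controls how many workers a single Queue instance runs inside one Node.js process: it caps in-process concurrency, and is independent of dyno scaling. A minimal stand-in sketch of that semantics (no Firebase involved; runPool and its arguments are invented for illustration):

```javascript
// Stand-in for what numWorkers means: at most `numWorkers` tasks are in
// flight at once inside this single process. `handler` calls its callback
// when a task finishes, freeing a slot for the next queued task.
function runPool(tasks, numWorkers, handler, done) {
  var active = 0;
  var index = 0;
  var peak = 0; // highest concurrency observed

  function next() {
    while (active < numWorkers && index < tasks.length) {
      active += 1;
      peak = Math.max(peak, active);
      handler(tasks[index++], function resolve() {
        active -= 1;
        if (index === tasks.length && active === 0) {
          done(peak);
        } else {
          next();
        }
      });
    }
  }
  next();
}
```

With numWorkers set to 2, at most two tasks are processed concurrently no matter how the dyno is sized; whether the CPU can keep up with a given worker count is a separate question you'd answer by watching the dyno's load.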

Typescript Support

Would love to have support for typescript and typings for the library. Thanks!

What happens if the worker dies while processing?

If I run the test from the readme, and kill the node process between when it accepts the job and when it resolves it, it just sits in the tasks queue forever.

Is this acceptable behavior for a messaging system? It will collect garbage over time. I would expect a robust system to detect that the old job has expired and move it to an error state or something.

Asynchronous errors cause workers to hang in the "busy" state

If an asynchronous error is thrown, the worker stays in the "busy" state. Workers stuck in the "busy" state are not free to pick up more jobs off the queue.

Example 1 - synchronous error (works smoothly)

If a synchronous error is thrown, the job automatically (and correctly) gets rejected via reject() (thanks for the commit, @drtriumph!)

var queue = new Queue(ref, function(data, progress, resolve, reject) {
  // Read and process task data
  console.log(data);

  // synchronous error
  throw new Error("Something went wrong")
});

Example 2 - asynchronous error (worker hangs in the "busy" state)

If an asynchronous error is thrown the job hangs and never gets cleaned up.

var queue = new Queue(ref, function(data, progress, resolve, reject) {
  // Read and process task data
  console.log(data);

  // asynchronous error
  setTimeout(function(){
      throw new Error("Something went wrong")
  }, 100);
});

Possible solutions.

  1. Latch on to process.on("uncaughtException"). When the asynchronous exception is thrown somehow figure out what job it came from so that job can be rejected.
  2. In all async processing of a job the implementer should be diligent and make sure that resolve() or reject() is called in every possible situation.
  3. Create a new configuration for timeout_to_fail.
    • Currently, it looks like the timeout configuration re-queues the job, so in this scenario the job gets requeued and then the worker hangs again.
    • This new configuration, instead of re-queueing, could simply fail the job so that it doesn't get picked up again and cause another worker to hang.
    • This solution would also require renaming the current timeout configuration to timeout_to_requeue.
  4. Any other ideas?...

Option 2 is good practice anyway and should be done regardless, but there's still always the possibility of an uncaught exception happening and causing the worker to erroneously stay "busy".

How are folks dealing with this? Is there an easy solution that I'm missing?
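Option 2 above can be made mechanical by funnelling the task's work through a promise chain, so both synchronous throws and promise rejections end up in reject(). This is only a sketch around the firebase-queue handler signature; safeHandler and work are hypothetical names, and errors thrown from bare setTimeout callbacks will still escape unless the work itself is promise-based:

```javascript
// Wrap the processing function so that any synchronous throw or promise
// rejection inside `work` calls reject() instead of leaving the worker
// stuck in the "busy" state.
function safeHandler(work) {
  return function (data, progress, resolve, reject) {
    Promise.resolve()
      .then(function () { return work(data, progress); })
      .then(resolve, reject);
  };
}
```

Usage would look like `new Queue(ref, safeHandler(function (data) { ... }))`, with the processing body returning a promise for its async work.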

Enhancement - Time delay, process after time, or peek, function for queue.

Would love to see a feature that would allow a "process after" setting. This would be similar to the "peek" issue #28 that asks for some sort of pre-check to see if it can be fired off to a worker.

@drtriumph (Chris) I came across your other post in the Google group, and I'd have to say I'm VERY interested in this! Any idea where on the feature board this one may fall?
(https://groups.google.com/forum/#!msg/firebase-talk/suJTZXG3CgU/wNjfw3ZnCQAJ)

I'm a big fan of the queue; it's an awesome addition to the Firebase stack!

Mutate the task data when job fails

I am using Firebase Queue to submit to external systems. The problem I am running into is that if the task fails halfway through, some submissions have succeeded and others have not, and I am unable to update the task, through either reject or progress, to remove the completed parts and avoid duplicate requests.

I could submit to another Firebase queue that handles the submittal process, but I wanted to get your thoughts on that as well, since it seems to have the same problem.

Tasks left unprocessed when indexOn: ["_state"] enabled

I am having an issue where, with indexOn: ["_state"] enabled, tasks are left unprocessed until the process is restarted.

I created a simple reproducible repo with the code and package versions. Maybe I am doing something stupid; please let me know if you can reproduce it or what is wrong.

Is this repo dead?

Is this repository still maintained? The last issue responses and commits are from months ago. Any status update?

Graceful Shutdown

There should be a way to shut down a queue worker so it finishes its current task before destroying itself.

Task in queue expires immediately

I'm somewhat sure that this isn't an issue with firebase-queue but with my production environment. I'm documenting it here in case it helps others figure out why firebase-queue isn't working for them.

In queue_worker.js is this line:

var expires = Math.max(0, startTime - now + self.taskTimeout);

Since 'now' is later than 'startTime', the expression 'startTime - now' should be a small negative number. But on my production server it's a very large positive number, and all sorts of hell breaks loose because of it.

When we put firebase-queue into production this wasn't an issue. But something has changed and I'll be looking into what that might be.

I solved the issue with this hack, changing the code to:

var expires = self.taskTimeout;
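For anyone else hitting this: the original line computes the remaining time before the claim is considered expired, so it is sensitive to the offset between the clock that wrote startTime and the local clock. A small self-contained reproduction of the arithmetic (computeExpires is a hypothetical extraction of that one line):

```javascript
// The expiry window from queue_worker.js, isolated. startTime is the
// timestamp recorded when the task was claimed; now is the local clock.
// When the clocks agree, startTime - now is a small negative number and
// expires is slightly less than taskTimeout. If the local clock lags far
// behind the clock that wrote startTime, startTime - now becomes a large
// positive number and the timeout fires much later than intended.
function computeExpires(startTime, now, taskTimeout) {
  return Math.max(0, startTime - now + taskTimeout);
}
```

Clamping the value to self.taskTimeout, as in the hack above, ignores the clock offset entirely, which is why it works around the symptom.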

Workers intermittently stop pulling tasks off the queue

This is an issue I have seen twice now, and I don't exactly know why and I can't reliably replicate it. I apologize for not having more information or a reliable use case. If I can gather any more information I will add it.

What I have experienced is that I will have a worker process running and processing jobs like normal, and then all of a sudden when more jobs get added to the queue the workers, which are not actively processing any jobs will stop pulling down new jobs.

Any idea what might be going wrong here?

Node.js strategy : firebase-queue vs building API

Hello, I have a question about a general strategy for using Firebase on Node.js. I haven't seen many example applications of this aside from strategies involving firebase-queue. I want my application to scale when deploying to Google App Engine, which uses automatic scaling to add/remove CPUs as needed. I chose GAE because it sounded easy to deploy and scale Node.js apps. I am not a veteran Node.js developer, hence opting for PaaS solutions.

My backend does not currently need to do any blocking actions, in fact it just needs to perform Firebase operations like updating or transactions on a database reference (at least I believe these are non-blocking). I've chosen to use the firebase-queue anyway, since I really like the ease of pushing "requests" onto a tasks queue and have firebase-queue automatically delegate a worker to processing this task, across all my Node app instances.

I've stress tested firebase-queue and noticed that the queue can't keep up with heavy load, such as 1000 tasks/second. I understand this is because of the limitation of the transaction with the queue. So now I'm thinking that using firebase-queue for all my requests is not as efficient as building an API to handle my operations on a firebase database, such as the example below:
https://github.com/NickMoignard/node-firebase-restful/blob/master/server.js

I assume an API built like the above would route requests to only one instance of a Node application, which could then modify a Firebase reference. My assumption is this would be faster than a queue, at the tradeoff of holding up traffic if blocking requests happen (still assuming firebase operations are non-blocking). Is this correct? Is there a better strategy?

Bug in README.md under Queue Security

In README.md, under Queue Security, the validate rule under rules/queue/tasks contains

newData.hasChildren(['_state', '_state_changed', '_progress']) 

Notice the presence of the _progress property. However, in queue_worker.js, in _resetTask, we have:

if (task._state === self.inProgressState) {
  task._state = self.startState;
  task._state_changed = Firebase.ServerValue.TIMESTAMP;
  task._owner = null;
  task._progress = null;
  task._error_details = null;
  return task;
}

So, if the validate rule does contain the _progress property, the transaction in queue_worker will keep failing because _progress is set to null. We should either update the doc or update queue worker.
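If the doc is the piece to change, one sketch of a validate rule consistent with _resetTask would simply drop _progress from the required children (an assumption about the intended fix, not an official one):

```
newData.hasChildren(['_state', '_state_changed'])
```

With _progress no longer required, the reset transaction's task._progress = null write would pass validation.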

Firebase 3.x support?

I didn't see anything in the recent announcements concerning Firebase Queue. Is this going to continue to be supported?

Queue worker stops working when _state is indexed

Firebase Queue stops working when a queue has a few tasks sitting in it, the worker function is asynchronous, and the tasks node's _state is indexed. For example, I have the following example queue worker (taken from the README) using firebase version 2.4.0 and firebase-queue version 1.2.1:

var Queue = require('firebase-queue'),
    Firebase = require('firebase');

var ref = new Firebase('https://<your-firebase>.firebaseio.com/queue');
var queue = new Queue(ref, function(data, progress, resolve, reject) {
  // Read and process task data
  console.log(data);

  // Do some work
  progress(50);

  // Finish the task asynchronously
  setTimeout(function() {
    resolve();
  }, 1000);
});

The queue worker's function works fine when a single task is added, but when multiple are added the queue processes one or two of them then stops, although there are still a few in the queue. The same issue happens when 3 or more tasks are already in the queue before the worker is launched.

For example, if I have the following queue set up in my Firebase db:

"queue": {
  "tasks": {
    "keyName": {
      "fileName" : "images.zip"
    },
    "keyName2": {
      "fileName" : "database.db"
    },
    "keyName3": {
      "fileName" : "otherFile.txt"
    },
    "keyName4": {
      "fileName" : "config.xml"
    }
  }
}

With the following security rules:

{ 
  "rules": {
    "queue": {
      ".read": "true",
      "tasks": {
        ".indexOn": "_state",
        ".write": "true"
      }
    }
  }
}

When I start the worker, it will process 2 tasks, then sit idle. Even if I kill and restart the worker process, it won't process any more, as if there are no tasks to process. But it isn't frozen, because if I add a new task to the queue, it'll process 2 of the oldest tasks (not necessarily the most-recently added one), then idle again. After a few minutes of sitting it may eventually process the rest of the queue, but it seems inconsistent.

This behavior is only seen when the worker's callback function does something asynchronous. If I comment out the progress() or setTimeout() functions above, it'll churn through any number of tasks (I tested about 70 at once) without hanging.

Also, this behavior appears to be caused by having an index on the queue. If I remove ".indexOn": "_state" from the queue's security rules, the worker immediately continues processing the remaining items in the queue.

For now I've removed the index, and everything works fine. Except for the fact that I'm always warned that I should have an index on my queue whenever I launch the application.

Retrieve job id from within callback

I personally end up using it sometimes in my Firebase workers. It's definitely possible to achieve the same results differently, so I'd understand if not enough people are interested in this!

Authorization rules in bolt

We're using the Bolt compiler for our authorization rules. Can we get a port of the authorization rules to Bolt, please?

Handling tasks multiple times without chaining specs?

Is there any way to create a single task and then handle it with multiple distinct workers without chaining specs?

For a simple example, in a messaging app, let's say I want to:

  • Fan out the message
  • Send a push notification to a user's device with the message.
  • Send an email to the user with the message.

Each of these should occur in its own separate node.js process. It doesn't matter what order they occur in.

I know that it's possible to chain specs (i.e. spec_1's finished_state is spec_2's start_state). And I have a simple example working this way.

For my use case, I would potentially have 10-12 distinct workers for a single task. My concerns are: 1) chaining specs will cause a lot of complexity, and 2) if one of the processes is killed or restarted, the following workers will be delayed when they don't have to be, because they have no dependence on the previous workers.

I have tried the following:

  • Chaining specs. It works, but like I said it's sub-optimal for my use case.
  • Creating multiple specs with start_state set to null, then running a separate process for each spec (i.e. process 1 takes spec_1, process 2 takes spec_2). Still, only one of the processes picks up the task, and it's now random.

I know it doesn't work this way, but an optimal solution would be as follows:

  • have a spec for each process: fan_out, push_notification, email_notification.
  • define a queue with specId fan_out, a queue with specId push_notification, and a queue with specId email_notification.
  • any time a task is pushed, all three queues can handle the task.
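In the meantime, the behavior described above can be approximated client-side by writing one independent copy of the payload per queue location, so each worker pool owns its own task and nothing races. A minimal in-memory sketch (fanOut, the queue names, and the path layout are all hypothetical):

```javascript
// Build one independent task write per queue. In a real app each entry
// would be pushed to its own tasks ref (e.g. ref.child(entry.path).push(
// entry.task)), so the three worker pools never compete for one task.
function fanOut(payload, queueNames) {
  return queueNames.map(function (name) {
    return {
      path: name + '/tasks',
      task: Object.assign({}, payload) // independent copy per queue
    };
  });
}
```

The cost is N writes per logical task instead of one, but each pipeline then progresses (or retries) independently of the others.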

Thanks for building an awesome product!

Remove failed jobs after a certain amount of time

I like how failed jobs are still in the queue, so I can debug those failed jobs. I have a good deal of them though, so it would be nice if the failed jobs deleted themselves after 48 hours or so. Is there any way to do this?
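There's no built-in expiry, but a periodic cleanup job can do it: read the tasks node, pick the entries in the error state whose _state_changed is older than the cutoff, and remove them. A sketch of the selection step (staleErrorKeys and the 'error' state name are assumptions; in a real app you'd call ref.child(key).remove() on each result):

```javascript
// Given the tasks node as a plain object, return the keys of tasks that
// are in the error state and whose last state change is older than
// maxAgeMs. 48 hours is 48 * 60 * 60 * 1000 ms.
function staleErrorKeys(tasks, errorState, maxAgeMs, now) {
  return Object.keys(tasks).filter(function (key) {
    var task = tasks[key];
    return task._state === errorState && now - task._state_changed > maxAgeMs;
  });
}
```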

Reference Samples, Firebase Queue Patterns

It would be really stellar to have some reference samples for firebase-queue, or maybe a Firebase Patterns cookbook, to reduce the learning curve for new Firebase developers such as myself.

Does anyone have a Heroku app starter or an App Engine starter they'd be willing to share and/or contribute to a repo of such examples?

Queue dies or doesn't process all tasks when using Node 0.11.x

Hi there,

so far I've been unable to get even the most basic queue working properly: see the following CoffeeScript code:

Firebase = require 'firebase'
Queue = require 'firebase-queue'

ref = new Firebase("https://myhost.firebaseio.com/testQueue")

# set up queue and listen for messages
queue = new Queue ref, (data, progress, resolve, reject) ->
  console.log "packet on queue: "
  console.dir data
  resolve()

# post 10 messages to the queue
for num in [1..10]
  ref.child('tasks').push({ msg: 'this is message ' + num })
  console.log 'added message ' + num

this usually outputs the following:

added message 1
added message 2
added message 3
added message 4
added message 5
added message 6
added message 7
added message 8
added message 9
added message 10
packet on queue:
{ msg: 'this is message 1' }
packet on queue:
{ msg: 'this is message 3' }
packet on queue:
{ msg: 'this is message 4' }
packet on queue:
{ msg: 'this is message 5' }
packet on queue:
{ msg: 'this is message 6' }
packet on queue:
{ msg: 'this is message 7' }
done.

Yes, it usually skips message 2 and typically leaves 4-6 messages on the queue that it never picks up.
Am I missing something here, or is this library plain broken?
thanks
Peter

Tasks with start_state=null not working

I am having problems with the library using the getting-started example. The queue worker doesn't receive more than 2 or 3 tasks, and only one or two are processed. If I don't index _state, then it works.

I realised that the problem was with doing orderBy('_state').equalTo(null).on('child_added', ...). For some reason it doesn't work any more; it only brings back one or two new children.

If I set the spec with start_state: 'something' and I create tasks with '_state': 'something' it works like a charm.

Can someone try the example in the readme with '_state' indexed, to see if I am the only one? I have other projects in production with firebase-queue and for some reason all of them are working.

Chaining three tasks

I am trying out firebase-queue and mostly enjoying the experience so far :).

I am experiencing one issue though. When I chain tasks, say:

{
  "task_a": {
    "error_state": "queue_error_queue",
    "finished_state": "task_b",
    "in_progress_state": "task_a_in_progress",
    "retries": 5,
    "start_state": "task_a",
    "timeout": 300000
  },
  "task_b": {
    "error_state": "queue_error_queue",
    "finished_state": "task_c",
    "in_progress_state": "task_b_in_progress",
    "start_state": "task_b",
    "timeout": 5000
  },
  "task_c": {
    "in_progress_state": "task_c_in_progress",
    "start_state": "task_c",
    "timeout": 5000
  }
}

When I start with task_a, task_a and task_b run as I expect. When I start with task_b, task_b runs as expected, but task_c does not.

The task is left in this state:

{
  "_progress": 100,
  "_state": "task_c",
  "_state_changed": 1438093533931,
  "myData": { content: 'removed' }
}

If I restart the server, the task runs to completion as I expect.

Any suggestions please?

Tasks Not Disappearing

I am calling the resolve function, but the task seems to be reclaimed instantly by the worker even after it has already been completed. In other words, the tasks never complete - they keep happening over and over infinitely... I have added a screenshot of my spec if that helps

Implementing auth.canAddTasks, auth.canProcessTasks, auth.canAddSpecs

I've implemented JWT on my server with admin:true. I'd prefer to use the custom token fields for more restricted access. There is mention in the README of custom tokens:
auth.canAddTasks, auth.canProcessTasks, auth.canAddSpecs

If someone has an example of these, specifically how the token field is connected with a user id, I'd really appreciate it.

Cheers!

How do I monitor load and scale?

Hi, I asked this question on one of the examples @drtriumph provided.

I think I'd use the firebase-queue to handle incoming messages (like the sanichat example). Except I want them to get handled after exactly 60 seconds from when they are "published" (from that example). So as long as a worker gets to it within the 60 seconds it can run a timeout for the time left, process it, and resolve it.

But, I'm wondering how to know when is too many workers for the cpu. Because that's the number of workers I'd use.

Then, I'd set the cpu threshold to something less than that - when it gets reached, Azure will add another vm with the same number of workers...

Any ideas? Go Firebase!
(Label this a question of course)

Scaling horizontally with more servers => Slow down to pull jobs off the queue

When increasing the number of servers processing jobs, it feels like I'm seeing a dramatic slowdown in the workers pulling jobs off the queue when there are a lot of jobs (~700) queued at the same time.

  • Normal queue processing jobs immediately without heavy load
  • Scale up ~20 servers (2x heroku worker dynos)
  • numWorkers option on each server is set to 50

Result
Seeing a dramatic slow down from when jobs get added to the queue, to the time that jobs get pulled off.

My best guess here is that an individual server is not able to pull down the job until the other servers (which are all trying to read and write it at the same time) know about it to resolve which worker claims it.

Inability of worker to lock/claim task --- infinite loop

We've been using firebase-queue for a while now. We saw some odd behavior in production last night and haven't been able to reproduce locally in development.

Our queue workers appeared to be in an infinite loop, processing the same queue task over and over again. I watched on the Firebase dashboard as the same task turned yellow (claimed), and then green again (as if recreated from scratch), repeatedly.

I was able to resolve the issue by clearing the tasks from the queue and doing multiple server restarts of the code running node with the queue workers.

I watched the problem occur on two separate queues. The queue worker code for the two queues is different and separate, and it has been stable.

If you have any ideas for what to look into or how to reproduce this, please let me know.

peek

It would be nice if a worker could peek at a task before actually trying to process it, i.e. to check whether or not it can process it.

Feature request: Add a "Does not Inherit" clause in docs for spec

Just a thought here, if it's possible: add a "does not inherit" clause in the docs for the specs.

I ran into some trouble; I assumed that if defaults were not specified,

Say my spec was:

 "spec_1": {   
    "timeout": 10000
  }

I assumed it would translate to:

  "spec_1": {
    "start_state": null,
    "in_progress_state": "in_progress",
    "finished_state": null,
    "error_state": "error",
    "timeout": 300000, // 5 minutes
    "retries": 0 // don't retry
  }

But the defaults (as far as I could tell) are not appended to the object that I create. Just a thought, and I could be completely off base, but I thought I would at least raise the question!

Worker is great, thank you for the great product!

Reject Continues

I've noticed calling reject() doesn't end the worker/queue instantly.
For example:

if(!myVar){
    reject();
}
console.log("my var exists if you got this far");

Having read and re-read the firebase-queue guide, this wasn't obvious to me. I've gotten around it by not relying on reject()

obvious example:

if(!myVar){
    reject();
} else {
    console.log("my var exists if you got this far");
}

Perhaps this was never the intention of reject(). But it may be useful to add or highlight this in the guide?

Currently on the following, if relevant:

"firebase": "^3.4.1",
"firebase-queue": "^1.5.0"
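To make the observation above concrete: reject() only marks the task as errored; it does not stop the handler function, so `return reject(...)` (or the else branch shown) is the usual idiom. A self-contained sketch with stub callbacks standing in for the real firebase-queue ones:

```javascript
// reject() flags the task but does not interrupt the function, so the
// handler must return (or branch) explicitly after calling it.
function handler(data, progress, resolve, reject) {
  if (!data.myVar) {
    return reject('myVar missing'); // without `return`, execution continues
  }
  resolve('my var exists if you got this far');
}
```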

The in_progress state

We had some weird bugs in our application and narrowed it down to having multiple specs with the same in_progress ID.

What happened is that randomly, often related to errors in our code, the first registered Queue would pick up tasks that it should not (based on its start_state).

It seems that the in_progress state ID affects which queue handles a task. I can't find any documentation on how this happens or on how the in_progress state might be picked up by queues.

Would love some information on this and would be happy to contribute to docs, just not sure how this actually works :-)

Possible bug in QueueWorker.prototype._reject

QueueWorker.prototype._reject creates a function that takes a parameter "error".
The transaction complete handler masks "error" with its own "error" which is passed to the recursive attempt upon failure.

I would expect the behavior to keep trying to log the original error instead of the failed transaction attempt.

Is this intended or just a bug?

Possible to grab the taskId generated by Firebase?

I'd like to leverage the unique ID created by Firebase when pushing a task onto the queue. Is it possible to extract this from the data on the worker side, or does it need to be explicitly added to the data payload?

Better handling of malformed queue items

Yesterday, I purposely created a malformed queue item, hoping it would be ignored, in order to keep the queue subtree visible within the Firebase interface. Otherwise the queue subtree would disappear, and I'd have to manually recreate it (just one extra annoying step) whenever I wanted to manually add new queue entries. It looked like this:

{
  "backups": {
    "backup-metadata": {
      ...
    },
    "jobs": {
      ...
    },
    "queue": {
      "placeholder": "placeholder"
    }
  }
}

I don't have the exact error, but it should be easy to reproduce. What basically happened was that all my workers kept trying to process this malformed queue entry over and over again and failed, causing CPU/network/disk space usage to spike. Eventually the workers started dying off (not at the same time; they died off hours apart over the course of 24 hours), presumably due to some race condition. I was notified of this error when all the workers had died and the Nagios check warned me.

In the event of malformed entries being put into the queue, I think we should handle them a bit more gracefully.

Semantics issue

It looks like creating a new Queue does not create separate processes for each worker; it's all still a single Node.js process. So you can set your Queue to have 100 workers, but it's all in one Node.js process.

But in the docs it says:

The basic unit of the queue is the queue worker: the process that claims a task, performs the appropriate processing on the data, and either returns the transformed data, or an appropriate error.

This doesn't really explain how a worker works in this library. My only guess is that the number of workers dictates the maximum amount of work that can happen concurrently. My beef is that using the word "process" suggests separate Node.js processes, but that's not the case? Or is it?

Timedout Queues

Setting numWorkers to 100 and running tasks until they time out appears to not release the queues. I haven't isolated code to prove it yet, but I wanted to ask for more detail about how timeouts are intended to be used.

Is the above by design? Should I be manually timing out my tasks and resolving, like the guide example does, or is this a bug?

Queue does not throw an error if it fails to connect to the provided reference.

If you pass in a Firebase ref that, for instance, doesn't have the correct read permissions when instantiating a Queue, it does not throw; it just logs the error. There's really no way of responding to this.

https://github.com/firebase/firebase-queue/blob/master/src/lib/queue_worker.js#L598

I can't see a way to tell programmatically if a queue worker subscription has been successfully established.

Should there be a function alongside Queue.shutdown, such as Queue.status(), that returns a promise which is resolved once all workers have successfully subscribed to the ref, or rejected if subscription fails?

I realise the QueueWorker would have to have a status function that returns its connection state as well.

A less drastic change would be that the workers are not spawned until the ref connection is tested.

I'm happy to take a crack at a PR for this if you guys have suggestions?

A breaking change suggestion would be that Queue has a function called connect() or spawn(). That you call once you instantiate the object.

Cheers

Task was malformed error

Hi,

I am getting this error in the queue worker

FIREBASE WARNING: Using an unspecified index. Consider adding ".indexOn": "_state" at /queue/tasks to your security rules for better performance
FIREBASE WARNING: Using an unspecified index. Consider adding ".indexOn": "_state" at /queue/tasks to your security rules for better performance

I am following the guide. Not sure what I have done wrong.

This is a snapshot of the queue (screenshot attached).

UPDATE: I got past the permission warning, but the task is still reported as malformed.

Here is the stack trace

Error: Task was malformed
    at Object.update (/Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase-queue/dist/lib/queue_worker.js:446:27)
    at ei (/Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase/database-node.js:217:395)
    at U.h.transaction (/Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase/database-node.js:232:466)
    at /Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase-queue/dist/lib/queue_worker.js:439:30
    at c (/Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase/database-node.js:153:58)
    at /Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase/database-node.js:144:614
    at Qb (/Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase/database-node.js:43:165)
    at sc (/Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase/database-node.js:31:216)
    at rc (/Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase/database-node.js:30:1104)
    at yg (/Users/kanishkanagaraj/JeetLab/BetMe/firebase-queue/node_modules/firebase/database-node.js:215:313)

Version info

Firebase: 3

Firebase Queue: latest

Node.js: latest

Other (e.g. operating system) (if applicable): mac os x


Forceful shutdown (Question)

Hi Firebase queue,
Is it possible to forcefully shutdown a queue?
If not, do you have a recommendation?

I'm running tasks, but they aren't typical linear tasks, as they never call resolve(). I use them to watch a shopping cart and make adjustments. However, when receiving SIGINT, I'd like these tasks to revert back to 'new', preferably by calling something like queue.forceShutdown(), allowing another instance/queue to pick these tasks up.

Really the only reason I want this is the long 'x'-minute wait on queue.shutdown(), which delays shutting down my instances on each redeploy. I could make the timeout shorter, but I think I would prefer to just kill the queue instantly for this particular type of task. Any recommendations would be welcome; let me know if I should move this to SO. Regards, Alan

Process never terminates.

Version info

Firebase: 2.4.2

Firebase Queue: 1.3.1

Node.js: 5.11.0

Other (e.g. operating system) (if applicable): OS X 10.11.5

Test case

var Firebase, Queue, queue;
Firebase = require('firebase');
Queue = require('firebase-queue');
queue = new Queue(new Firebase('https://fir-queue-forever.firebaseio.com'), function() {});
queue.shutdown().then(function() {
  return console.log('shut down');
});

Steps to reproduce

Run script above from a file with node on the command line.

Expected behavior

The process should terminate after shutdown.

Actual behavior

The process hangs forever.
