
Comments (22)

lukesneeringer commented on June 12, 2024

From @stephenplusplus on September 14, 2016 21:10

We do use an exponential backoff retry strategy before calling it a failure and returning an error. What errors are you getting? How often are you calling the API, and are you waiting for a response before calling again?

You can pass a "maxRetries" number in the Bigtable constructor. The default is 2.

bigtable({ projectId: '...', maxRetries: 5 })
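
For illustration, the retry behavior described above can be sketched at the application level as follows. This is a minimal sketch, not the library's actual implementation: the base delay of 1s doubling per attempt is an assumption, and `withRetries` is a hypothetical helper name.

```javascript
// Sketch of exponential-backoff retries; the delay formula is an
// illustrative assumption, not nodejs-bigtable's actual parameters.
function backoffDelayMs(attempt, baseMs = 1000) {
  return baseMs * Math.pow(2, attempt); // 1s, 2s, 4s, ...
}

async function withRetries(fn, maxRetries = 2) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Wait before the next attempt.
        await new Promise(resolve => setTimeout(resolve, backoffDelayMs(attempt)));
      }
    }
  }
  // All attempts failed; surface the last error.
  throw lastError;
}
```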

from nodejs-bigtable.

lukesneeringer commented on June 12, 2024

From @arbesfeld on September 14, 2016 21:44

Thanks, I'll try out maxRetries.

Here is the stacktrace:

Error: Secure read failed
  File "/app/packages/@apphub:logrocket-server-storage-bigtable/node_modules/grpc/src/node/src/client.js", line 189, in ClientReadableStream._emitStatusIfDone
    var error = new Error(status.details);
  File "/app/packages/@apphub:logrocket-server-storage-bigtable/node_modules/grpc/src/node/src/client.js", line 158, in ClientReadableStream._readsDone
    this._emitStatusIfDone();
  File "/app/packages/@apphub:logrocket-server-storage-bigtable/node_modules/grpc/src/node/src/client.js", line 229, in readCallback
    self._readsDone();

lukesneeringer commented on June 12, 2024

From @stephenplusplus on September 14, 2016 21:59

Interesting. Not sure what that error is. We only retry after certain error types, and I'm not sure this is one we would retry. @lesv @murgatroid99 have you heard of this one?

lukesneeringer commented on June 12, 2024

From @stephenplusplus on September 15, 2016 20:18

I found @murgatroid99's comment on this issue, which says that this is a 503, so we do in fact retry on this error. maxRetries should work in this case in place of writing your own retry logic.

lukesneeringer commented on June 12, 2024

From @arbesfeld on September 19, 2016 18:09

Hi, I'm still seeing this error message come up with both maxRetries: 6 and a custom retry wrapper around the getRows method.

Is there any more data that I could collect which would help identify the problem?

lukesneeringer commented on June 12, 2024

From @stephenplusplus on September 19, 2016 18:12

Can you either show code or estimate how many requests you're making at once? Since this is a 503, it's either too many requests at once (so the server needs a break) or the upstream API is actually broken in some way.

lukesneeringer commented on June 12, 2024

From @arbesfeld on September 19, 2016 18:23

Absolutely, here is a snippet of the relevant code:

return new Promise((resolve, reject) => {
  this.table.getRows({
    decode: false,
    start: 'foo|',
    end: 'foo||',
    filter: [{
      column: {
        cellLimit: 1,
      },
    }],
  })
    .on('error', reject)
    .on('data', row => {
      processRow(row);
    })
    .on('end', () => {
      resolve();
    });
});

There could be potentially ~1000 rows in any given key range.

We only have one server node making these requests at any given moment of time.

Perhaps there is a better way to handle error here instead of immediately rejecting?

lukesneeringer commented on June 12, 2024

From @stephenplusplus on September 23, 2016 14:39

@callmehiphop when you have a chance, would you mind trying to recreate this scenario?

lukesneeringer commented on June 12, 2024

From @mbrukman on November 24, 2016 4:27

FWIW, this behavior (partial failure in batch operations) is expected from the Bigtable perspective. A bulk read or write operation can affect many rows, and what can happen is that some of the reads or writes will succeed, while others may fail (because different parts of the bulk request may go to different backing Bigtable servers, of which some may be busy, unavailable, or simply timeout) — Bigtable does not provide atomicity guarantees across multiple rows, so any single operation within the batch can succeed or fail independently of any others.

However, these are typically not permanent errors, so they should be retried; but as an optimization, rather than retrying the entire batch request, the client library needs to iterate over the response statuses and retry only the entries that were marked as having failed or timed out. This is precisely what we do in other Bigtable client libraries.

The upside is that even with the occasional retries, the overall performance is much higher than with a single read or write operation per API call.

/cc: @garye, @sduskis
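
The per-entry retry selection described above could be sketched as follows. The response shape (a flat array of per-row statuses carrying gRPC codes) is a simplified assumption for illustration, not the library's actual message format, and `entriesToRetry` is a hypothetical helper.

```javascript
// Given the mutations sent in a bulk request and the per-row statuses that
// came back, keep only the entries that failed with a retryable code.
// Code 0 = OK, 4 = DEADLINE_EXCEEDED, 14 = UNAVAILABLE (standard gRPC codes).
const RETRYABLE_CODES = new Set([4, 14]);

function entriesToRetry(mutations, statuses) {
  return mutations.filter((_, i) => {
    // Treat a missing status as UNAVAILABLE and retry it.
    const code = statuses[i] ? statuses[i].code : 14;
    return code !== 0 && RETRYABLE_CODES.has(code);
  });
}
```

A non-retryable failure (e.g. code 5, NOT_FOUND) would be surfaced to the caller rather than retried.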

lukesneeringer commented on June 12, 2024

From @arbesfeld on December 11, 2016 16:58

@mbrukman @stephenplusplus given this, what is the recommended approach here? Is the user responsible for handling this retry logic?

lukesneeringer commented on June 12, 2024

From @sduskis on December 11, 2016 17:01

Java and Go both have automated retries. Retries are nuanced for long-running scans and bulk writes.

lukesneeringer commented on June 12, 2024

From @arbesfeld on December 11, 2016 17:05

@sduskis I see, so for now we might have to include retry logic in our calls with this nodejs library? Is this expected for both bulk read calls and streaming reads?

lukesneeringer commented on June 12, 2024

From @stephenplusplus on December 12, 2016 14:04

You are free to implement this in your application, but it's something we will eventually support in this library.

lukesneeringer commented on June 12, 2024

From @arbesfeld on December 12, 2016 15:55

How does this work for a streaming application? Should we restart the stream at the failed point?

lukesneeringer commented on June 12, 2024

From @garye on December 13, 2016 0:02

@arbesfeld @stephenplusplus Yes, for streaming reads it's best to restart the stream after the last successfully received row. For multi-row mutations that call mutate_rows under the hood, only mutations that received an error should be retried.

As @stephenplusplus said, "smart" retries should definitely be handled in the library (should I create a separate issue to track that?). To make that effort a bit easier for node and other languages I'm putting together a little server that can be used to validate client retry behavior. I still need to push that out to a public place but, in the meantime, you can look at the test script to get some idea of what it will be testing:

https://gist.github.com/garye/e7f4fa9694dd5b04580aa7cdd6adf16f

You can also consult the java or go client retry logic, such as:
https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/bigtable/bigtable.go#L149
https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/bigtable/bigtable.go#L556
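
The resume-from-last-row strategy described above could be sketched like this. It is a minimal sketch under assumptions: `makeStream` stands in for a call to `table.createReadStream`, `readAllRows` is a hypothetical helper, and a real implementation would also need to make `start` exclusive of the already-seen key rather than re-reading it.

```javascript
// Read all rows, remembering the last row key received; on an error,
// restart the stream from that key instead of from the beginning.
function readAllRows(makeStream, onRow, maxRestarts = 3) {
  return new Promise((resolve, reject) => {
    let lastKey = null;
    let restarts = 0;
    function start() {
      makeStream(lastKey)
        .on('data', row => {
          lastKey = row.id; // row.id holds the row key
          onRow(row);
        })
        .on('error', err => {
          // Retry from the last successfully received row, up to a limit.
          if (restarts++ < maxRestarts) start();
          else reject(err);
        })
        .on('end', resolve);
    }
    start();
  });
}
```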

lukesneeringer commented on June 12, 2024

From @arbesfeld on December 16, 2016 3:56

We are having a bit of difficulty implementing this at the application level, since it seems like we are just getting thrown a generic error, so we end up having to retry the entire read.

@stephenplusplus happy to make a contribution here if it makes sense, though I could use a bit of direction as to where to start looking.

lukesneeringer commented on June 12, 2024

From @arbesfeld on December 17, 2016 17:58

Alternatively, some recommendation for how to handle this at the application level would also be greatly appreciated.

We are currently doing something like this:

return new Promise((resolve, reject) => {
  eventsTable
    .createReadStream({
      decode: true,
      start: 'foo',
      end: 'bar',
    })
    .on('data', function handleRow(row) {
      // process row
    })
    .on('error', reject)
    .on('end', () => {
      resolve();
    });
});

Would it work to just wrap this in a try/catch and then restart from the last-seen row? It's hard to reproduce the Bigtable failure, so we have no idea whether our approach is working.

lukesneeringer commented on June 12, 2024

From @arbesfeld on January 30, 2017 15:40

Hi @callmehiphop any updates on this issue? I would be happy to submit a PR if you wouldn't mind pointing me to where I should address the issue.

lukesneeringer commented on June 12, 2024

From @arbesfeld on January 30, 2017 15:49

At the very least, we'd like to be able to handle this at the application level.

lukesneeringer commented on June 12, 2024

From @callmehiphop on February 8, 2017 20:04

@arbesfeld sorry, we've been pretty busy with other items, but I'm going to try and get on this within the next week or so.

lukesneeringer commented on June 12, 2024

From @arbesfeld on March 3, 2017 15:12

@callmehiphop sorry to keep bugging you. I'd be happy to take a look if you could give me a bit of direction on the implementation :-)

sduskis commented on June 12, 2024

I'm going to be writing a design doc for Cloud Bigtable resumable reads and partial failures of bulk writes. It's also worth having an offline discussion for these features.
