googleapis / nodejs-bigtable
Node.js client for Google Cloud Bigtable: Google's NoSQL Big Data database service.
Home Page: https://cloud.google.com/bigtable/
License: Apache License 2.0
Reading with multiple prefixes is a common use case in our project. Currently we do this by using the internal createPrefixRange_ method to build an array of ranges. Could a prefixes option be a small enhancement to the API?
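A minimal sketch of the difference, assuming a hypothetical prefixes option; prefixToRange is an illustrative stand-in for what the internal createPrefixRange_ helper does:

```javascript
// Illustrative only: `prefixes` is the proposed option, not an existing API.
// prefixToRange mimics what createPrefixRange_ does internally: the end key
// is the prefix with its last character incremented.
// (Simplified; the real helper also handles edge cases like a 0xff byte.)
function prefixToRange(prefix) {
  const end =
    prefix.slice(0, -1) +
    String.fromCharCode(prefix.charCodeAt(prefix.length - 1) + 1);
  return {start: prefix, end};
}

// What we do today:
const ranges = ['user#', 'admin#'].map(prefixToRange);
// table.createReadStream({ranges});

// What the proposed option could look like:
// table.createReadStream({prefixes: ['user#', 'admin#']});
```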
The autoCreate option does not work for column families
OS: macos 10.13.3
Node.js version: 8.10.0
npm version: 5.6.0
yarn version: 1.3.2
google-cloud-node version: master branch (0.13)
const table = bt.table('TestAutoCreate');
await table.get({ autoCreate: true });
const fam = table.family('AutoCreateFamily');
await fam.get({ autoCreate: true });
FamilyError: Column family not found: projects/teo-dev-de/instances/teo-dev-de/tables/TestAutoCreate/columnFamilies/AutoCreateFamily.
at /Users/moander/dev/uzi/api/node_modules/@google-cloud/bigtable/src/family.js:355:17
at /Users/moander/dev/uzi/api/node_modules/@google-cloud/bigtable/src/table.js:814:5
at Immediate._onImmediate (/Users/moander/dev/uzi/api/node_modules/@google-cloud/bigtable/src/table.js:871:16)
Branch | Build failing 🚨 |
---|---|
Dependency | uuid |
Current Version | 3.2.0 |
Type | devDependency |
This version is covered by your current version range and after updating it in your project the build failed.
uuid is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
Currently if we add a boolean or double we lose information on the bigtable side since we can't properly encode the data. Some exploration needs to happen regarding how to go about resolving this issue.
Some ideas
const bigtable = new Bigtable({
bufferConverter: new ThingWithEncodeAndDecodeMethod(),
});
....
table.insert({
key: `someKey`,
data: {
[COLUMN_FAMILY_NAME]: {
foo: 1.23,
bar: true
},
},
})
const coder = new ThingWithEncodeAndDecodeMethod()
table.insert({
key: `someKey`,
data: {
[COLUMN_FAMILY_NAME]: {
foo: coder.encode(1.23),
bar: coder.encode(true)
},
},
});
const [rows] = await table.getRows();
for (const row of rows) {
const foo = coder.decodeDouble(row.data.foo)
const bar = coder.decodeBoolean(row.data.bar)
}
Hey @kolodny, can you take a look at these issues?
From https://circleci.com/gh/googleapis/nodejs-bigtable/1109:
> @google-cloud/[email protected] presystem-test /home/node/project
> git apply patches/patch-for-v4.patch || git apply patches/patch-for-v6-and-up.patch || true
error: node_modules/through2/node_modules/readable-stream/node_modules/process-nextick-args/index.js: No such file or directory
error: patch failed: node_modules/process-nextick-args/index.js:5
error: node_modules/process-nextick-args/index.js: patch does not apply
/home/node/project/system-test/read-rows-acceptance-tests.js:27
const builder = ProtoBuf.loadProtoFile({
^
TypeError: ProtoBuf.loadProtoFile is not a function
Currently we default the timestampMicros to -1 when no value is provided. This can cause unexpected values when retries happen, since rows that should have been inserted at the same time will have different timestamps.
Instead that line should change to timestampMicros: timestamp || new Date(),
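A sketch of the idea, with setCellTimestamp as a hypothetical helper name: resolve the timestamp once on the client when the entry is built, so retries reuse it instead of the server assigning a new one per attempt.

```javascript
// Sketch only: setCellTimestamp is an illustrative helper, not library code.
// timestampMicros: -1 means "server-assigned", which differs on every retry;
// defaulting to a client-side Date keeps retried entries consistent.
function setCellTimestamp(timestamp) {
  return {timestampMicros: timestamp || new Date()};
}

const entry = setCellTimestamp(undefined);
// entry.timestampMicros is now a concrete Date, stable across retries.
```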
From @arbesfeld on September 14, 2016 20:58
My calls to the Bigtable getRows() method fail intermittently, so I have had to wrap all of these methods in retry blocks. I was wondering:
Thanks!
Copied from original issue: googleapis/google-cloud-node#1595
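For context, the retry blocks mentioned above are along these lines (a generic sketch, not the library's built-in retry logic):

```javascript
// Generic retry wrapper with exponential backoff (illustrative).
async function withRetries(fn, attempts = 3, delayMs = 200) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Back off before the next attempt: delayMs, 2*delayMs, 4*delayMs...
      await new Promise(resolve => setTimeout(resolve, delayMs * 2 ** i));
    }
  }
  throw lastErr;
}

// Usage: const [rows] = await withRetries(() => table.getRows());
```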
The instance admin API allows the creation of AppProfileIds. We need to be able to access that functionality from instance.js.
Branch | Build failing 🚨 |
---|---|
Dependency | sinon |
Current Version | 4.2.2 |
Type | devDependency |
This version is covered by your current version range and after updating it in your project the build failed.
sinon is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
The new version differs by 7 commits.
b5968ab
Update docs/changelog.md and set new release id in docs/_config.yml
9cbf3f2
Add release documentation for v4.2.3
45cf330
4.2.3
8f54d73
Update History.md and AUTHORS for new release
a401b34
Update package-lock.json
a21e4f7
Replace formatio with @sinonjs/formatio
f4e44ac
Use comments in pull request template to get better descriptions with less template text
See the full diff
There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
I'm seeing this result. bool and bool_t should be true, bool_f should be false, and dec should be 1.23
{
"id": "gwashington",
"data": {
"fam1": {
"bin": [
{
"value": "abc",
"labels": [],
"timestamp": "1523953458013000"
}
],
"bool": [
{
"value": "",
"labels": [],
"timestamp": "1523953458013000"
}
],
"bool_f": [
{
"value": "",
"labels": [],
"timestamp": "1523953458013000"
}
],
"bool_t": [
{
"value": "",
"labels": [],
"timestamp": "1523953458013000"
}
],
"dec": [
{
"value": 1,
"labels": [],
"timestamp": "1523953458013000"
}
],
"jadams": [
{
"value": 1,
"labels": [],
"timestamp": "1523953458013000"
}
],
"str": [
{
"value": "hello",
"labels": [],
"timestamp": "1523953458013000"
}
]
}
}
}
const entries = [
{
method: 'insert',
key: 'gwashington',
data: {
fam1: {
jadams: 1,
bool: true,
bool_t: true,
bool_f: false,
dec: 1.23,
str: "hello",
bin: Buffer.from('abc'),
}
}
}
];
await table.mutate(entries);
I suggest changing the behaviour of deleteRows to make it harder to delete all rows in a table by mistake.
We just upgraded to 0.13 from 0.11 and are seeing a regression with decode: true.
One of our numeric values is being decoded while in a ROW_IN_PROGRESS state in the ChunkTransformer here. Passing isPossibleNumber: true fixes the problem, but I'm not sure what other side effects that might have.
Happy to outline more of what we're doing if that helps narrow it down. Thanks!
Hello,
I am replicating a Bigtable instance. My script inserts new rows after making some changes to the data from the old version to the new one.
The problem is that after a lot of insertions, I am getting the following error:
{ Error: 4 DEADLINE_EXCEEDED: Deadline Exceeded
at createStatusError (/node_modules/grpc/src/client.js:64:15)
at ClientReadableStream._emitStatusIfDone (/node_modules/grpc/src/client.js:270:19)
at ClientReadableStream._receiveStatus (/node_modules/grpc/src/client.js:248:8)
at /node_modules/grpc/src/client.js:749:12
code: 4,
metadata: Metadata { _internal_repr: {} },
details: 'Deadline Exceeded' }
Am I inserting too many rows at the same time? Do I need to set a timeout after a certain number of insertions?
Environment:
NodeJs: 8.9.4
google-cloud/bigtable: 0.13.1
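One thing worth trying (a sketch, under the assumption that the inserts are currently fired without awaiting): insert in bounded batches and await each one, so all the mutations aren't in flight at once.

```javascript
// Illustrative batching helper; `table` and `allRows` come from the
// surrounding code, and the batch size of 500 is an arbitrary example.
async function insertInBatches(table, allRows, batchSize = 500) {
  for (let i = 0; i < allRows.length; i += batchSize) {
    const batch = allRows.slice(i, i + batchSize);
    // Awaiting gives backpressure: the next batch starts only after
    // the previous one is acknowledged.
    await table.insert(batch);
  }
}
```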
The location property is not correctly formatted as required by the createCluster method.
It throws the following error:
Error: 3 INVALID_ARGUMENT: Error in field 'cluster' : Error in field 'location' : When parsing 'projects/projects/grass-clump-479/locations/us-central1-b' : Location name expected in the form 'projects/<project_id>/locations/<zone_id>'.
From @kolodny on November 5, 2017 17:59
const bigtable = require('@google-cloud/bigtable');
const bigtableClient = bigtable();
const instance = bigtableClient.instance(process.env.INSTANCE_ID);
const table = instance.table('doesnotexist');
Promise.resolve()
.then(() => table.getRows())
.then(([rows]) => console.log(`Read ${rows.length}`))
.catch(error => console.error(`caught ${error}`))
I chased this bug down to https://github.com/GoogleCloudPlatform/google-cloud-node/blob/a64ad81517045196cf5a3f468ea15aad1e2c25da/packages/common-grpc/src/service.js#L376-L385
This "fake" response call causes streamResponseHandled
in retry-request
to be set to true on the fake response, that has the consequence of never firing the error callback. https://github.com/stephenplusplus/retry-request/blob/4181eec8187c3603d4e4e68db1ee6ac27725afa3/index.js#L113-L133
I tried reverting the code in #1444 and replicating the bug referenced in #1443, hoping to find a different solution that would avoid this nasty regression, but I wasn't able to repro (I always got a response). I suspect that the code can be safely reverted. Reverting that bit of code did fix the bug of silently ignoring errors!
Thanks!
Copied from original issue: googleapis/google-cloud-node#2724
I am able to successfully create tables and families and add rows into my table. However when I need to use the Filter command, Filter comes off as undefined. I can get all rows in the table, however if I need to filter out some rows using the Filter, I am not able to.
@google-cloud/bigtable version: 0.13.0

const Bigtable = require('@google-cloud/bigtable');
console.log(Bigtable); // Returns below output
/*
{ [Function: Bigtable]
Cluster: { [Function: Cluster] getLocation_: [Function], getStorageType_: [Function] },
Instance: [Function: Instance],
v2:
{ BigtableClient: [Function: BigtableClient],
BigtableInstanceAdminClient: [Function: BigtableInstanceAdminClient],
BigtableTableAdminClient: [Function: BigtableTableAdminClient] } }
*/
console.log(Bigtable.Filter); // Returns undefined
Any idea if anything I am doing is incorrect?
Today I was playing around with this library and noticed that it ate up all my Bigtable Admin API quota while I was inserting rows into a pre-created table.
Shouldn't any row mutation be performed through the Bigtable Data API, which doesn't have a quota?
I am running a development Bigtable instance at the moment.
const bigtable = new BigTable({
projectId: process.env.GCLOUD_PROJECT,
credentials: JSON.parse(process.env.BIGTABLE_SERVICE_ACCOUNT) // BigTable User service account
})
const instance = bigtable.instance('my-instance')
const table = instance.table('my-table')
for (var i = 0; i < 1000; i++) { // 7000 is the daily quota limit on Admin API Table Writes
table.insert([{
key: 'my-key-' + i,
data: {
family: {
column: i
}
}
}])
}
☝️ Greenkeeper's updated Terms of Service will come into effect on April 6th, 2018.
Branch | Build failing 🚨 |
---|---|
Dependency | mocha |
Current Version | 5.0.2 |
Type | devDependency |
This version is covered by your current version range and after updating it in your project the build failed.
mocha is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
This patch features a fix to address a potential "low severity" ReDoS vulnerability in the diff package (a dependency of Mocha).
Exposes generateDiff() in the Base reporter (@harrysarson).
The new version differs by 6 commits.
da6e5c9
Release v5.0.3
70d9262
update CHANGELOG.md for v5.0.3 [ci skip]
aaaa5ab
fix: ReDoS vuln in [email protected] › [email protected] (#3266)
8df5727
Tidies up code after review
660bccc
adds unit tests covering Base.generateDiff
bdcb3c3
exposes generateDiff function from base reporter
See the full diff
There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
From @briangruber on October 27, 2017 16:4
I'm confused about using the exists() method on a row(key), hoping to get clarification.
If I want to check if a row exists I thought I would do this:
table.row(key).exists().then(result => {
let exists = result[0]
})
If the row does exist then I do step into the .then and exists will be true. But if it doesn't exist I actually get an exception of Unknown row: key. So what is the point of exists if it throws an error when the row doesn't exist? Is result[0] expected to always be true, and if the row doesn't exist I'm supposed to use the exception? Or am I missing something?
Thanks!
Copied from original issue: googleapis/google-cloud-node#2702
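One workaround for the exists() behavior described in the issue above (a sketch; the error-message check is an assumption about how the thrown error is shaped):

```javascript
// Hypothetical helper: turn the "Unknown row" exception into a boolean.
async function rowExists(table, key) {
  try {
    const [exists] = await table.row(key).exists();
    return exists;
  } catch (err) {
    if (/Unknown row/.test(err.message)) {
      return false; // treat "row not found" as exists === false
    }
    throw err; // re-throw anything else (network errors, etc.)
  }
}
```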
We currently use node-int64, which is slower and less correct than int64-buffer:
const Int64BE = require('int64-buffer').Int64BE;
const Int64 = require('node-int64');
const test = (number) => {
const int64 = new Int64(number);
const int64BE = new Int64BE(number);
console.log('int64', int64.toNumber())
console.log('int64BE', int64BE.toNumber())
console.log({
MAX_SAFE_INTEGER: Number.MAX_SAFE_INTEGER,
isNumberTooBig: int64BE.toNumber() > Number.MAX_SAFE_INTEGER,
})
console.log('int64', int64.toString())
console.log('int64 Matches', int64.toString() === number)
console.log('int64BE', int64BE.toString())
console.log('int64BE Matches', int64BE.toString() === number)
console.log('equals', int64.toBuffer().equals(int64BE.toBuffer()))
}
const a = number => new Int64(number).toBuffer();
const b = number => (new Int64BE(number)).toBuffer();
console.time('Int64');
for (var i = 0; i < 100000; i++) {
a(123)
}
console.timeEnd('Int64');
console.time('Int64BE');
for (var i = 0; i < 100000; i++) {
b(123)
}
console.timeEnd('Int64BE');
test('1234567890123456789')
test('9007199254740991')
outputs:
Int64: 162.114ms
Int64BE: 20.978ms
int64 Infinity
int64BE 1234567890123456800
{ MAX_SAFE_INTEGER: 9007199254740991, isNumberTooBig: true }
int64 Infinity
int64 Matches false
int64BE 1234567890123456789
int64BE Matches true
equals false
int64 -Infinity
int64BE 9007199254740991
{ MAX_SAFE_INTEGER: 9007199254740991, isNumberTooBig: false }
int64 -Infinity
int64 Matches false
int64BE 9007199254740991
int64BE Matches true
equals false
0.12.0 failed to release 16 days ago due to an error: https://circleci.com/gh/googleapis/nodejs-bigtable/1109
I'm trying to get the release through now, but we're stalled on a new error.
@kolodny could you please take a look?
https://circleci.com/gh/googleapis/nodejs-bigtable/1408:
1) Bigtable/Table
mutate()
valid mutation:
Error: Aborting after running 1000 timers, assuming an infinite loop!
at Object.runAll (node_modules/lolex/src/lolex-src.js:671:15)
at Context.done (system-test/mutate-rows.js:132:15)
2) Bigtable/Table
mutate()
retries the failed mutations:
Error: Aborting after running 1000 timers, assuming an infinite loop!
at Object.runAll (node_modules/lolex/src/lolex-src.js:671:15)
at Context.done (system-test/mutate-rows.js:132:15)
3) Bigtable/Table
mutate()
has a `PartialFailureError` error when an entry fails after the retries:
Error: Aborting after running 1000 timers, assuming an infinite loop!
at Object.runAll (node_modules/lolex/src/lolex-src.js:671:15)
at Context.done (system-test/mutate-rows.js:132:15)
4) Bigtable/Table
mutate()
does not retry unretryable mutations:
Error: Aborting after running 1000 timers, assuming an infinite loop!
at Object.runAll (node_modules/lolex/src/lolex-src.js:671:15)
at Context.done (system-test/mutate-rows.js:132:15)
5) Bigtable/Table
mutate()
considers network errors towards the retry count:
Error: Aborting after running 1000 timers, assuming an infinite loop!
at Object.runAll (node_modules/lolex/src/lolex-src.js:671:15)
at Context.done (system-test/mutate-rows.js:132:15)
6) Bigtable/Table
createReadStream
simple read:
AssertionError [ERR_ASSERTION]: .on('end') shoud have been invoked
+ expected - actual
-false
+true
at Context.it (system-test/read-rows.js:138:11)
7) Bigtable/Table
createReadStream
retries a failed read:
AssertionError [ERR_ASSERTION]: .on('end') shoud have been invoked
+ expected - actual
-false
+true
at Context.it (system-test/read-rows.js:138:11)
8) Bigtable/Table
createReadStream
resets the retry counter after a successful read:
AssertionError [ERR_ASSERTION]: .on('end') shoud have been invoked
+ expected - actual
-false
+true
at Context.it (system-test/read-rows.js:138:11)
9) Bigtable/Table
createReadStream
moves the start point of a range being consumed:
AssertionError [ERR_ASSERTION]: .on('end') shoud have been invoked
+ expected - actual
-false
+true
at Context.it (system-test/read-rows.js:138:11)
10) Bigtable/Table
createReadStream
removes ranges already consumed:
AssertionError [ERR_ASSERTION]: .on('end') shoud have been invoked
+ expected - actual
-false
+true
at Context.it (system-test/read-rows.js:138:11)
11) Bigtable/Table
createReadStream
removes keys already read:
AssertionError [ERR_ASSERTION]: .on('end') shoud have been invoked
+ expected - actual
-false
+true
at Context.it (system-test/read-rows.js:138:11)
12) Bigtable/Table
createReadStream
adjust the limit based on the number of rows read:
AssertionError [ERR_ASSERTION]: .on('end') shoud have been invoked
+ expected - actual
-false
+true
at Context.it (system-test/read-rows.js:138:11)
13) Bigtable/Table
createReadStream
does the previous 5 things in one giant test case:
AssertionError [ERR_ASSERTION]: .on('end') shoud have been invoked
+ expected - actual
-false
+true
at Context.it (system-test/read-rows.js:138:11)
We have code like this:
if (!(this instanceof ChunkTransformer)) {
return new ChunkTransformer(options);
}
That won't work with TypeScript :)
I can write the buffer and retrieve it again using decode: false, but I cannot figure out how to filter on the value.
const buf = Buffer.from('a468c3a669', 'hex');
// Throws Can't convert to RegExp String from unknown type
{
value: buf
}
// Returns zero rows instead of throwing
{
value: [
buf
]
}
// Using binary string also returns zero rows
{
value: buf.toString('binary')
}
@google-cloud/bigtable version: 0.13.1

Continuing from Bigtable Convert grpc APIs to use GAPIC, Bigtable is the only API left to complete these steps:
gaxOptions is available on all requests

I'm using @google-cloud/[email protected] with node v8.9.3 on linux.
I have a table where the row keys contain binary data, e.g. <Buffer db ee fe e8 fe 9f 65 33 dd 11 1f 2e ec d4 00 00> (2+7+6P6fZTPdER8u7NQAAA== in base64).
If I retrieve the row using row.get(), the id correctly comes back as a Buffer.
const rowKey = Buffer.from('2+7+6P6fZTPdER8u7NQAAA==', 'base64');
let row = await table.row(rowKey).get();
console.log(row[0].id) // prints <Buffer db ee fe e8 fe 9f 65 33 dd 11 1f 2e ec d4 00 00>
However, if I perform a scan using createReadStream, I get a different behavior. The id comes back as a string, which in my case is unusable.
const rowKey = Buffer.from('2+7+6P6fZTPdER8u7NQAAA==', 'base64');
table.createReadStream({
start: rowKey,
end: rowKey,
decode: false
}).on('data', row => {
console.log(row.id); // prints a garbled string (raw bytes decoded as text)
})
I would expect the id to come back as a Buffer there as well.
Note I did try setting decode=false in the options, but that didn't affect the encoding of the row key at all.
This is just a placeholder issue to apply this patch and write the necessary tests
diff --git a/src/family.js b/src/family.js
index 365f3bc..86a89df 100644
--- a/src/family.js
+++ b/src/family.js
@@ -297,7 +297,7 @@ Family.prototype.get = function(options, callback) {
this.getMetadata(gaxOptions, function(err, metadata) {
if (err) {
if (err instanceof FamilyError && autoCreate) {
- self.create({gaxOptions}, callback);
+ self.create({gaxOptions, rule: options.rule}, callback);
return;
}
Branch | Build failing 🚨 |
---|---|
Dependency | @google-cloud/nodejs-repo-tools |
Current Version | 2.1.3 |
Type | devDependency |
This version is covered by your current version range and after updating it in your project the build failed.
@google-cloud/nodejs-repo-tools is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
From @arbesfeld on May 2, 2017 14:15
Using Bigtable 0.9.1 - I seem to be getting more than 1 result after calling this.end(). Is this expected behavior?
Copied from original issue: googleapis/google-cloud-node#2271
The description for this GitHub project currently is:
Node.js client for Google Cloud BigTable: Google's NoSQL Big Data database service. https://cloud.google.com/bigtable/
which capitalizes the "T" in Bigtable incorrectly. As this repo gets cloned, the same capitalization will keep getting copied (with no way to push changes to those repos), which will make it difficult to communicate that the "t" is intended to be lowercase.
Unfortunately, it's not possible to submit a PR or another change to the subject; it just requires admin-level permissions to edit it (there's no history tracking for that feature, AFAICT).
Can someone with appropriate permissions please fix this? Thanks!
There needs to be a sample in the samples/ directory that manipulates tables. Here's the list of commands:
union: [
{max_versions: 10}
{intersection: [
{max_versions: 2}
{max_age: 30d}
]}
]
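For reference, the GcRule proto structure behind that rule looks roughly like this; field names follow the Bigtable admin API, and the Node client's own rule syntax for the sample may differ:

```javascript
// Sketch of the GcRule message the sample needs to express:
// keep 10 versions, OR (at most 2 versions AND younger than 30 days).
const gcRule = {
  union: {
    rules: [
      {maxNumVersions: 10},
      {
        intersection: {
          rules: [
            {maxNumVersions: 2},
            {maxAge: {seconds: 30 * 24 * 60 * 60}}, // 30d
          ],
        },
      },
    ],
  },
};
```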
I try to use this library with a Bigtable emulator. Everything I try to do ends with an error:
Error: Unexpected error while acquiring application default credentials: Could not load the default credentials. Browse to https://developers.google.com/accounts/docs/application-default-credentials for more information.
@google-cloud/bigtable version: 0.13.0

$(gcloud beta emulators bigtable env-init)
node example.js
const Bigtable = require('@google-cloud/bigtable');
var client = new Bigtable.v2.BigtableTableAdminClient();
client.createTable("test").then().catch(function (err) {
console.log(err)
})
The following functions are either missing @param annotations or use incorrect ones:
Page Address | Comment Discrepancy |
---|---|
create(options,callback) | callback not in comment param |
get(gaxOptions,callback) | callback not in comment param |
create(options,callback) | callback not in comment param |
create(options, callback) | callback not in comment param |
save(entry, gaxOptions, callback) | Comment param is written as key instead of entry |
create(options, callback) | callback not in comment param |
_flush(cb) | callback function name is cb but comment has it as callback |
destroy(err) | parameter is err but comment mentions it as error |
all(pass) | comment doesn't have details of param |
column(column) | comment doesn't have details of param |
condition(condition) | comment doesn't have details of param |
family(family) | comment doesn't have details of param |
interleave(filters) | comment doesn't have details of param |
label(label) | comment doesn't have details of param |
row(row) | comment doesn't have details of param |
sink(sink) | comment doesn't have details of param |
time(time) | comment doesn't have details of param |
value(value) | comment doesn't have details of param |
convertFromBytes(bytes, options) | comment doesn't have options in param description |
parse(mutation) | Comment uses param as entry instead of mutation |
Branch | Build failing 🚨 |
---|---|
Dependency | eslint-plugin-prettier |
Current Version | 2.5.0 |
Type | devDependency |
This version is covered by your current version range and after updating it in your project the build failed.
eslint-plugin-prettier is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
The new version differs by 4 commits.
d772dfa
Build: update package.json and changelog for v2.6.0
9e0fb48
Update: Add option to skip loading prettierrc (#83)
e5b5fa7
Build: add Node 8 and 9 to Travis
1ab43fd
Chore: add test for vue parsing
See the full diff
There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
There needs to be a sample in the samples/
directory that manipulates instances. Here's the list of commands:
TODO: add AppProfileId CRUD once it exists.
Instances support a notion of "labels" (key-value pairs meaningful to users, like "project" = "myProject" or "env" = "staging"). Create and update instance should have a labels option, which is basically a map of {string, string}.
Also, instance.setMetadata should use "PartialUpdateInstance" instead of "UpdateInstance".
Hello,
I am having an issue reading bigtable rows from stream.
My rows are constantly growing while I receive new data.
Now the size of some rows have exceeded the maximum authorized and I am getting the following error message:
Error: 8 RESOURCE_EXHAUSTED: Received message larger than max (4411510 vs. 4194304)
Is there a way to stream such messages? I am using hashed keys, and as I cannot receive the message, I cannot see which keys are having issues being read.
const bigtableStream = table.createReadStream()
.on('data', row => {
// Do Stuff
})
.on('error', err => console.log(err));
Environment:
NodeJs: 8.9.4
google-cloud/bigtable: 0.13.1
table.get() returns the column families, but the metadata object is empty for tables obtained via getTables:
t1 { AutoCreateFamily: { gcRule: null } }
t2 {}
OS: macos 10.13.3
Node.js version: 8.10.0
npm version: 5.6.0
yarn version: 1.3.2
@google-cloud/bigtable version: master branch (0.13)
let [t1] = await table.get();
console.log('t1', t1.metadata.columnFamilies);
let [tables] = await bt.getTables();
let t2 = tables.find(t => t.name === 'TestAutoCreate');
console.log('t2', t2.metadata.columnFamilies);
☝️ Greenkeeper's updated Terms of Service will come into effect on April 6th, 2018.
Branch | Build failing 🚨 |
---|---|
Dependency | @google-cloud/nodejs-repo-tools |
Current Version | 2.2.2 |
Type | devDependency |
This version is covered by your current version range and after updating it in your project the build failed.
@google-cloud/nodejs-repo-tools is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
The new version differs by 1 commits.
7b3af41
Fix link to open in cloud shell button image.
See the full diff
There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
I tried to increment a value by Number.MAX_SAFE_INTEGER. Incrementing beyond int64 gives you a string in return:
"hulahoi": [
{
"value": "\u0000 \u0000\u0000\u0000\u0000\u0000\u0001",
"labels": [],
"timestamp": "1524579337814000"
}
],
OS: macos 10.13.3
Node.js version: 8.10.0
npm version: 5.6.0
yarn version: 1.3.2
@google-cloud/bigtable version: master branch (0.13)
await table.row('gwashington').increment('fam1:hulahoi');
await table.row('gwashington').increment('fam1:hulahoi', Number.MAX_SAFE_INTEGER);
☝️ Greenkeeper's updated Terms of Service will come into effect on April 6th, 2018.
Branch | Build failing 🚨 |
---|---|
Dependency | concat-stream |
Current Version | 1.6.0 |
Type | dependency |
This version is covered by your current version range and after updating it in your project the build failed.
concat-stream is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.
There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
When using the table.deleteRows() method to clear a table, I receive the following error:
{ Error: 4 DEADLINE_EXCEEDED: Insufficient deadline for DropRowRange. Please try again with a longer request deadline.
at new createStatusError (/temp-bt/node_modules/google-gax/node_modules/grpc/src/client.js:64:15)
at /temp-bt/node_modules/google-gax/node_modules/grpc/src/client.js:583:15
code: 4,
metadata:
Metadata {
_internal_repr:
{ 'google.rpc.debuginfo-bin': [Array],
'grpc-status-details-bin': [Array] } },
details: 'Insufficient deadline for DropRowRange. Please try again with a longer request deadline.'
The associated code is:
const Bigtable = require('@google-cloud/bigtable');
const bigtable = new Bigtable({
projectId: 'some-project-id'
});
const INSTANCE_NAME = 'some-instance';
async function main() {
await bigtable.createInstance(INSTANCE_NAME, {
clusters: [{
name: 'some-cluster',
location: 'us-central1-c',
nodes: 3
}]
});
const instance = bigtable.instance(INSTANCE_NAME);
const table = instance.table('someTable');
await table.create({
families: ['someFamily']
});
await table.insert({
key: 'some-key',
data: {
someFamily: {
someData: 'some-data'
}
}
});
await table.deleteRows(); // error thrown
}
main().catch(console.error);
There appears to be a bug in chunktransformer where, if a cell is in progress, the += operator is used. This causes two buffers to stringify: Buffer('test') + Buffer('ing') === String('testing').
I have the fix below; however, I gave up trying to get the coverage to stay at 100%. @ajaaym can you take a look and try creating a PR to include tests? Thanks
diff --git a/src/chunktransformer.js b/src/chunktransformer.js
index 34b8163..eb64e9b 100644
--- a/src/chunktransformer.js
+++ b/src/chunktransformer.js
@@ -337,7 +337,15 @@ ChunkTransformer.prototype.processCellInProgress = function(chunk) {
if (chunk.resetRow) {
return this.reset();
}
- this.qualifier.value += Mutation.convertFromBytes(chunk.value, this.options);
+ const chunkQualifierValue =
+ Mutation.convertFromBytes(chunk.value, this.options);
+ if (chunkQualifierValue instanceof Buffer &&
+ this.qualifier.value instanceof Buffer) {
+ this.qualifier.value =
+ Buffer.concat([this.qualifier.value, chunkQualifierValue])
+ } else {
+ this.qualifier.value += chunkQualifierValue;
+ }
this.moveToNextState(chunk);
};
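To see why the patch matters, here's a standalone demonstration of the += behavior it fixes:

```javascript
// += on two Buffers coerces both to strings, silently corrupting any
// binary cell value that arrives split across chunks.
const first = Buffer.from('test');
const second = Buffer.from('ing');

const stringified = first + second;                  // becomes the string 'testing'
const concatenated = Buffer.concat([first, second]); // stays a Buffer

console.log(typeof stringified);            // 'string'
console.log(Buffer.isBuffer(concatenated)); // true
```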
table.row().get() overwrites the filter created from the first argument, ignoring the filter provided in options. This limits the filter options that can be passed to the get row request.
OS: macos 10.13.3
Node.js version: 8.10.0
npm version: 5.6.0
yarn version: 1.3.2
@google-cloud/bigtable version: master branch (0.13)
const columns = ['fam:col'];
const options = {
filter: [
{
column: {
cellLimit: 1
}
},
]
};
await table.row('key').get(columns); // Returns single column
await table.row('key').get(columns, options); // Returns all columns
The following hyperlinks in parameter descriptions are incorrect.
I'm getting unexpected values on all except the latest one:
{
"id": "wmckinley",
"data": {
"fam1": {
"tjefferson": [
{
"value": 3,
"labels": [],
"timestamp": "1523529421404000"
},
{
"value": "\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0002",
"labels": [],
"timestamp": "1523529391871000"
},
{
"value": "\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0001",
"labels": [],
"timestamp": "1523527967168000"
}
]
}
}
}
{
"id": "wmckinley",
"data": {
"fam1": {
"tjefferson": [
{
"value": 3,
"labels": [],
"timestamp": "1523529421404000"
},
{
"value": 2,
"labels": [],
"timestamp": "1523529391871000"
},
{
"value": 1,
"labels": [],
"timestamp": "1523527967168000"
}
]
}
}
}
@google-cloud/bigtable version: 0.13

(async () => {
await bt.createTable('jau1', {
families: [
'fam1'
]
}).catch(swallowCode(6));
let [tables] = await bt.getTables();
tables.forEach(t => {
delete t.instance;
delete t.bigtable;
console.log(t);
})
const table = bt.table('jau1');
await table.createFamily('fam2').catch(swallowCode(6));
let rows = [
{
key: 'wmckinley',
data: {
fam1: {
tjefferson: 3
}
}
}
];
await table.insert(rows);
[rows] = await table.getRows();
console.log(JSON.stringify(rows,null,1))
//-
// <h4>Retrieving Rows</h4>
//
// If you're anticipating a large number of rows to be returned, we suggest
// using the {@link Table#getRows} streaming API.
//-
table.createReadStream()
.on('error', console.error)
.on('data', function (row) {
delete row.bigtable;
delete row.instance;
delete row.table;
console.log('got row', JSON.stringify(row,null,1));
// `row` is a Row object.
});
})().catch(err => {
console.warn(err);
})//.then(() => process.exit());
function swallowCode(code) {
return err => {
if (err.code !== code) {
throw err;
}
}
}
☝️ Greenkeeper's updated Terms of Service will come into effect on April 6th, 2018.
Branch | Build failing 🚨 |
---|---|
Dependency | eslint |
Current Version | 4.18.2 |
Type | devDependency |
This version is covered by your current version range and after updating it in your project the build failed.
eslint is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.
Docs: Wrap Buffer() in backticks in no-buffer-constructor rule description (#10084) (Stephen Edgar)
The new version differs by 12 commits.
4f595e8
4.19.0
16fc59e
Build: changelog update for 4.19.0
55a1593
Update: consecutive option for one-var (fixes #4680) (#9994)
8d3814e
Fix: false positive about ES2018 RegExp enhancements (fixes #9893) (#10062)
935f4e4
Docs: Clarify default ignoring of node_modules (#10092)
72ed3db
Docs: Wrap Buffer() in backticks in no-buffer-constructor rule description (#10084)
3aded2f
Docs: Fix lodash typos, make spacing consistent (#10073)
e33bb64
Chore: enable no-param-reassign on ESLint codebase (#10065)
66a1e9a
Docs: fix possible typo (#10060)
2e68be6
Update: give a node at least the indentation of its parent (fixes #9995) (#10054)
72ca5b3
Update: Correctly indent JSXText with trailing linebreaks (fixes #9878) (#10055)
2a4c838
Docs: Update ECMAScript versions in FAQ (#10047)
See the full diff
There is a collection of frequently asked questions. If those don't help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
There is a standard set of tests driven by a JSON file. Here's the Python version, the GoLang version and the Java version.
It's a standard set of tests that show that a Cloud Bigtable client correctly parses ReadRowsResponse CellChunks. This is a blocker to saying that this is an alpha-level client.