
nodejs-logging-bunyan's Introduction

Google Cloud Platform logo

release level npm version

This module provides an easy-to-use, higher-level layer for working with Cloud Logging, compatible with Bunyan. Simply attach it as a transport to your existing Bunyan loggers.

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

Quickstart

Before you begin

  1. Select or create a Cloud Platform project.
  2. Enable the Cloud Logging for Bunyan API.
  3. Set up authentication with a service account so you can access the API from your local workstation.

Installing the client library

npm install @google-cloud/logging-bunyan

Using the client library

const bunyan = require('bunyan');

// Imports the Google Cloud client library for Bunyan
const {LoggingBunyan} = require('@google-cloud/logging-bunyan');

// Creates a Bunyan Cloud Logging client
const loggingBunyan = new LoggingBunyan();

// Create a Bunyan logger that streams to Cloud Logging
// Logs will be written to: "projects/YOUR_PROJECT_ID/logs/bunyan_log"
const logger = bunyan.createLogger({
  // The JSON payload of the log as it appears in Cloud Logging
  // will contain "name": "my-service"
  name: 'my-service',
  streams: [
    // Log to the console at 'info' and above
    {stream: process.stdout, level: 'info'},
    // And log to Cloud Logging, logging at 'info' and above
    loggingBunyan.stream('info'),
  ],
});

// Writes some log entries
logger.error('warp nacelles offline');
logger.info('shields at 99%');

Using as an express middleware

NOTE: this feature is experimental. The API may change in a backwards-incompatible way until it is deemed stable. Please give us feedback so that we can better refine this express integration.

We provide a middleware that can be used in an express application. Apart from being easy to use, this enables one of Cloud Logging's more powerful features: request bundling. Any application logs emitted on behalf of a specific request are shown nested inside the request log, as you can see in this screenshot:

Request Bundling Example

The middleware adds a bunyan-style log function to the request object. You can use this wherever you have access to the request object (req in the sample below). All log entries that are made on behalf of a specific request are shown bundled together in the Cloud Logging UI.

const lb = require('@google-cloud/logging-bunyan');

// Import express module and create an http server.
const express = require('express');

async function startServer() {
  const {logger, mw} = await lb.express.middleware();
  const app = express();

  // Install the logging middleware. This ensures that a Bunyan-style `log`
  // function is available on the `request` object. Attach this as one of the
  // earliest middleware to make sure that log function is available in all the
  // subsequent middleware and routes.
  app.use(mw);

  // Setup an http route and a route handler.
  app.get('/', (req, res) => {
    // `req.log` can be used as a bunyan style log method. All logs generated
    // using `req.log` use the current request context. That is, all logs
    // corresponding to a specific request will be bundled in the Cloud UI.
    req.log.info('this is an info log message');
    res.send('hello world');
  });

  // `logger` can be used as a global logger, one not correlated to any specific
  // request.
  logger.info({port: 8080}, 'bonjour');

  // Start listening on the http server.
  app.listen(8080, () => {
    console.log('http server listening on port 8080');
  });
}

startServer();

Error Reporting

Any Error objects you log at severity error or higher can automatically be picked up by Cloud Error Reporting if you have specified a serviceContext.service when instantiating a LoggingBunyan instance:

const loggingBunyan = new LoggingBunyan({
  serviceContext: {
    service: 'my-service', // required to report logged errors
                           // to the Google Cloud Error Reporting
                           // console
    version: 'my-version'
  }
});

It is an error to specify a serviceContext but not specify serviceContext.service.

Make sure to add logs to your uncaught exception and unhandled rejection handlers if you want to see those errors too.
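For example, a minimal sketch of such handlers, assuming `logger` is the Bunyan logger created in the Quickstart above (the exit behavior and flush delay are assumptions you should adapt to your application):

// Log fatal errors before the process goes down. Bunyan serializes an Error
// passed as the first argument into an `err` field on the record.
process.on('uncaughtException', err => {
  logger.fatal(err, 'uncaught exception');
  // Give the Cloud Logging stream a moment to flush before exiting
  // (an assumption; tune or replace with a proper shutdown routine).
  setTimeout(() => process.exit(1), 2000);
});

process.on('unhandledRejection', reason => {
  logger.error({err: reason}, 'unhandled rejection');
});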

You may also want to see the @google-cloud/error-reporting module, which provides direct access to the Error Reporting API.

Special Payload Fields in LogEntry

There are some fields that Google Cloud Logging considers special and extracts into the LogEntry structure. For example, severity, message, and labels can be extracted into the LogEntry if included in the Bunyan log payload. These special JSON fields are used to set the corresponding fields in the LogEntry, so be aware of them to avoid unexpected logging behavior.
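For illustration, a hedged sketch of a log call that touches these fields (the exact promotion rules are described in the sections below):

// The Bunyan level ('warn') determines the LogEntry severity, the message text
// becomes the entry's message, and `labels`/`httpRequest` are promoted out of
// the JSON payload into the corresponding LogEntry fields (see below).
logger.warn(
  {
    labels: {tenant: 'acme'},
    httpRequest: {requestMethod: 'GET', requestUrl: '/healthz'},
  },
  'request quota nearly exhausted'
);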

LogEntry Labels

If the bunyan log record contains a labels property where all the values are strings, we automatically promote that property to be the LogEntry.labels value rather than one of the properties in the payload fields. This makes it easier to filter the logs in the UI using the labels.

logger.info({labels: {someKey: 'some value'}}, 'test log message');

All the label values must be strings for this automatic promotion to work. Otherwise the labels are left in the payload.
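For example (an illustrative sketch; the second record's labels stay in the JSON payload because one value is a number):

// All values are strings, so `labels` is promoted to LogEntry.labels.
logger.info({labels: {env: 'prod', region: 'us-central1'}}, 'promoted to LogEntry.labels');

// `retries` is a number, so the whole `labels` object is left in the payload
// and cannot be used for label-based filtering in the UI.
logger.info({labels: {env: 'prod', retries: 3}}, 'left in the payload');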

Formatting Request Logs

To format your request logs, you can provide an httpRequest property on the bunyan metadata you pass along with the log message. We will treat this as an HttpRequest message, and Cloud Logging will show it as a request log. Example:

Request Log Example

logger.info({
  httpRequest: {
    status: res.statusCode,
    requestUrl: req.url,
    requestMethod: req.method,
    remoteIp: req.connection.remoteAddress,
    // etc.
  }
}, req.path);

The httpRequest property must be a properly formatted HttpRequest message. (Note: the linked protobuf documentation shows snake_case property names, but in JavaScript one needs to provide property names in camelCase.)
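For instance, a slightly fuller request log might look like the following (an illustrative sketch; field names follow the camelCase convention, and the latency object mirrors the protobuf Duration shape):

logger.info({
  httpRequest: {
    status: res.statusCode,
    requestUrl: req.url,
    requestMethod: req.method,
    remoteIp: req.connection.remoteAddress,
    userAgent: req.headers['user-agent'],
    responseSize: Number(res.get('Content-Length') || 0),
    // camelCase, not snake_case: requestMethod, remoteIp, userAgent, ...
    latency: {seconds: 0, nanos: 125000000},
  },
}, req.path);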

Correlating Logs with Traces

If you use the @google-cloud/trace-agent module, then this module will set the Cloud Logging LogEntry trace property based on the current trace context when available. That correlation allows you to view log entries inline with trace spans in the Cloud Trace Viewer. Example:

Logs in Trace Example

If you wish to set the Cloud LogEntry trace property with a custom value, then write a Bunyan log entry property for 'logging.googleapis.com/trace', which is exported by this module as LOGGING_TRACE_KEY. For example:

const bunyan = require('bunyan');
// Node 6+
const {LoggingBunyan, LOGGING_TRACE_KEY} = require('@google-cloud/logging-bunyan');
const loggingBunyan = new LoggingBunyan();

...

logger.info({
  [LOGGING_TRACE_KEY]: 'custom-trace-value'
}, 'Bunyan log entry with custom trace field');

Error handling with a default callback

The LoggingBunyan class creates an instance of Logging, which in turn creates the Log class from the @google-cloud/logging package to write log entries. The Log class writes logs asynchronously, and there are cases when log entries cannot be written, for example when the write fails or the Logging backend returns an error. If the error is not handled, it could crash the application. One way to handle the error is to provide a default callback to the LoggingBunyan constructor; that callback is used to initialize the underlying Log object, as in the example below:

// Imports the Google Cloud client library for Bunyan
const {LoggingBunyan} = require('@google-cloud/logging-bunyan');
// Creates a client
const loggingBunyan = new LoggingBunyan({
  projectId: 'your-project-id',
  keyFilename: '/path/to/key.json',
  defaultCallback: err => {
    if (err) {
      console.log('Error occurred: ' + err);
    }
  },
});

Alternative way to ingest logs in Google Cloud managed environments

If you use this library with the Cloud Logging Agent, you can configure the handler to output logs to process.stdout using the structured logging JSON format. To do this, add the redirectToStdout: true parameter to the LoggingBunyan constructor, as in the sample below. You can use this parameter when running applications in Google Cloud managed environments such as App Engine, Cloud Run, Cloud Functions, or GKE. The logging agent installed in these environments can capture process.stdout and ingest it into Cloud Logging, and it can parse structured logs printed to process.stdout to capture additional log metadata besides the log payload. Setting redirectToStdout: true is recommended in serverless environments like Cloud Functions, since it can reduce the loss of log records when execution terminates: because all logs are written to process.stdout, they are picked up by the Cloud Logging Agent running in the Google Cloud managed environment.

Note that there is also a useMessageField option, which controls whether the "message" field is used to store structured, non-text data inside the jsonPayload field when redirectToStdout is set; useMessageField is true by default. Set the skipParentEntryForCloudRun option to skip creating an entry for the request itself, as Cloud Run already creates such log entries automatically. This may become the default behavior in a future major version.

// Imports the Google Cloud client library for Bunyan
const {LoggingBunyan} = require('@google-cloud/logging-bunyan');

// Creates a client
const loggingBunyan = new LoggingBunyan({
  projectId: 'your-project-id',
  keyFilename: '/path/to/key.json',
  redirectToStdout: true,
});
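If you also want the related options mentioned above, a hedged sketch might look like this (the comments paraphrase the behavior described above):

// Sketch only: combines stdout redirection with the related options.
const loggingBunyan = new LoggingBunyan({
  projectId: 'your-project-id',
  keyFilename: '/path/to/key.json',
  redirectToStdout: true,
  // Keep structured, non-text data out of the "message" field (defaults to true).
  useMessageField: false,
  // Skip the parent request entry on Cloud Run, which already creates one.
  skipParentEntryForCloudRun: true,
});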

Samples

Samples are in the samples/ directory. Each sample's README.md has instructions for running its sample.

Sample Source Code Try it
Express source code Open in Cloud Shell
Quickstart source code Open in Cloud Shell
Explicit Auth Setup source code Open in Cloud Shell

The Cloud Logging for Bunyan Node.js Client API Reference documentation also contains samples.

Supported Node.js Versions

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js. If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.

Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:

  • Legacy versions are not tested in continuous integration.
  • Some security patches and features cannot be backported.
  • Dependencies cannot be kept up-to-date.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm dist-tags. The dist-tags follow the naming convention legacy-(version). For example, npm install @google-cloud/logging-bunyan@legacy-8 installs client libraries for versions compatible with Node.js 8.

Versioning

This library follows Semantic Versioning.

This library is considered to be stable. The code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against stable libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in the central templates directory.

License

Apache Version 2.0

See LICENSE

nodejs-logging-bunyan's People

Contributors

0xsage, alexander-fenster, bcoe, callmehiphop, cindy-peng, crwilcox, dominickramer, dpebot, draffensperger, fhinkel, gcf-owl-bot[bot], google-cloud-policy-bot[bot], greenkeeper[bot], jkwlui, jmdobry, justinbeckwith, losalex, lukesneeringer, nareshqlogic, ofrobots, release-please[bot], renovate-bot, renovate[bot], sofisl, stephenplusplus, surferjeffatgoogle, tbpg, tswast, wvanderdeijl, yoshi-automation


nodejs-logging-bunyan's Issues

System tests quota exceeded

The quota for the service account used in the system tests has been exceeded, causing the tests to fail. The service account should be updated with a quota that can handle the system tests.

Publishing fails with `tsc` not found

Ensure npm install is run in the CircleCI publish script so that the module can be published. Otherwise, the publish step fails with an error stating that tsc cannot be found.

Sample code in the README doesn't work

Environment details

  • OS: Ubuntu 19.04
  • Node.js version: v8.16.1
  • npm version: 6.4.1
  • @google-cloud/logging-bunyan version: 1.2.3

Steps to reproduce

  1. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to a file containing the service account key, project ID, etc.
  2. Try to run the sample code in the README. Namely,
const bunyan = require('bunyan');

// Imports the Google Cloud client library for Bunyan
const {LoggingBunyan} = require('@google-cloud/logging-bunyan');

// Creates a Bunyan Stackdriver Logging client
const loggingBunyan = new LoggingBunyan();

// Create a Bunyan logger that streams to Stackdriver Logging
// Logs will be written to: "projects/YOUR_PROJECT_ID/logs/bunyan_log"
const logger = bunyan.createLogger({
  // The JSON payload of the log as it appears in Stackdriver Logging
  // will contain "name": "my-service"
  name: 'my-service',
  streams: [
    // Log to the console at 'info' and above
    {stream: process.stdout, level: 'info'},
    // And log to Stackdriver Logging, logging at 'info' and above
    loggingBunyan.stream('info'),
  ],
});

// Writes some log entries
logger.error('warp nacelles offline');
logger.info('shields at 99%');

Expected result: The script logs the messages to the console, and then to Stackdriver Logging.

Actual result: The script logs the messages to the console, but doesn't log them to Stackdriver Logging. After a few seconds (about 15 seconds on my machine), it throws the following exception:

(node:358) UnhandledPromiseRejectionWarning: FetchError: request to http://169.254.169.254/computeMetadata/v1/instance failed, reason: connect EHOSTUNREACH 169.254.169.254:80
    at ClientRequest.<anonymous> (/app/node_modules/node-fetch/lib/index.js:1430:11)
    at emitOne (events.js:116:13)
    at ClientRequest.emit (events.js:211:7)
    at Socket.socketErrorListener (_http_client.js:401:9)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at emitErrorNT (internal/streams/destroy.js:66:8)
    at _combinedTickCallback (internal/process/next_tick.js:139:11)
    at process._tickCallback (internal/process/next_tick.js:181:9)
(node:358) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:358) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

For the record, I am running the script locally on my machine.

Fix inconsistent system tests

The system tests are inconsistent in that when they are first run they fail, but on a follow-up run, they pass. This seems to be caused by timing issues.

improved processing of structured logs

From @travi on April 24, 2017 16:45

I recently got @google-cloud/logging-bunyan working in a hapi app (leveraging good-bunyan) after a very helpful conversation with @ofrobots.

This got us to a much better state of logging than our previous use of good-console to simply write to stdout. However, the log events that are sent to Stackdriver are still not tokenized nearly as well as the logs for some other (non-Node) apps we have running in the standard environment.

It appears that the events sent through @google-cloud/logging-bunyan are structured in the Bunyan-recommended formats rather than the Stackdriver format. If I understand the docs correctly, Stackdriver would automatically process these better if they were transformed to meet its documented formats.

I have not looked very deeply into performing any transformations myself because I wanted to see if the team agreed that it made more sense for this to be part of the @google-cloud/logging-bunyan library directly. It seems like that would be the most helpful place, so that each consumer doesn't need to build the transformations themselves (if even possible).

Would it be reasonable to add this type of transformation into the library to support better log tokenization?

Copied from original issue: googleapis/google-cloud-node#2251

Compilation Error in system tests

Compilation failing at master for me with the following:

> @google-cloud/[email protected] compile /Users/ofrobots/src/yoshi/nodejs-logging-bunyan
> tsc -p .

system-test/errors-transport.ts:65:11 - error TS2554: Expected 2 arguments, but got 1.

65     await this.request(options);
             ~~~~~~~~~~~~~~~~~~~~~

  node_modules/@google-cloud/common/build/src/service.d.ts:101:46
    101     request(reqOpts: DecorateRequestOptions, callback: BodyResponseCallback): void;
                                                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    An argument for 'callback' was not provided.

K8s metadata missing from `resource.labels` object

Description

Log entries are missing container-specific information: container_name, instance_id, pod_id...
Those fields appear as empty strings in the Stackdriver console

labels: {
   cluster_name:  "cluster_name"    
   container_name:  ""    
   instance_id:  ""    
   namespace_id:  "namespace_id"    
   pod_id:  ""    
   project_id:  "project_id"    
   zone:  ""    
  }

Environment details

  • Docker Image: node:10.16.3-alpine running on GKE
  • Node.js version: 10.16.3
  • npm version:
  • @google-cloud/logging-bunyan version: 1.2.2

Steps to reproduce

const streams = [];
const loggingBunyan = new LoggingBunyan();
streams.push(loggingBunyan.stream('debug'));
const logger = bunyan.createLogger({
  name, level, streams, serializers
});

GA release

Package name: @google-cloud/logging-bunyan
Current release: beta
Proposed release: GA

Instructions

Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.

Required

  • 28 days elapsed since last beta release with new API surface
  • Server API is GA
  • Package API is stable, and we can commit to backward compatibility
  • All dependencies are GA

Optional

  • Most common / important scenarios have descriptive samples
  • Public manual methods have at least one usage sample each (excluding overloads)
  • Per-API README includes a full description of the API
  • Per-API README contains at least one “getting started” sample using the most common API scenario
  • Manual code has been reviewed by API producer
  • Manual code has been reviewed by a DPE responsible for samples
  • 'Client Libraries' page is added to the product documentation in the 'APIs & Reference' section of the product's documentation on Cloud Site

Incorrect GCE instance_id

Environment details

  • OS: CentOS on a GCE instance
  • Node.js version: 8.9.3
  • npm version: yarn 1.4.0
  • @google-cloud/logging-bunyan version: 0.8.2

Steps to reproduce

Logging from within a GCE instance consistently produces a wrong instance_id in resource.labels, e.g. 2554270933530966500 while the actual ID is 2554270933530966670. This makes using the GCP Console inconvenient, since logs are filtered by instance_id by default. Also, while the project is correctly filled in, the zone is missing:

resource: {
  labels: {
    instance_id:  "2554270933530966500"    
    project_id:  "my-project"    
    zone:  ""    
  }
  type:  "gce_instance"   
}

I suspect the instance_id issue has something to do with googleapis/gcp-metadata#74 but since the zone is missing as well there might be more to it.
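For context, the mangled value is consistent with the instance ID being parsed as a JavaScript number at some point, which cannot represent integers that large exactly (a hedged observation rather than a confirmed root cause):

// 2554270933530966670 exceeds Number.MAX_SAFE_INTEGER (2^53 - 1), so parsing it
// as a number loses precision; the nearest representable double prints as
// 2554270933530966500, matching the value seen in resource.labels.
console.log(Number('2554270933530966670'));             // 2554270933530966500
console.log(Number.isSafeInteger(2554270933530966670)); // false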

Determine why npm publish fails

During deployment it looks like npm publish is run twice. Determine why this is the case.

#!/bin/bash -eo pipefail
npm publish

> @google-cloud/[email protected] prepare .
> npm run compile


> @google-cloud/[email protected] compile /home/node/project
> tsc -p .


> @google-cloud/[email protected] postcompile /home/node/project
> cpy ./src/types/* ./build/src/types

npm ERR! publish Failed PUT 403
npm ERR! code E403
npm ERR! You cannot publish over the previously published versions: 0.8.1. : @google-cloud/logging-bunyan

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/node/.npm/_logs/2018-05-08T22_01_56_560Z-debug.log
Exited with code 1

Logging Bunyan Error: google/logging/v2/logging.proto could not be found

From @Jeremiahgibson on October 20, 2017 23:43

Environment details

  • OS: Linux Ubuntu 17.04
  • Node.js version: 6.10.3
  • npm version: 3.10.10
  • google-cloud-node version: "google-cloud/logging-bunyan": "^0.5.0", "bunyan": "^1.8.10"

Steps to reproduce

  1. require @google-cloud/logging-bunyan
  2. Attempt to log to Stackdriver using logger.debug({httpRequest: {url: req.url}}, 'Some message');
  3. All of this was done from within a kubernetes container in a google project container cluster.

Dependency Tree Diff

google-cloud/[email protected] -> google-cloud/[email protected]
[email protected] -> [email protected]

Npm debug log output

(node:16) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: google/logging/v2/logging.proto could not be found in /var/www/node_modules/google-cloud/logging/protos
/var/www/node_modules/google-gax/lib/grpc.js:172
throw new Error(filename + " could not be found in " + protoPath);
^
Error: google/logging/v2/logging.proto could not be found in /var/www/node_modules/google-cloud/logging/protos
at Function.GrpcClient._resolveFile (/var/www/node_modules/google-gax/lib/grpc.js:172:9)
at GrpcClient.loadProto (/var/www/node_modules/google-gax/lib/grpc.js:161:33)
at new LoggingServiceV2Client (/var/www/node_modules/@google-cloud/logging/src/v2/logging_service_v2_client.js:94:15)
at /var/www/node_modules/@google-cloud/logging/src/index.js:789:21
at Immediate.setImmediate (/var/www/node_modules/google-auto-auth/index.js:161:9)
at runCallback (timers.js:672:20)
at tryOnImmediate (timers.js:645:5)
at processImmediate [as _immediateCallback] (timers.js:617:5)

Copied from original issue: googleapis/google-cloud-node#2688

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper App’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

An in-range update of nyc is breaking the build 🚨

☝️ Greenkeeper’s updated Terms of Service will come into effect on April 6th, 2018.

Version 11.6.0 of nyc was just published.

Branch Build failing 🚨
Dependency nyc
Current Version 11.5.0
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

nyc is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • ci/circleci: node9 Your tests failed on CircleCI Details
  • ci/circleci: node8 Your tests passed on CircleCI! Details
  • ci/circleci: node6 Your tests passed on CircleCI! Details
  • continuous-integration/appveyor/branch AppVeyor build succeeded Details
  • ci/circleci: node4 Your tests passed on CircleCI! Details

Commits

The new version differs by 4 commits.

  • dd372f5 chore(release): 11.6.0
  • c6b30ba feat: allow usage of ignoreClassMethods from istanbul (#785)
  • 2d51562 fix: update to yargs version that allows extending configs with no extension (#790)
  • b4032ce fix: removes unused split-lines dependency. (#787)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Info sign and severity not matching with the level of the log

Environment details

  • OS: Ubuntu 18.04
  • Node.js version: 8.11.3
  • npm version: 6.4.1
  • @google-cloud/logging-bunyan version: 1.8.12

Steps to reproduce

  1. Implement the code as the docs say:
const loggingBunyan = new LoggingBunyan({
  projectId: projectId,
  keyFilename: 'keyfilenamePath.json'
});
logger = bunyan.createLogger({
        name: process.env.STACKDRIVER_CONTAINER_NAME,
        streams: [
          { stream: process.stdout, level: 'info' },
          loggingBunyan.stream('info')
        ]
});

logger.error('logger error initialized');
logger.info('logger info initialized');
  2. Run the application and check inside the Stackdriver GCP console:
    image003

logger.error will always have the blue info sign, but with a level of 50 and a severity of INFO.
So filtering the logs by level is impossible at the moment.

Thanks!

TypeScript compilation error related to @types/express

With a brand new module that only has @google-cloud/logging-bunyan and bunyan installed, I get this error message:

> tsc -p .

node_modules/@google-cloud/logging/build/src/middleware/express/make-middleware.d.ts:16:49 - error TS7016: Could not find a declaration file for module 'express'. '/Users/beckwith/Code/node_modules/express/index.js' implicitly has an 'any' type.
  Try `npm install @types/express` if it exists or add a new declaration (.d.ts) file containing `declare module 'express';`

16 import { Request, Response, NextFunction } from 'express';
                                                   ~~~~~~~~~

Installing @types/express made the error message go away. We should probably distribute those types along with the package.

An in-range update of @google-cloud/nodejs-repo-tools is breaking the build 🚨

☝️ Greenkeeper’s updated Terms of Service will come into effect on April 6th, 2018.

Version 2.2.4 of @google-cloud/nodejs-repo-tools was just published.

Branch Build failing 🚨
Dependency @google-cloud/nodejs-repo-tools
Current Version 2.2.3
Type devDependency

This version is covered by your current version range and after updating it in your project the build failed.

@google-cloud/nodejs-repo-tools is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/appveyor/branch Waiting for AppVeyor build to complete Details
  • ci/circleci: node8 Your tests passed on CircleCI! Details
  • ci/circleci: node9 Your tests passed on CircleCI! Details
  • ci/circleci: node6 Your tests passed on CircleCI! Details
  • ci/circleci: node4 Your tests passed on CircleCI! Details
  • ci/circleci: docs Your tests passed on CircleCI! Details
  • ci/circleci: lint Your tests failed on CircleCI Details

Commits

The new version differs by 1 commits.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

TS2507 Error for @google-cloud/logging dependency on 0.9.5

Getting the following TypeScript build error on the latest version of this library:

[Project Root]/node_modules/@google-cloud/logging/node_modules/google-gax/build/src/streaming.d.ts(47,42): error TS2507: Type '{ default: DuplexifyConstructor; obj(writable?: Writable | undefined, readable?: Readable | undefined, streamOptions?: DuplexOptions | undefined): Duplexify; }' is not a constructor function type.

Environment details

  • OS: Mac OSX Sierra 10.12.6
  • Node.js version: Typescript 3.2.4, Node 10.14.2
  • npm version: 6.4.1; yarn 1.12.3
  • @google-cloud/logging-bunyan version: 0.9.5

Steps to reproduce

  1. run tsc from command line
  2. observe error: node_modules/@google-cloud/logging/node_modules/google-gax/build/src/streaming.d.ts:47:42 - error TS2507: Type '{ default: DuplexifyConstructor; obj(writable?: Writable | undefined, readable?: Readable | undefined, streamOptions?: DuplexOptions | undefined): Duplexify; }' is not a constructor function type.

Thanks!

Log levels not accounted for in Log Viewer

Environment details

  • OS: Windows 10
  • Node.js version: 10.0.0
  • npm version: 6.0.1
  • @google-cloud/logging-bunyan version: 0.8.1

Steps to reproduce

import { LoggingBunyan } from '@google-cloud/logging-bunyan';
import * as bunyan from 'bunyan';

const cloudLogger: LoggingBunyan = new LoggingBunyan();

export default bunyan.createLogger({
  name: 'messaging',
  streams: [
    {stream: process.stdout, level: 'info'},
    cloudLogger.stream('info')
  ]
});

Use Application Default Credentials.
Deploy to App Engine flex environment.

The resulting logs in Log Viewer are all put under the 'any' log level (i.e. logger.info, logger.warn, logger.error are ignored). Error objects logged with logger.error do show up correctly in Error Reporting.

image

Synthesis failed for nodejs-logging-bunyan

Hello! Autosynth couldn't regenerate nodejs-logging-bunyan. 💔

Here's the output from running synth.py:

Cloning into 'working_repo'...
Switched to branch 'autosynth'
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
    main()
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
    last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
    text=True,
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
    with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'

Google internal developers can see the full log here.

Fix typo in README

The sample code in the README doesn't work on version 0.8.0 since it is using the old API surface.

NodeJS 10: Async bunyan logging crashes the cloud function if await is not used while logging

Issue:

  • A Node.js 10 Cloud Function runs but crashes while using Bunyan logs and publishing to another topic.

Environment details

  • OS: Google cloud function
  • Node.js version: v10.14.2
  • npm version: 6.4.1
  • @google-cloud/logging-bunyan version: 2.0.0

Steps to reproduce

  1. Create a Cloud Function using the Node.js 10 runtime with Pub/Sub as the trigger
  2. Use Bunyan logging and redirect the function's logs to Cloud Logging.
  3. Try to publish to a sample topic and use Bunyan logging right after publishing

Error reported:

Ignoring extra callback call
Function execution took 1852 ms, finished with status: 'crash'
{
 insertId: "000000-811aac99-2369-4f5b-801f-19364c10437c"  
 labels: {…}  
 logName: "projects/some-project/logs/cloudfunctions.googleapis.com%2Fcloud-functions"  
 receiveTimestamp: "2019-12-03T18:22:41.491846254Z"  
 resource: {
  labels: {
   function_name: "test-logger-lib-publish-duplicate"    
   project_id: "some-project"    
   region: "us-central1"    
  }
  type: "cloud_function"   
 }
 severity: "ERROR"  
 textPayload: "Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
    at GoogleAuth.getApplicationDefaultAsync (/srv/functions/node_modules/google-auth-library/build/src/auth/googleauth.js:161:19)
    at process._tickCallback (internal/process/next_tick.js:68:7)"  
 timestamp: "2019-12-03T18:22:40.664Z"  
 trace: "projects/some-project/traces/ce459298d8da7569d7fb40b07785d594"  
}

Sample Code:
index.js

const bunyan = require('bunyan');
const { LoggingBunyan } = require('@google-cloud/logging-bunyan');
const { PubSub } = require('@google-cloud/pubsub');

const loggingBunyan = new LoggingBunyan({
  logName: process.env.LOG_NAME
});

function getLogger(logginglevel) {
  let logger = bunyan.createLogger({
    name: process.env.FUNCTION_TARGET,
    level: logginglevel,

    streams: [
      loggingBunyan.stream()
    ]
  });
  logger.debug(`${process.env.FUNCTION_TARGET} :: ${process.env.LOG_NAME}`);
  return logger;
}

const pubsub = new PubSub({
  projectId: 'some-project'
});


exports.helloPubSub = async(event, context) => {
  console.log('I am in!');
  const logger = getLogger('debug');

  const test_topic = pubsub.topic('logger-topic');
  logger.debug('debug level test');

  const data = Buffer.from('interesting', 'utf8');
  await test_topic.publish(data);
  logger.error('error level test');
  logger.info('I am out');
};

package.json

  "name": "sample-pubsub",
  "version": "0.0.1",
  "dependencies": {
    "@google-cloud/pubsub": "^1.0.0",
    "bunyan": "^1.8.12",
    "@google-cloud/logging-bunyan": "^2.0.0"
  }
}

Note:

  • The same code works fine for nodejs 8 runtime! This issue is only with nodejs 10 runtime.
  • In the above index.js file, if I use await on the last logger.info('I am out') or all the logger calls, the function works like a charm!

Could anyone help me with what's wrong here?

Reference issues:

Unhandled exception from gRPC for request size

When trying to load test our Kubernetes stack by sending thousands of requests to our exposed endpoint, I get this error, which crashes the Node app:

{ Error: Request payload size exceeds the limit: 10485760 bytes. at /var/www/node_modules/grpc/src/client.js:554:15 code: 3, metadata: Metadata { _internal_repr: {} } }

It seems like the 1MB buffering that the @google-cloud/logging library is supposed to be doing is not happening.

Environment details

  • OS: Google Cloud Container Cluster - Ubuntu Latest Node image
  • Node.js version: Docker public image - node:6.10.3
  • npm version: 3.10.10
  • @google-cloud/logging-bunyan version: 0.7.0

Steps to reproduce

  1. Enable stackdriver logging on a Google Cloud Kubernetes app.
  2. Log thousands of requests as fast as possible.

Errors not reported to Stackdriver Error Reporting

Similar to @google-cloud/logging-winston issue #32, errors that are logged are not reported to the Error Reporting console unless a serviceContext.service is explicitly set by the user when configuring @google-cloud/logging-bunyan.

A default value should be used if nothing is specified so that the errors are visible.

logging: best practices for batching log entries

From @JoshFerge on June 21, 2017 2:50

Hello, I am working on implementing a logging solution, and have a few questions that I could use help with. Here is some context:

  • We would like to log every request that our nodejs server processes to stackdriver logging.
  • This request is in JSON format and contains many fields.
  • Each server we have processes a large amount of RPS.
  • Time and performance is crucial to the service.
  • The documentation states, "While you may write a single entry at a time, batching multiple entries together is preferred to avoid reaching the queries per second limit."
  • It appears it should be a common pattern to have a wrapper around the logging API and only write entries when a certain number of them is accrued.

I have searched through the docs, and I haven't found a solid answer on these questions:

  • Will gcloud-winston or gcloud-bunyan handle batching log entries?
  • Is there an example of batching log entries in a high throughput scenario?
  • What is the recommended amount of entries to batch before writing?

Thank you for your help.

Copied from original issue: googleapis/google-cloud-node#2402

deployment error using logging-bunyan

From @chadbr on March 22, 2017 23:1

Environment details

  • OS:CentOS 7

  • Node.js version:

  • npm version:
    Installing : 1:npm-3.10.10-1.6.9.4.2.el7.x86_64 3/4
    Installing : 1:nodejs-6.9.4-2.el7.x86_64 4/4

  • google-cloud-node version:
    "@google-cloud/logging-bunyan@^0.1.0":
    version "0.1.0"

Steps to reproduce

Download this repo: https://github.com/chadbr/google-bunyan-error

  • yarn install
  • yarn build
  • deploying will yield:
ERROR: (gcloud.app.deploy) Error Response: [9] 
Application startup error:
yarn start v0.21.3
$ node dist/index.js 
Listening on port 8080
Mar 22 22:34:57 google-bunyan-issue[14] INFO:  ::ffff:172.18.0.3 <-- GET /_ah/health HTTP/1.1 200 0 - Other 0.0 Other 0.0.0 48.062011 ms
  req: GET 10.128.0.7/_ah/health 200 48.062011ms - 0 bytes
node: symbol lookup error: /app/node_modules/grpc/src/node/extension_binary/grpc_node.node: undefined symbol: SSL_CTX_set_alpn_protos
error Command failed with exit code 127.

Copied from original issue: googleapis/google-cloud-node#2128

use of record field "labels" causes unhandled exception TypeError: .google.logging.v2.LogEntry.labels: object expected

Environment details

  • OS: OSX/Linux
  • Node.js version: 8.15
  • npm version: 6.4.1
  • @google-cloud/logging-bunyan version: 0.9.5

Steps to reproduce

  1. Use the labels field within the record object: log.info({labels: -1}, 'test')

it will crash with

TypeError: .google.logging.v2.LogEntry.labels: object expected
    at Type.LogEntry$fromObject [as fromObject] (eval at Codegen (/usr/src/app/node_modules/@protobufjs/codegen/index.js:50:33), <anonymous>:86:9)
    at Type.WriteLogEntriesRequest$fromObject [as fromObject] (eval at Codegen (/usr/src/app/node_modules/@protobufjs/codegen/index.js:50:33), <anonymous>:30:25)
    at serialize (/usr/src/app/node_modules/grpc/src/protobuf_js_6_common.js:71:23)
    at Object.final_requester.sendMessage (/usr/src/app/node_modules/grpc/src/client_interceptors.js:806:37)
    at InterceptingCall._callNext (/usr/src/app/node_modules/grpc/src/client_interceptors.js:419:43)
    at InterceptingCall.sendMessage (/usr/src/app/node_modules/grpc/src/client_interceptors.js:464:8)
    at InterceptingCall._callNext (/usr/src/app/node_modules/grpc/src/client_interceptors.js:428:12)
    at InterceptingCall.sendMessage (/usr/src/app/node_modules/grpc/src/client_interceptors.js:464:8)
    at ServiceClient.Client.makeUnaryRequest (/usr/src/app/node_modules/grpc/src/client.js:536:21)
    at ServiceClient.method_func (/usr/src/app/node_modules/grpc/src/client.js:950:43)
    at /usr/src/app/node_modules/@google-cloud/logging/build/src/v2/logging_service_v2_client.js:188:39
    at Task.timeoutFunc [as _apiCall] (/usr/src/app/node_modules/google-gax/build/src/api_callable.js:143:16)
    at Task.run (/usr/src/app/node_modules/google-gax/build/src/bundling.js:195:18)
    at BundleExecutor._runNow (/usr/src/app/node_modules/google-gax/build/src/bundling.js:421:14)
    at Timeout._timers.(anonymous function).setTimeout [as _onTimeout] (/usr/src/app/node_modules/google-gax/build/src/bundling.js:367:22)
    at ontimeout (timers.js:498:11)
    at tryOnTimeout (timers.js:323:5)
    at Timer.listOnTimeout (timers.js:290:5)

It looks like the protobuf for LogEntry defines the labels field as Object.<string, string>.

It seems like this is a reserved field, so the easiest fix here would probably be to update the documentation to avoid using this field (and potentially others?).

Thanks!

Auto-detect serviceContext.service on GCP

The Error Reporting API requires a serviceContext.service when reporting errors. Thus, if an error is logged by this library, but serviceContext.service is not specified, the error will not be reported to the Error Reporting console.

For applications deployed to GCP, the service should be automatically detected if possible.

See PR #122 and issue #121 for more information.

Logging: How to change BundleOptions

I hit a quota limit for Stackdriver ingestion requests recently, which is 1000 / second. I found this happened because I have a distributed service that was not bundling well because the GoogleCloud logging config sends logs every 50ms, per source here. This seems like an odd value to me, considering it only allows a maximum of 50 servers to be logging per project. I see this is defined as the GAX setting BundleOptions but the only GAX options that can be configured are CallOptions, per the @google-cloud/logging documentation.

Is there a way to change any of these options for the Bunyan logger? If not, do you have other recommendations to better batch logging requests in highly-distributed setups? Thanks!

Error to import dependency

I get the following error while trying to import the dependency:

 TypeError: setInterval(...).unref is not a function

      1 | const bunyan = require('bunyan');
    > 2 | const LoggingBunyan = require('@google-cloud/logging-bunyan');

Environment details

  • OS: macOS Mojave 10.14.6
  • Node.js version: 12.8.0
  • npm version: 6.10.2
  • @google-cloud/logging-bunyan version: 1.2.3
  1. I just try to import the dependency, and right after, I run the application

Async logging doesnt complete before function termination

When using the Bunyan wrapper for logging from a Google Cloud Function, there doesn't appear to be a way to tell when the logs have been flushed to Stackdriver. If you don't give the logger time to flush, the logs won't show up.

Environment details

  • OS: Google Cloud Functions
  • Node.js version: All
  • npm version: All
  • @google-cloud/logging-bunyan version: ^ 0.10.1

Steps to reproduce

const { LoggingBunyan } = require('@google-cloud/logging-bunyan')
const bunyan = require('bunyan')
const loggingBunyan = new LoggingBunyan()

const stackdriver = bunyan.createLogger({
  name: 'frea-tarballs',
  level: 'debug',
  streams: [
    loggingBunyan.stream()
  ]
})

exports.logtest = function logtest (message, event, cb) {
  stackdriver.error('logging!', { message, event })
  return cb()
}

If you run this, the logs don't show up.

If you change return cb() to return setTimeout(cb, 1000) the logs usually show up.

Cannot read property 'APP_ENGINE' of undefined

Creating the express middleware for logging causes the application to crash, as the variable that APP_ENGINE is being read from is undefined.

Environment details

  • OS: AppEngine Standard Env
  • Node.js version: 8.x.x
  • npm version: 6.4.1
  • @google-cloud/logging-bunyan version: 0.10.0

Steps to reproduce

  1. Create the express.middleware() from the logging library
  2. See crash due to value APP_ENGINE being read from some undefined object.

Stacktrace

TypeError: Cannot read property 'APP_ENGINE' of undefined
    at Object.<anonymous> (..../node_modules/@google-cloud/logging-bunyan/build/src/middleware/express.js:55:50)
    at Generator.next (<anonymous>)
    at fulfilled (..../node_modules/@google-cloud/logging-bunyan/build/src/middleware/express.js:4:58)
    at propagateAslWrapper (..../node_modules/async-listener/index.js:504:23)
    at ..../node_modules/async-listener/glue.js:188:31
    at ..../node_modules/async-listener/index.js:541:70
    at ..../node_modules/async-listener/glue.js:188:31
    at <anonymous>
    at process._tickDomainCallback (internal/process/next_tick.js:229:7)
    at process.fallback (..../node_modules/async-listener/index.js:565:15)

I tested that this error vanishes on 0.9.5. I'm not sure if there is some breaking change introduced in the new release which needs me to change how I'm setting up the middleware.

Making sure to follow these steps will guarantee the quickest resolution possible.

Thanks!

GKE Stackdriver UI: Only showing stdout events

Hi, maybe I got this wrong or something, but severity is sent as INFO when it should be ERROR for this case:

Screenshot 2018-08-16 at 12:24:01

Related Issue: #59

Environment details

  • OS: OSX El Capitan
  • Node.js version: 8.9.0
  • npm version: 6.0.1
  • @google-cloud/logging-bunyan version: v0.8.2

Steps to reproduce

// logger.js
const bunyan = require('bunyan')
const { LoggingBunyan } = require('@google-cloud/logging-bunyan')
const loggingBunyan = new LoggingBunyan()
const logger = bunyan.createLogger({
  name: 'my-service',
  streams: [
    { stream: process.stdout },
    loggingBunyan.stream(),
  ],
})

module.exports = {
  logger,
}

logger.error('ERROR')

Node 10: log entries appear as string in textPayload, not structured in jsonPayload

Environment details

  • OS: N/A
  • Node.js version: 10 (beta)
  • npm version: N/A
  • @google-cloud/logging-bunyan version: 0.10.1
  • bunyan version: 1.8.12

Steps to reproduce

  1. Create a new Google Cloud Function and select the Node.js 10 runtime
  2. Copy the code from the example in the README into the HTTP sample code:
const bunyan = require('bunyan');

// Imports the Google Cloud client library for Bunyan (Node 6+)
const {LoggingBunyan} = require('@google-cloud/logging-bunyan');

// Creates a Bunyan Stackdriver Logging client
const loggingBunyan = new LoggingBunyan();

// Create a Bunyan logger that streams to Stackdriver Logging
// Logs will be written to: "projects/YOUR_PROJECT_ID/logs/bunyan_log"
const logger = bunyan.createLogger({
  // The JSON payload of the log as it appears in Stackdriver Logging
  // will contain "name": "my-service"
  name: 'my-service',
  // log at 'info' and above
  level: 'info',
  streams: [
    // Log to the console
    {stream: process.stdout},
    // And log to Stackdriver Logging
    loggingBunyan.stream('info'),
  ],
});

/**
 * Responds to any HTTP request.
 *
 * @param {!express:Request} req HTTP request context.
 * @param {!express:Response} res HTTP response context.
 */
exports.helloWorld = (req, res) => {
  let message = req.query.message || req.body.message || 'Hello World!';
  res.status(200).send(message);

  // Writes some log entries
  logger.error('warp nacelles offline');
  logger.info('shields at 99%');
};
  3. Add dependencies to package.json:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "@google-cloud/logging-bunyan": "0.10.1",
    "bunyan": "1.8.12"
  }
}
  4. Trigger the cloud function through an HTTP GET request
  5. View the logs

Expected result

Screen Shot 2019-04-08 at 13 58 14

Actual result

Screen Shot 2019-04-08 at 13 59 38

Additional information

The issue does not appear when I set the runtime to Node 6, nor when I set the runtime to Node 8. The expected result screenshot is the output of Node 6.

Wrong example of logger.info({msg:..

Hi, apparently there's a wrong example in the README.md.

logger.info({
  msg: 'Bunyan log entry with custom trace field',
  [LoggingBunyan.LOGGING_TRACE_KEY]: 'custom-trace-value'
});

For me, it only worked with message:.

Cheers,

Express middleware does not nest stackdriver entries

Environment details

  • OS: Ubuntu
  • Node.js version: 10.x
  • yarn version: 1.10.1
  • @google-cloud/logging-bunyan version: "^0.9.4"

Steps to reproduce

  1. Add middleware to express pipeline.
  async function StartServer() {
    const { mw } = await lb.express.middleware({
      logName: 'assets',
    })

    const app = express()

    // Enable this to get the proper IP from X-Forwarded-For headers
    app.enable('trust proxy')

    // Do not send X-Powered-By header
    app.disable('x-powered-by')

    // Install the logging middleware. This ensures that a Bunyan-style `log`
    // function is available on the `request` object. Attach this as one of the
    // earliest middleware to make sure that log function is available in all the
    // subsequent middleware and routes.
    app.use(mw)

    /* routes here */

    return app
  }
  2. Log multiple entries against the request
    req.log.info({
      httpRequest: {
        status: statusCode,
        requestUrl: req.url,
        requestMethod: req.method,
        remoteIp: req.ip,
        userAgent: req.headers['user-agent'],
        responseSize: res.get('Content-Length') || res._contentLength || 0,
        latency: {
          seconds: Math.floor(nanos / 1e9),
          nanos: Math.floor(nanos % 1e9),
        },
      },
    }, path)
    req.log.info(req.headers)
    req.log.info(`Timings: ${timings}`)

Contrary to what the documentation states, the log entries are not nested. They do have the same trace and insertId.

screen shot 2018-10-09 at 12 53 33 pm

Synthesis failed for nodejs-logging-bunyan

Hello! Autosynth couldn't regenerate nodejs-logging-bunyan. 💔

Here's the output from running synth.py:

Cloning into 'working_repo'...
Switched to branch 'autosynth'
synthtool > You are running the synthesis script directly, this will be disabled in a future release of Synthtool. Please use python3 -m synthtool instead.
.circleci/config.yml
.circleci/npm-install-retry.js
.eslintignore
.eslintrc.yml
.github/CONTRIBUTING.md
.github/ISSUE_TEMPLATE/bug_report.md
.github/ISSUE_TEMPLATE/feature_request.md
.github/ISSUE_TEMPLATE/support_request.md
.jsdoc.js
.kokoro/common.cfg
.kokoro/continuous/node10/common.cfg
.kokoro/continuous/node10/test.cfg
.kokoro/continuous/node11/common.cfg
.kokoro/continuous/node11/test.cfg
.kokoro/continuous/node6/common.cfg
.kokoro/continuous/node6/test.cfg
.kokoro/continuous/node8/common.cfg
.kokoro/continuous/node8/docs.cfg
.kokoro/continuous/node8/lint.cfg
.kokoro/continuous/node8/samples-test.cfg
.kokoro/continuous/node8/system-test.cfg
.kokoro/continuous/node8/test.cfg
.kokoro/docs.sh
.kokoro/lint.sh
.kokoro/presubmit/node10/common.cfg
.kokoro/presubmit/node10/test.cfg
.kokoro/presubmit/node11/common.cfg
.kokoro/presubmit/node11/test.cfg
.kokoro/presubmit/node6/common.cfg
.kokoro/presubmit/node6/test.cfg
.kokoro/presubmit/node8/common.cfg
.kokoro/presubmit/node8/docs.cfg
.kokoro/presubmit/node8/lint.cfg
.kokoro/presubmit/node8/samples-test.cfg
.kokoro/presubmit/node8/system-test.cfg
.kokoro/presubmit/node8/test.cfg
.kokoro/presubmit/windows/common.cfg
.kokoro/presubmit/windows/test.cfg
.kokoro/samples-test.sh
.kokoro/system-test.sh
.kokoro/test.bat
.kokoro/test.sh
.kokoro/trampoline.sh
.nycrc
CODE_OF_CONDUCT.md
codecov.yaml
synthtool > Cleaned up 2 temporary directories.

Changed files:

On branch autosynth
nothing to commit, working tree clean
Traceback (most recent call last):
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 166, in <module>
    main()
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 151, in main
    commit_changes(pr_title)
  File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 95, in commit_changes
    subprocess.check_call(["git", "commit", "-m", message])
  File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['git', 'commit', '-m', '[CHANGE ME] Re-generated  to pick up changes in the API or client library generator.']' returned non-zero exit status 1.

Google internal developers can see the full log here.

unhandledRejection event using .stream( ) in .createLogger( )

Point
When the module experiences connection issues (probably on the Stackdriver service side), it throws unhandled rejections that can only be handled via process.on('unhandledRejection').

Generated unhandledRejection

FetchError: request to http://169.254.169.254/computeMetadata/v1/instance failed, reason: connect EHOSTDOWN 169.254.169.254:80 - Local (192.168.31.47:55341)
    at ClientRequest.<anonymous> (/Users/**/Documents/projects/**/node_modules/node-fetch/lib/index.js:1455:11)
at ClientRequest.emit (events.js:200:13)
at Socket.socketErrorListener (_http_client.js:402:9)
at Socket.emit (events.js:200:13)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at processTicksAndRejections (internal/process/task_queues.js:84:9) {
    message: 'request to http://169.254.169.254/computeMetadata/v1/instance failed, ' +
    'reason: connect EHOSTDOWN 169.254.169.254:80 - Local ' +
    '(192.168.31.47:55341)',
        type: 'system',
        errno: 'EHOSTDOWN',
        code: 'EHOSTDOWN',
        config: {
        url: 'http://169.254.169.254/computeMetadata/v1/instance',
            headers: { 'Metadata-Flavor': 'Google' },
        retryConfig: {
            noResponseRetries: 0,
                currentRetryAttempt: 0,
                retry: 3,
                retryDelay: 100,
                httpMethodsToRetry: [Array],
                statusCodesToRetry: [Array]
        },
        responseType: 'text',
            timeout: 3000,
            params: [Object: null prototype] {},
        paramsSerializer: [Function: paramsSerializer],
        validateStatus: [Function: validateStatus],
        method: 'GET'
    }
}

Start DATE/TIME
14:03 GMT+0, 14 September 2019

ENV

  • OS: MacOS 10.12.6 OR docker: node:current-alpine OR docker node:latest
  • Node.js version: 12.4.0 OR 10.15.3
  • npm version: 6.11.3
  • @google-cloud/logging-bunyan version: 1.2.3 OR 1.2.2 OR 1.2.1
  • location: Moscow, Russia OR Moscow, Russia + OpenVPN on FRA1 OR Kubernetes on FRA1 (DigitalOcean managed cluster)

Steps to reproduce

App code

const bunyan = require('bunyan');
const {LoggingBunyan} = require('@google-cloud/logging-bunyan');

const loggingBunyan = new LoggingBunyan({
  logName: config.APP_NAME,
  projectId: config.GOOGLE_STACKDRIVER_PROJECT_ID,  // credentials are ok
  keyFilename: config.GOOGLE_STACKDRIVER_KEYFILE,
});

logger = await bunyan.createLogger({
  name: config.APP_NAME,
  streams: [
    loggingBunyan.stream('info'),  // removing this line avoids the problem
  ],
});
