
k8s-sentry's People

Contributors

athak, flimzy, wichert


k8s-sentry's Issues

Missing fingerprints on events

I might try to resolve this myself, but events right now aren't fingerprinted at all. That means every node's unique error becomes a unique issue in Sentry, when what you really want is for every unique type of error to be one issue, with every pod experiencing that error bubbling up to the same issue.

There are a few approaches to this; the current sentry-kubernetes handles it fairly well.
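As a rough sketch of what that could look like (assuming the sentry-go SDK, and not reflecting k8s-sentry's actual code), the fingerprint could be built from the involved object's kind and the event reason rather than from anything pod-specific:

// Minimal sketch (not the project's actual code): fingerprint a Kubernetes
// event by the kind of the involved object and the event reason, so the same
// class of failure across many pods groups into one Sentry issue.
package main

import (
	"github.com/getsentry/sentry-go"
	corev1 "k8s.io/api/core/v1"
)

func reportEvent(hub *sentry.Hub, ev *corev1.Event) {
	hub.WithScope(func(scope *sentry.Scope) {
		// Group by "what went wrong" instead of "which pod it happened to".
		scope.SetFingerprint([]string{
			ev.InvolvedObject.Kind,
			ev.Reason, // e.g. "BackOff", "Unhealthy"
		})
		hub.CaptureMessage(ev.Message)
	})
}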

Report errors from pod's stderr stream

Hi! I found that k8s-sentry only logs operational issues (such as a failing readiness probe). However, I was hoping that messages printed to a pod's stderr stream would be sent to Sentry as well. In my understanding this would increase the utility of k8s-sentry a lot, though it might not be in the scope of this tool.

How would I need to extend this tool so that a pod's error messages, printed to stderr, can be reported as well?

And thanks a ton for this tool!
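For what it's worth, a hypothetical way to bolt this on (names are illustrative, not part of k8s-sentry) would be to stream a pod's log with client-go and forward lines to Sentry. Note that the Kubernetes log API interleaves stdout and stderr, so real code would still need a heuristic to pick out error lines:

// Hypothetical sketch: stream a pod's log with client-go and forward each
// line to Sentry. forwardPodLogs is an illustrative name, not existing code.
package main

import (
	"bufio"
	"context"

	"github.com/getsentry/sentry-go"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

func forwardPodLogs(ctx context.Context, cs kubernetes.Interface, namespace, pod string) error {
	req := cs.CoreV1().Pods(namespace).GetLogs(pod, &corev1.PodLogOptions{Follow: true})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()

	scanner := bufio.NewScanner(stream)
	for scanner.Scan() {
		// Real code would need a heuristic (log level, JSON field, etc.)
		// to decide which lines are errors worth sending.
		sentry.CaptureMessage(scanner.Text())
	}
	return scanner.Err()
}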

Add k8s context

Something I'm looking to evaluate and contribute back (in hopes that we can standardize on a more correct/performant/useful SDK) is adding additional context/index capabilities.

Contexts look like this:

{
  // event payload
  "contexts": {
    "kubernetes": {
       "key": "value"
    }
  }
}

Eventually these will get auto-indexed like tags do, but right now we still have to send both tags (for search) and contexts for display. It's basically a more flexible version of 'extra' (since it's just a namespaced 'extra').

I think for starters it'd be good to take every tag, plus the generic object metadata, and put it into the context.
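A rough sketch of that, assuming the sentry-go SDK (the metadata fields here are illustrative): send the same Kubernetes metadata both as tags, which are searchable today, and as a "kubernetes" context for display.

// Rough sketch, assuming the sentry-go SDK. attachKubernetesContext and its
// parameters are illustrative, not existing k8s-sentry code.
package main

import "github.com/getsentry/sentry-go"

func attachKubernetesContext(scope *sentry.Scope, namespace, pod, node string) {
	meta := map[string]interface{}{
		"namespace": namespace,
		"pod":       pod,
		"node":      node,
	}

	// Tags for search...
	scope.SetTag("namespace", namespace)
	scope.SetTag("pod", pod)

	// ...and a namespaced context for display. In recent sentry-go versions
	// SetContext takes a sentry.Context, i.e. a map[string]interface{}.
	scope.SetContext("kubernetes", meta)
}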

Ability to Customize Sentry Reporting

It would be great to have the ability to customize which events, or which types of events, are sent to Sentry from this library. For example, reporting only errors instead of warnings could help manage monthly Sentry usage, as could blacklisting certain warning types that do not impact application performance. We are currently using GKE, so we get a number of warnings that are not impacting application performance but are driving up our Sentry event usage.
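One possible shape for this, sketched against the sentry-go SDK and not an existing k8s-sentry option: a BeforeSend hook that drops anything below error level and anything whose reason is on a user-supplied denylist. The denylist and the "k8s_reason" tag are hypothetical.

// Hypothetical filtering sketch using sentry-go's BeforeSend hook.
package main

import "github.com/getsentry/sentry-go"

func initSentry(blockedReasons map[string]bool) error {
	return sentry.Init(sentry.ClientOptions{
		BeforeSend: func(event *sentry.Event, hint *sentry.EventHint) *sentry.Event {
			if event.Level != sentry.LevelError && event.Level != sentry.LevelFatal {
				return nil // drop warnings and below to save quota
			}
			if blockedReasons[event.Tags["k8s_reason"]] {
				return nil // drop noisy but harmless warning types
			}
			return event
		},
	})
}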

Helm chart

I'd love to have this as a Helm chart, if I submit a PR would you be up for it?

The Helm bits themselves are just a few more YAML files, but to make the chart installable it'll need to be packaged on CI and served up from GitHub Pages.

Bind logger

This is more of an open conversation, if it's agreeable, but you have two options for monitoring k8s:

  • use an existing project (e.g. i have one single project for a service)
  • use a new project just for k8s

Neither quite fits how we describe Sentry, with one project per service.

Anyway, I think it could be valuable for us to bind the 'logger' attribute to 'kubernetes', so if you do choose to use one project you can then easily filter down to Kubernetes-only events.
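A minimal sketch of that, assuming the sentry-go SDK (not existing k8s-sentry code): stamp every outgoing event with logger "kubernetes" so a shared project can be filtered with logger:kubernetes.

// Hypothetical sketch: set the logger attribute on every event via BeforeSend.
package main

import "github.com/getsentry/sentry-go"

func initSentryWithLogger() error {
	return sentry.Init(sentry.ClientOptions{
		BeforeSend: func(event *sentry.Event, hint *sentry.EventHint) *sentry.Event {
			event.Logger = "kubernetes"
			return event
		},
	})
}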

Failed CronJob runs get re-raised until cleaned up (and don't have message)

I found an odd issue: it seems that if there's a failed run of a CronJob, k8s-sentry will keep re-raising it as a Sentry issue roughly eight times an hour until the pod is removed.

The only workaround is to delete these manually:

kubectl delete pods --field-selector status.phase=Failed --all-namespaces

It also seems that the error message / reason is missing; the actual issue was the container exiting with exit code 1.
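One possible fix, sketched here as an illustration and not current k8s-sentry behaviour: remember the UID of every failed pod already reported, so a lingering Failed pod left behind by a CronJob is only raised to Sentry once.

// Illustrative deduplication sketch; failureDeduper is a hypothetical helper.
package main

import (
	"sync"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

type failureDeduper struct {
	mu   sync.Mutex
	seen map[types.UID]bool
}

// shouldReport returns true only the first time a given failed pod is seen.
func (d *failureDeduper) shouldReport(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodFailed {
		return false
	}
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen == nil {
		d.seen = make(map[types.UID]bool)
	}
	if d.seen[pod.UID] {
		return false
	}
	d.seen[pod.UID] = true
	return true
}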

