Kuberhealthy's Introduction


Kuberhealthy is a Kubernetes operator for synthetic monitoring and continuous process verification. Write your own tests in any language and Kuberhealthy will run them for you. It automatically creates Prometheus metrics and includes a simple JSON status page. Now part of the CNCF!


What is Kuberhealthy?

Kuberhealthy lets you continuously verify that your applications and Kubernetes clusters are working as expected. By creating a custom resource (a KuberhealthyCheck) in your cluster, you can easily enable various synthetic tests and get Prometheus metrics for them.

Kuberhealthy comes with lots of useful checks already available to ensure the core functionality of Kubernetes, but checks can be used to test anything you like. We encourage you to write your own check container in any language to test your own applications. It really is quick and easy!

Kuberhealthy serves the status of all checks on a simple JSON status page, a Prometheus metrics endpoint (at /metrics), and supports InfluxDB metric forwarding for integration into your choice of alerting solution.

Installation

Deployment

Kuberhealthy requires Kubernetes 1.16 or above.

Using Plain Ole' YAML

If you just want the rendered default specs without Helm, you can use the static flat file or the static flat file for Prometheus or even the static flat file for Prometheus Operator.

Here are the one-line installation commands for those same specs:

# If you don't use Prometheus:
kubectl create namespace kuberhealthy
kubectl apply -f https://raw.githubusercontent.com/kuberhealthy/kuberhealthy/master/deploy/kuberhealthy.yaml

# If you use Prometheus, but not with Prometheus Operator:
kubectl create namespace kuberhealthy
kubectl apply -f https://raw.githubusercontent.com/kuberhealthy/kuberhealthy/master/deploy/kuberhealthy-prometheus.yaml

# If you use Prometheus Operator:
kubectl create namespace kuberhealthy
kubectl apply -f https://raw.githubusercontent.com/kuberhealthy/kuberhealthy/master/deploy/kuberhealthy-prometheus-operator.yaml

Using Helm

kubectl create namespace kuberhealthy
helm repo add kuberhealthy https://kuberhealthy.github.io/kuberhealthy/helm-repos
helm install -n kuberhealthy kuberhealthy kuberhealthy/kuberhealthy

If you have Prometheus:

helm install --set prometheus.enabled=true -n kuberhealthy kuberhealthy kuberhealthy/kuberhealthy

If you have Prometheus via Prometheus Operator:

helm install --set prometheus.enabled=true --set prometheus.serviceMonitor.enabled=true -n kuberhealthy kuberhealthy kuberhealthy/kuberhealthy

Configure Service

After installation, Kuberhealthy will only be available from within the cluster (Type: ClusterIP) at the service URL kuberhealthy.kuberhealthy. To expose Kuberhealthy to clients outside of the cluster, you must edit the service kuberhealthy and set Type: LoadBalancer or otherwise expose the service yourself.
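For example, a couple of hedged options (exact commands depend on your environment and cloud provider):

# Quick local access without exposing the service:
kubectl -n kuberhealthy port-forward service/kuberhealthy 8080:80

# Or expose it externally (requires a cloud load balancer or equivalent):
kubectl -n kuberhealthy patch service kuberhealthy -p '{"spec": {"type": "LoadBalancer"}}'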

Edit Configuration Settings

You can edit the Kuberhealthy configmap as well and it will be automatically reloaded by Kuberhealthy. All configmap options are set to their defaults to make configuration easy.

kubectl edit -n kuberhealthy configmap kuberhealthy

See Configured Checks

You can see checks that are configured with kubectl -n kuberhealthy get khcheck. Check status can be accessed by the JSON status page endpoint, or via kubectl -n kuberhealthy get khstate.

Further Configuration

To configure Kuberhealthy after installation, see the configuration documentation.

Details on using the helm chart are documented here. The Helm installation of Kuberhealthy is automatically updated to use the latest Kuberhealthy release.

More installation options, including static yaml files, are available in the /deploy directory. These flat spec files reflect the most recent changes to Kuberhealthy (the master branch). Use these if you would like to test master branch updates.

Visualized

Here is how Kuberhealthy provisions and operates checker pods, step by step (a sample KuberhealthyCheck resource for this flow is sketched after the list):

  • An admin creates a KuberhealthyCheck resource that calls for a synthetic Kubernetes daemonset to be deployed and tested every 15 minutes. This will ensure that all nodes in the Kubernetes cluster can provision containers properly.
  • Kuberhealthy observes this new KuberhealthyCheck resource.
  • Kuberhealthy schedules a checker pod to manage the lifecycle of this check.
  • The checker pod creates a daemonset using the Kubernetes API.
  • The checker pod observes the daemonset and waits for all daemonset pods to become Ready.
  • The checker pod deletes the daemonset using the Kubernetes API.
  • The checker pod observes the daemonset being fully cleaned up and removed.
  • The checker pod reports a successful test result back to Kuberhealthy's API.
  • Kuberhealthy stores this check's state and makes it available to various metrics systems.
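
Here is a sketch of what such a KuberhealthyCheck resource might look like for the daemonset check. The image tag and timeout values are illustrative; check the daemonset check documentation for the exact spec:

apiVersion: comcast.github.io/v1
kind: KuberhealthyCheck
metadata:
  name: daemonset
  namespace: kuberhealthy
spec:
  runInterval: 15m
  timeout: 12m
  podSpec:
    containers:
      - name: daemonset
        image: kuberhealthy/daemonset-check:v3   # illustrative tag
        imagePullPolicy: IfNotPresent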

Included Checks

You can use any of the pre-made checks by simply enabling them. By default Kuberhealthy comes with several checks to test Kubernetes deployments, daemonsets, and DNS.

Some checks you can easily enable:

  • SSL Handshake Check - checks SSL certificate validity and warns when certs are about to expire.
  • CronJob Scheduling Failures - checks for events indicating that a CronJob has failed to create Job pods.
  • Image Pull Check - checks that an image can be pulled from an image repository.
  • Deployment Check - verifies that a fresh deployment can run, deploy multiple pods, pass traffic, do a rolling update (without dropping connections), and clean up successfully.
  • Daemonset Check - verifies that a daemonset can be created, fully provisioned, and torn down. This checks the full kubelet functionality of every node in your Kubernetes cluster.
  • Storage Provisioner Check - verifies that a pod with persistent storage can be configured on every node in your cluster.

Create Synthetic Checks for Your APIs

You can easily create synthetic tests to check your applications and APIs with real world use cases. This is a great way to be confident that your application functions as expected in the real world at all times.

Here is a full check example written in Go. Just implement doCheckStuff and you're off!

package main

import (
  "github.com/kuberhealthy/kuberhealthy/v2/pkg/checks/external/checkclient"
)

func main() {
  // Run your check logic. doCheckStuff is the function you implement.
  ok := doCheckStuff()

  // Report the result back to the Kuberhealthy API before the pod exits.
  if !ok {
    checkclient.ReportFailure([]string{"Test has failed!"})
    return
  }
  checkclient.ReportSuccess()
}

You can read more about how checks are configured and learn how to create your own check container. Checks can be written in any language and helpful clients for checks not written in Go can be found in the clients directory.
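
Once your check container is built and pushed, you register it by creating a KuberhealthyCheck resource that points at your image. A minimal sketch (the image, interval, and timeout below are placeholders):

apiVersion: comcast.github.io/v1
kind: KuberhealthyCheck
metadata:
  name: my-api-check
  namespace: kuberhealthy
spec:
  runInterval: 5m
  timeout: 2m
  podSpec:
    containers:
      - name: my-api-check
        image: registry.example.com/my-org/my-api-check:latest   # placeholder image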

Status Page

You can directly access the current test statuses by accessing the kuberhealthy.kuberhealthy HTTP service on port 80. The status page displays server status in the format shown below. The boolean OK field can be used to indicate global up/down status, while the Errors array will contain a list of all check error descriptions. Granular, per-check information, including how long the check took to run (Run Duration), the last time a check was run, and which Kuberhealthy pod ran that specific check, is available under the CheckDetails object.

{
    "OK": true,
    "Errors": [],
    "CheckDetails": {
        "kuberhealthy/daemonset": {
            "OK": true,
            "Errors": [],
            "RunDuration": "22.512278967s",
            "Namespace": "kuberhealthy",
            "LastRun": "2019-11-14T23:24:16.7718171Z",
            "AuthoritativePod": "kuberhealthy-67bf8c4686-mbl2j",
            "uuid": "9abd3ec0-b82f-44f0-b8a7-fa6709f759cd"
        },
        "kuberhealthy/deployment": {
            "OK": true,
            "Errors": [],
            "RunDuration": "29.142295647s",
            "Namespace": "kuberhealthy",
            "LastRun": "2019-11-14T23:26:40.7444659Z",
            "AuthoritativePod": "kuberhealthy-67bf8c4686-mbl2j",
            "uuid": "5f0d2765-60c9-47e8-b2c9-8bc6e61727b2"
        },
        "kuberhealthy/dns-status-internal": {
            "OK": true,
            "Errors": [],
            "RunDuration": "2.43940936s",
            "Namespace": "kuberhealthy",
            "LastRun": "2019-11-14T23:34:04.8927434Z",
            "AuthoritativePod": "kuberhealthy-67bf8c4686-mbl2j",
            "uuid": "c85f95cb-87e2-4ff5-b513-e02b3d25973a"
        },
        "kuberhealthy/pod-restarts": {
            "OK": true,
            "Errors": [],
            "RunDuration": "2.979083775s",
            "Namespace": "kuberhealthy",
            "LastRun": "2019-11-14T23:34:06.1938491Z",
            "AuthoritativePod": "kuberhealthy-67bf8c4686-mbl2j",
            "uuid": "a718b969-421c-47a8-a379-106d234ad9d8"
        }
    },
    "CurrentMaster": "kuberhealthy-7cf79bdc86-m78qr"
}
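
A quick way to view this page and the metrics endpoint from a workstation (a sketch; from inside the cluster you can hit the service directly):

kubectl -n kuberhealthy port-forward service/kuberhealthy 8080:80
curl http://localhost:8080/          # JSON status page
curl http://localhost:8080/metrics   # Prometheus metrics endpoint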

Contributing

If you're interested in contributing to this project:

  • Check out the Contributing Guide.
  • If you use Kuberhealthy in a production environment, add yourself to the list of Kuberhealthy adopters!
  • Check out open issues. If you're new to the project, look for the good first issue tag.
  • We're always looking for check contributions (either in suggestions or in PRs) as well as feedback from folks implementing Kuberhealthy locally or in a test environment.

Hermit

While working on Kuberhealthy, you can take advantage of the included Hermit dev environment to get Go & other tooling without having to install them separately on your local machine.

Just use the following command to activate the environment, and you're good to go:

. ./bin/activate-hermit

Monthly Community Meeting

If you would like to talk directly to the core maintainers to discuss ideas, code reviews, or other complex issues, we hold a Zoom meeting on the 24th of every month at 04:30 PM Pacific Time.

Kuberhealthy's People

Contributors

2infinitee, actions-user, adriananeci, ashutoshnirkhe, bavarianbidi, bbkgh, blame19, brahminikatta, dependabot[bot], geol86, hungrylion2019, ihoegen, integrii, isaaguilar, jdowni000, jkulesa, jonnydawg, joshulyne, kristakhare, marksrobinson, mikeinton, nissessenap, qqshfox, raidancampbell, rawlingsj, rjacks161, sheikhrachel, shillasaebi, u5surf, zjhans


Kuberhealthy's Issues

Add podAnnotations section to helm chart

If a podAnnotations section is added to the helm chart, it is possible to add annotations to the kuberhealthy pods, which can be used for setting up external Prometheus scrapers (e.g. Datadog).

For example:

  podAnnotations:
    ad.datadoghq.com/kuberhealthy.check_names: '["prometheus"]'
    ad.datadoghq.com/kuberhealthy.init_configs: '[{}]'
    ad.datadoghq.com/kuberhealthy.instances: '[{"prometheus_url":
      "http://%%host%%:8080/metrics", "namespace": "kuberhealthy", "metrics":
      ["kuberhealthy_*"]}]'
    prometheus.io/port: "8080"
    prometheus.io/scrape: "true"

Document clustering methodology

Kuberhealthy uses a pretty sweet clustering system that does not require a database. CRDs are used along with the position of each pod in the pod listing (which is always alphabetical): whichever kuberhealthy pod comes up first in the listing acts as master. A quick readme file that describes this would be useful for contributors and others looking for an easy method of clustering things.
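
The idea, roughly sketched in Go (this is an illustration of the approach, not the actual masterCalculation code; the label selector and recent client-go signatures are assumptions):

package clustering

import (
  "context"
  "fmt"
  "sort"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/client-go/kubernetes"
)

// calculateMaster lists the kuberhealthy pods and treats the alphabetically first
// Running pod as the master. No database or lock is needed because every instance
// arrives at the same answer from the same pod listing.
func calculateMaster(ctx context.Context, client kubernetes.Interface) (string, error) {
  pods, err := client.CoreV1().Pods("kuberhealthy").List(ctx, metav1.ListOptions{
    LabelSelector: "app=kuberhealthy", // assumed label
  })
  if err != nil {
    return "", err
  }

  var names []string
  for _, p := range pods.Items {
    if p.Status.Phase == corev1.PodRunning {
      names = append(names, p.Name)
    }
  }
  if len(names) == 0 {
    return "", fmt.Errorf("no running kuberhealthy pods found")
  }

  sort.Strings(names)
  return names[0], nil // the first pod in the listing wins
}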

Tag and release 0.1.1

We need 0.1.1 released. This means we update the helm chart version and image URL, update the helm chart archive with helm package, then update the README.md to point to the new helm package download URL.

Daemonset checker detects pods that don't clean up after the daemonset is deleted

Kuberhealthy routinely makes a daemonset, then, once all the pods in the daemonset are in the 'Ready' state, Kuberhealthy deletes the daemonset and waits for all the pods to clean up.

It should be impossible for the daemonset to be deleted while its pods stay behind. However, this is exactly what occurs: the daemonset finishes removal, but several daemonset checker pods remain.

Steps To Reproduce

  • Run kuberhealthy for a long time
  • Watch daemonset checker alerts

Expected behavior
The daemonset checker pods are always cleaned up and a daemonset is never deleted before all of its pods are deleted.


Create helm chart for Kuberhealthy 2

The current helm chart enables all checks. It should be possible to pass in flags that disable individual checks. For example, if you only wanted the daemonset checker to run, you could pass in helm chart flags that disable the other checks.

This issue has a few items to complete:

  • add input flags to helm chart (in github.com/helm/charts/stable/kuberhealthy)
  • document new helm chart flags on the helm chart readme
  • document the flags used to disable each check in the README.md file where each check is defined

Setup TravisCI

Set up TravisCI so that master builds are pushed to the :unstable tag on Quay.

Improve Daemonset Checking with DNS lookups and API calls

Deploying a pod to each node is a good start, but it would be nice if that pod also validated that it could communicate with the Kubernetes API and resolve DNS. It is possible for nodes to be online but with corrupted routing tables or kube-proxy configs. Better synthetic checks should reveal issues like this.

Let's create and publish a new container with an automated build pipeline. The container will simply query the Kubernetes API for some basic metadata: just enough to ensure that kubernetes.default resolves and TCP connections to API servers work. Then, we modify the daemonset checker to build this pod and only start removal once all pods have come online AND completed their checks successfully. Maybe we use the health check command to indicate when the pod is ready.
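
A rough sketch of what such a node-local check container could do (illustrative only, not the shipped checker; assumes a recent client-go):

package main

import (
  "fmt"
  "net"
  "os"

  "k8s.io/client-go/kubernetes"
  "k8s.io/client-go/rest"
)

func main() {
  // Verify that cluster DNS works from this node.
  if _, err := net.LookupHost("kubernetes.default"); err != nil {
    fmt.Println("DNS lookup of kubernetes.default failed:", err)
    os.Exit(1)
  }

  // Verify that a TCP connection to the API server works by fetching basic metadata.
  cfg, err := rest.InClusterConfig()
  if err != nil {
    fmt.Println("could not load in-cluster config:", err)
    os.Exit(1)
  }
  client, err := kubernetes.NewForConfig(cfg)
  if err != nil {
    fmt.Println("could not build client:", err)
    os.Exit(1)
  }
  version, err := client.Discovery().ServerVersion()
  if err != nil {
    fmt.Println("API server query failed:", err)
    os.Exit(1)
  }

  fmt.Println("node networking looks healthy; API server version:", version.String())
}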

We had a production-impacting issue this week at my job due to a single node in the cluster having incomplete routing tables from Calico and/or kube-proxy. This caused anything routed through the bad node (or built there) to fail to resolve DNS, among other bad things.

Include Namespace as Part of Check Details

Describe the solution you would like to see happen

This proposal is to include the check's namespace as part of the CheckDetails, which in turn goes into the State.

The main benefit to including the namespace as part of the CheckDetails is that it allows for more fine-grained metrics.

By having the namespace as part of CheckDetails, it is possible to include namespace as a label in the prometheus metric, allowing for greater flexibility in dashboards.

How do you propose we build it?
To do this, the CheckDetails struct would need to be updated to include the check namespace, possibly even include the name. These fields would then become part of the CRD when it's created.
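
A sketch of what the updated struct might look like; the existing field names mirror the status page JSON shown earlier, and the new fields are the proposal (exact types and tags are assumptions):

// CheckDetails describes the last known state of a single check.
type CheckDetails struct {
  OK               bool      `json:"OK"`
  Errors           []string  `json:"Errors"`
  RunDuration      string    `json:"RunDuration"`
  Namespace        string    `json:"Namespace"` // proposed: the namespace the check ran in
  Name             string    `json:"Name"`      // proposed: possibly include the check name too
  LastRun          time.Time `json:"LastRun"`
  AuthoritativePod string    `json:"AuthoritativePod"`
  CurrentUUID      string    `json:"uuid"`
}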

panic: runtime error: invalid memory address or nil pointer dereference

Describe the bug
POD fails with:

kubectl -n kuberhealthy logs kuberhealthy-7868b9cd7b-6s42h 
time="2019-02-19T18:09:48Z" level=info msg="Startup Arguments: [/app/kuberhealthy]"
time="2019-02-19T18:09:48Z" level=info msg="Starting web services on port :8080"
time="2019-02-19T18:09:48Z" level=info msg="I am now master. Starting checks."
time="2019-02-19T18:09:48Z" level=info msg="Became master. Starting checks."
time="2019-02-19T18:09:48Z" level=info msg="Running check: PodStatusChecker namespace kube-system"
time="2019-02-19T18:09:48Z" level=info msg="Running check: DaemonSetChecker"
time="2019-02-19T18:09:48Z" level=info msg="Running check: PodRestartChecker namespace kube-system"
time="2019-02-19T18:09:48Z" level=info msg="Running check: ComponentStatusChecker"
time="2019-02-19T18:09:48Z" level=info msg="Setting state of check ComponentStatusChecker to true []"
time="2019-02-19T18:09:48Z" level=info msg="Searching for unique taints on the cluster."
time="2019-02-19T18:09:48Z" level=info msg="Found taints to tolerate: [{dedicated  infra NoSchedule <nil>}]"
time="2019-02-19T18:09:48Z" level=info msg="Generating daemon set kubernetes spec."
time="2019-02-19T18:09:48Z" level=info msg="Deploying daemon set with tolerations:  [{dedicated  infra NoSchedule <nil>}]"
time="2019-02-19T18:09:48Z" level=info msg="DaemonSetChecker removing 0 daemonset pods"
time="2019-02-19T18:09:48Z" level=info msg="DaemonSetChecker removing daemonset"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xf77cb1]

goroutine 39 [running]:
main.(*Kuberhealthy).setCheckExecutionError(0xc00025e6e0, 0x11b0b5a, 0x10, 0x0, 0x0)
        /go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:63 +0x211
main.(*Kuberhealthy).runCheck(0xc00025e6e0, 0xc0003dc0e0, 0x1302f20, 0xc000247000)
        /go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:242 +0x366
created by main.(*Kuberhealthy).StartChecks
        /go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:123 +0xf9

Steps To Reproduce

  • install with the helm chart

Expected behavior
Should run without failure with default settings.

Screenshots
N/A

Versions

  • Cluster OS: RHEL 7.6
  • Kubernetes Version: v1.12.4+icp-ee
  • Kuberhealthy: 1.0.0


Index out of range

Describe the bug
Kuberhealthy panicked and died because of an index out of range error in the master calculation. It looks like PodList is empty.

Expected behavior
Kuberhealthy to not panic

Versions

  • Kubernetes Version: 1.9.6

Additional context
Logs:

time="2018-07-19T20:03:32Z" level=info msg="Startup Arguments: [/app/kuberhealthy]"
time="2018-07-19T20:03:32Z" level=info msg="Starting web services on port :8080"
panic: runtime error: index out of range

goroutine 7 [running]:
github.com/Comcast/kuberhealthy/masterCalculation.CalculateMaster(0xc420374400, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/Comcast/kuberhealthy/masterCalculation/masterCalculation.go:78 +0x41e
github.com/Comcast/kuberhealthy/masterCalculation.IAmMaster(0xc420374400, 0x1f, 0xc420374400, 0x0)
	/go/src/github.com/Comcast/kuberhealthy/masterCalculation/masterCalculation.go:92 +0x4f
main.(*Kuberhealthy).masterStatusMonitor(0xc4203bca00, 0xc420040060, 0xc4200400c0)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:167 +0xc5
created by main.(*Kuberhealthy).Start
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:89 +0xa9

Pod restart is throwing false alarms

It appears that the pod restart checker is triggering on pods that have not restarted five times in the last hour. This only happens once a pod has, at some earlier point, actually restarted five times. Likely the restart counter is not being purged correctly.


Add hard pod anti-affinity to specs

We should avoid having multiple Kuberhealthy instances on a single node. This makes the system prone to failure when a single node is lost. Pod anti-affinity should take care of this.

Implement project vendoring

We should have vendoring so that our builds are predictable.

We should strongly consider vgo. If we have to do this before vgo is ready, I think the next best thing is dep. I have seen problems with dep against Kubernetes packages in the past, though.

v1.0.0 Release Checklist

  • Update install instructions in issue #84
  • Merge unstable into master
  • Add a v1.0.0 tag to quay.io
  • Update helm chart to new release image
  • Merge master into the gh-pages branch to update the readme
  • Tag a v1.0.0 release on github

Daemonsets are leaving pods around, which causes false alarms for daemonset checker

The daemonset checker sometimes deletes the daemonset, yet pods from that daemonset remain forever. This looks like a race inherent in Kubernetes where daemonset pods are not always cleaned up when a daemonset is deleted.

To reproduce, run the daemonset checker for a long time. Every so often it will alert on pods not being removed. Checking the daemonset reveals it has already been deleted.

I would expect that kubernetes will always remove all daemonset pods when a daemonset is removed.


We will probably need to repeatedly issue kill commands directly to daemonset pods. For bonus points, we could take this to the Kubernetes project and help find the race condition.

CRD concurrent map access

Describe the bug
It seems like CRDs trigger an underlying concurrent map access problem within the Kubernetes API machinery.

Versions

  • Kubernetes Version: 1.9.7
  • Kuberhealthy Release or build [e.g. 0.1.5 or 235]

Stack:

time="2018-07-22T09:42:38Z" level=info msg="Setting state of check PodRestartChecker namespace kube-system to true []"
time="2018-07-22T09:42:38Z" level=info msg="Setting state of check PodRestartChecker namespace datadog-agent to true []"
time="2018-07-22T09:42:38Z" level=info msg="Setting state of check PodStatusChecker namespace openshift-infra to true []"
time="2018-07-22T09:42:38Z" level=info msg="Setting state of check PodStatusChecker namespace kube-system to true []"
time="2018-07-22T09:42:38Z" level=info msg="Setting state of check PodRestartChecker namespace openshift-infra to true []"
fatal error: concurrent map read and map write
goroutine 28 [running]:
runtime.throw(0x1132123, 0x21)
	/usr/local/go/src/runtime/panic.go:619 +0x81 fp=0xc420d91218 sp=0xc420d911f8 pc=0x42b491
runtime.mapaccess2(0xfdbdc0, 0xc4202db020, 0xc420d912d8, 0xc420337438, 0x1061b01)
	/usr/local/go/src/runtime/hashmap.go:409 +0x225 fp=0xc420d91260 sp=0xc420d91218 pc=0x409345
k8s.io/apimachinery/pkg/runtime.(*Scheme).ObjectKinds(0xc42026b0a0, 0x1207080, 0xc4200d36c0, 0x2a, 0x25, 0xffffffffffffffff, 0xffffffffffffffff, 0x1201cc0, 0xc4208c0b70)
	/go/src/k8s.io/apimachinery/pkg/runtime/scheme.go:260 +0x25d fp=0xc420d91338 sp=0xc420d91260 pc=0x7279ed
k8s.io/apimachinery/pkg/runtime.(*parameterCodec).EncodeParameters(0xc4200d2e40, 0x1207080, 0xc4200d36c0, 0x1123bf6, 0x11, 0x111b501, 0x2, 0x111b32c, 0x0, 0x1842a90)
	/go/src/k8s.io/apimachinery/pkg/runtime/codec.go:178 +0x62 fp=0xc420d91430 sp=0xc420d91338 pc=0x716e72
k8s.io/client-go/rest.(*Request).SpecificallyVersionedParams(0xc42065b380, 0x1207080, 0xc4200d36c0, 0x1207540, 0xc4200d2e40, 0x1123bf6, 0x11, 0x111b501, 0x2, 0xc42065b380)
	/go/src/k8s.io/client-go/rest/request.go:327 +0xaf fp=0xc420d91538 sp=0xc420d91430 pc=0xd16b0f
k8s.io/client-go/rest.(*Request).VersionedParams(0xc42065b380, 0x1207080, 0xc4200d36c0, 0x1207540, 0xc4200d2e40, 0xf921e0)
	/go/src/k8s.io/client-go/rest/request.go:320 +0x83 fp=0xc420d91598 sp=0xc420d91538 pc=0xd16a33
github.com/Comcast/kuberhealthy/khstatecrd.(*KuberhealthyStateClient).Get(0xc420886b20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x111d592, 0x8, ...)
	/go/src/github.com/Comcast/kuberhealthy/khstatecrd/functions.go:70 +0x168 fp=0xc420d91678 sp=0xc420d91598 pc=0xd4f5a8
main.ensureCRDExists(0xc420024810, 0x2a, 0xc420886b20, 0x2, 0xc420047a60)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/crd.go:77 +0x187 fp=0xc420d91ba8 sp=0xc420d91678 pc=0xee5d17
main.(*Kuberhealthy).storeCheckState(0xc420425360, 0xc420024810, 0x2a, 0x1, 0x18717c0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:255 +0xb8 fp=0xc420d91c70 sp=0xc420d91ba8 pc=0xee8458
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d43f0, 0x121dce0, 0xc4204345c0)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:237 +0x6a5 fp=0xc420d91fc0 sp=0xc420d91c70 pc=0xee8035
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc420d91fc8 sp=0xc420d91fc0 pc=0x457fe1
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 1 [IO wait]:
internal/poll.runtime_pollWait(0x7f04e193df00, 0x72, 0x0)
	/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc420432e18, 0x72, 0xc4200d2000, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc420432e18, 0xffffffffffffff00, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Accept(0xc420432e00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:372 +0x1a8
net.(*netFD).accept(0xc420432e00, 0xc420c8ab20, 0xc4200c7c20, 0x402c58)
	/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc42000e720, 0xc4200c7c50, 0x401bb7, 0xc420c8ab20)
	/usr/local/go/src/net/tcpsock_posix.go:136 +0x2e
net.(*TCPListener).AcceptTCP(0xc42000e720, 0xc4200c7c98, 0xc4200c7ca0, 0x18)
	/usr/local/go/src/net/tcpsock.go:246 +0x49
net/http.tcpKeepAliveListener.Accept(0xc42000e720, 0x11989a0, 0xc420c8aaa0, 0x1218b60, 0xc42044a5d0)
	/usr/local/go/src/net/http/server.go:3216 +0x2f
net/http.(*Server).Serve(0xc420415520, 0x1216960, 0xc42000e720, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2770 +0x1a5
net/http.(*Server).ListenAndServe(0xc420415520, 0xc420415520, 0x2)
	/usr/local/go/src/net/http/server.go:2711 +0xa9
net/http.ListenAndServe(0x7ffc7470d6ab, 0x5, 0x0, 0x0, 0xc420048800, 0x4348d4)
	/usr/local/go/src/net/http/server.go:2969 +0x7a
main.(*Kuberhealthy).StartWebServer(0xc420425360)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:288 +0x166
main.main()
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/main.go:156 +0x5fd
goroutine 5 [chan receive]:
github.com/golang/glog.(*loggingT).flushDaemon(0x18532a0)
	/go/src/github.com/golang/glog/glog.go:882 +0x8b
created by github.com/golang/glog.init.0
	/go/src/github.com/golang/glog/glog.go:410 +0x203
goroutine 6 [syscall, 230 minutes]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 8 [chan receive, 230 minutes]:
main.listenForInterrupts()
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/main.go:163 +0xb7
created by main.main
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/main.go:100 +0x4a
goroutine 19 [select, 230 minutes, locked to thread]:
runtime.gopark(0x1199000, 0x0, 0x111c592, 0x6, 0x18, 0x1)
	/usr/local/go/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc42009ef50, 0xc4203e8060)
	/usr/local/go/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
	/usr/local/go/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1
goroutine 9 [select, 230 minutes]:
main.(*Kuberhealthy).Start(0xc420425360)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:93 +0x131
created by main.main
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/main.go:153 +0x5ed
goroutine 34 [chan receive]:
main.(*Kuberhealthy).masterStatusMonitor(0xc420425360, 0xc420456000, 0xc420456060)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:193 +0x14d
created by main.(*Kuberhealthy).Start
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:89 +0xa9
goroutine 21 [chan receive, 4 minutes]:
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d4000, 0x121dc20, 0xc42043e300)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:241 +0x6ce
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 85 [IO wait]:
internal/poll.runtime_pollWait(0x7f04e193de30, 0x72, 0xc4201cb850)
	/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc42003a318, 0x72, 0xffffffffffffff00, 0x1202fe0, 0x17f16f8)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc42003a318, 0xc420828000, 0x8000, 0x8000)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc42003a300, 0xc420828000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc42003a300, 0xc420828000, 0x8000, 0x8000, 0xc4201cba30, 0x80c802, 0x111b9bd)
	/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc420190070, 0xc420828000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:176 +0x6a
crypto/tls.(*block).readFromUntil(0xc4200f6ff0, 0x7f04e18ea800, 0xc420190070, 0x5, 0xc420190070, 0x10000000111b9bd)
	/usr/local/go/src/crypto/tls/conn.go:493 +0x96
crypto/tls.(*Conn).readRecord(0xc42003c700, 0x1199017, 0xc42003c820, 0x10)
	/usr/local/go/src/crypto/tls/conn.go:595 +0xe0
crypto/tls.(*Conn).Read(0xc42003c700, 0xc4203d6000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:1156 +0x100
bufio.(*Reader).Read(0xc4200d64e0, 0xc420258498, 0x9, 0x9, 0x3, 0xc4201cbc70, 0x405985)
	/usr/local/go/src/bufio/bufio.go:216 +0x238
io.ReadAtLeast(0x1200880, 0xc4200d64e0, 0xc420258498, 0x9, 0x9, 0x9, 0xc420925880, 0xc4201cbd30, 0x4056b7)
	/usr/local/go/src/io/io.go:309 +0x86
io.ReadFull(0x1200880, 0xc4200d64e0, 0xc420258498, 0x9, 0x9, 0x0, 0x0, 0x0)
	/usr/local/go/src/io/io.go:327 +0x58
k8s.io/apimachinery/vendor/golang.org/x/net/http2.readFrameHeader(0xc420258498, 0x9, 0x9, 0x1200880, 0xc4200d64e0, 0x0, 0x0, 0x1, 0xc4201cbdf8)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/frame.go:237 +0x7b
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc420258460, 0xc4203ce780, 0x0, 0x0, 0x0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/frame.go:492 +0xa4
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc4201cbfb0, 0x1197bf8, 0xc4204997b0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:1428 +0x8e
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc4201101a0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:1354 +0x76
created by k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:579 +0x651
goroutine 22 [select]:
github.com/Comcast/kuberhealthy/checks/podRestarts.(*Checker).Run(0xc420434400, 0xc420048700, 0x2, 0xc4203b12f0)
	/go/src/github.com/Comcast/kuberhealthy/checks/podRestarts/podRestarts.go:78 +0x1b9
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d40e0, 0x121dc80, 0xc420434400)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:220 +0x190
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 23 [select]:
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc4201101a0, 0xc420cca700, 0x0, 0x0, 0x0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:879 +0x809
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc420351680, 0xc420cca700, 0x0, 0x40bf80, 0x111bf00, 0xc420de6b00)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:351 +0x156
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(0xc420351680, 0xc420cca700, 0x0, 0xc42002cc88, 0xc420028ab0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:313 +0x3a
k8s.io/apimachinery/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip(0xc420351680, 0xc420cca700, 0xc420de6b00, 0x5, 0xc42037f4c8)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/configure_transport.go:75 +0x39
net/http.(*Transport).RoundTrip(0xc4203940f0, 0xc420cca700, 0xd, 0xc420c00380, 0x376)
	/usr/local/go/src/net/http/transport.go:380 +0xc36
k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc420e59780, 0xc420cca700, 0xa, 0xc420cc4680, 0x34)
	/go/src/k8s.io/client-go/transport/round_trippers.go:284 +0x17c
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc420e597a0, 0xc420cca600, 0xc420e597a0, 0x0, 0x0)
	/go/src/k8s.io/client-go/transport/round_trippers.go:162 +0x10c
net/http.send(0xc420cca500, 0x1201780, 0xc420e597a0, 0x0, 0x0, 0x0, 0xc42055e348, 0x0, 0xc4209d5230, 0x1)
	/usr/local/go/src/net/http/client.go:252 +0x185
net/http.(*Client).send(0xc420028870, 0xc420cca500, 0x0, 0x0, 0x0, 0xc42055e348, 0x0, 0x1, 0x0)
	/usr/local/go/src/net/http/client.go:176 +0xfa
net/http.(*Client).Do(0xc420028870, 0xc420cca500, 0x0, 0x7e, 0x0)
	/usr/local/go/src/net/http/client.go:615 +0x28d
k8s.io/client-go/rest.(*Request).request(0xc4207fe000, 0xc4209d54d0, 0x0, 0x0)
	/go/src/k8s.io/client-go/rest/request.go:687 +0x34b
k8s.io/client-go/rest.(*Request).Do(0xc4207fe000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/client-go/rest/request.go:759 +0xb7
github.com/Comcast/kuberhealthy/khstatecrd.(*KuberhealthyStateClient).Get(0xc420e59820, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x111d592, 0x8, ...)
	/go/src/github.com/Comcast/kuberhealthy/khstatecrd/functions.go:71 +0x176
main.ensureCRDExists(0xc420cc06c0, 0x29, 0xc420e59820, 0x2, 0xc420047a60)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/crd.go:77 +0x187
main.(*Kuberhealthy).storeCheckState(0xc420425360, 0xc420cc06c0, 0x29, 0x1, 0x18717c0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:255 +0xb8
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d4150, 0x121dc80, 0xc420434440)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:237 +0x6a5
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 24 [runnable]:
k8s.io/client-go/rest.buildUserAgent(0x7ffc7470d68e, 0xc, 0x112b26b, 0x6, 0x111bf5a, 0x5, 0x111be38, 0x5, 0x111ee36, 0x7, ...)
	/go/src/k8s.io/client-go/rest/config.go:296 +0x124
k8s.io/client-go/rest.DefaultKubernetesUserAgent(0x1216560, 0xc420c73980)
	/go/src/k8s.io/client-go/rest/config.go:301 +0x187
github.com/Comcast/kuberhealthy/khstatecrd.Client(0x1123bf6, 0x11, 0x111b501, 0x2, 0xc420047a60, 0xd, 0x80, 0xc4207c66f0, 0x2b)
	/go/src/github.com/Comcast/kuberhealthy/khstatecrd/api.go:49 +0x1c7
main.(*Kuberhealthy).storeCheckState(0xc420425360, 0xc4207c66f0, 0x2b, 0x1, 0x18717c0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:249 +0x75
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d41c0, 0x121dc80, 0xc4204344c0)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:237 +0x6a5
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 25 [select]:
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc4201101a0, 0xc4206ace00, 0x0, 0x0, 0x0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:879 +0x809
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc420351680, 0xc4206ace00, 0x0, 0x40bf80, 0x111bf00, 0xc420102b80)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:351 +0x156
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(0xc420351680, 0xc4206ace00, 0x0, 0xc420833788, 0xc42054fd10)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:313 +0x3a
k8s.io/apimachinery/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip(0xc420351680, 0xc4206ace00, 0xc420102b80, 0x5, 0xc42037f4c8)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/configure_transport.go:75 +0x39
net/http.(*Transport).RoundTrip(0xc4203940f0, 0xc4206ace00, 0xd, 0xc420ce6380, 0x376)
	/usr/local/go/src/net/http/transport.go:380 +0xc36
k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc420539400, 0xc4206ace00, 0xa, 0xc4208e5bc0, 0x34)
	/go/src/k8s.io/client-go/transport/round_trippers.go:284 +0x17c
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc420539420, 0xc4206acd00, 0xc420539420, 0x0, 0x0)
	/go/src/k8s.io/client-go/transport/round_trippers.go:162 +0x10c
net/http.send(0xc4206acc00, 0x1201780, 0xc420539420, 0x0, 0x0, 0x0, 0xc42000ebc8, 0x0, 0xc4209d9230, 0x1)
	/usr/local/go/src/net/http/client.go:252 +0x185
net/http.(*Client).send(0xc42053ad50, 0xc4206acc00, 0x0, 0x0, 0x0, 0xc42000ebc8, 0x0, 0x1, 0x0)
	/usr/local/go/src/net/http/client.go:176 +0xfa
net/http.(*Client).Do(0xc42053ad50, 0xc4206acc00, 0x0, 0x7c, 0x0)
	/usr/local/go/src/net/http/client.go:615 +0x28d
k8s.io/client-go/rest.(*Request).request(0xc420c1b680, 0xc4209d94d0, 0x0, 0x0)
	/go/src/k8s.io/client-go/rest/request.go:687 +0x34b
k8s.io/client-go/rest.(*Request).Do(0xc420c1b680, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/client-go/rest/request.go:759 +0xb7
github.com/Comcast/kuberhealthy/khstatecrd.(*KuberhealthyStateClient).Get(0xc4205394a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x111d592, 0x8, ...)
	/go/src/github.com/Comcast/kuberhealthy/khstatecrd/functions.go:71 +0x176
main.ensureCRDExists(0xc420ea20c0, 0x27, 0xc4205394a0, 0x2, 0xc420047a60)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/crd.go:77 +0x187
main.(*Kuberhealthy).storeCheckState(0xc420425360, 0xc420ea20c0, 0x27, 0x1, 0x18717c0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:255 +0xb8
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d4230, 0x121dc80, 0xc420434500)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:237 +0x6a5
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 26 [select]:
github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).Run(0xc420434540, 0xc420048700, 0x2, 0xc4206ee010)
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:84 +0x1b9
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d42a0, 0x121dce0, 0xc420434540)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:220 +0x190
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 27 [select]:
github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).Run(0xc420434580, 0xc420048700, 0x2, 0xc420540010)
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:84 +0x1b9
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d4380, 0x121dce0, 0xc420434580)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:220 +0x190
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 29 [select]:
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc4201101a0, 0xc420ccaa00, 0x0, 0x0, 0x0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:879 +0x809
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc420351680, 0xc420ccaa00, 0x0, 0x40bf80, 0x111bf00, 0xc420de6e00)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:351 +0x156
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(0xc420351680, 0xc420ccaa00, 0x0, 0xc42002d8e8, 0xc420029650)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:313 +0x3a
k8s.io/apimachinery/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip(0xc420351680, 0xc420ccaa00, 0xc420de6e00, 0x5, 0xc42037f4c8)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/configure_transport.go:75 +0x39
net/http.(*Transport).RoundTrip(0xc4203940f0, 0xc420ccaa00, 0xd, 0xc420c00a80, 0x376)
	/usr/local/go/src/net/http/transport.go:380 +0xc36
k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc420021020, 0xc420ccaa00, 0xa, 0xc420cc4b00, 0x34)
	/go/src/k8s.io/client-go/transport/round_trippers.go:284 +0x17c
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc420021040, 0xc420cca900, 0xc420021040, 0x0, 0x0)
	/go/src/k8s.io/client-go/transport/round_trippers.go:162 +0x10c
net/http.send(0xc420cca800, 0x1201780, 0xc420021040, 0x0, 0x0, 0x0, 0xc42055e408, 0x0, 0xc420c77230, 0x1)
	/usr/local/go/src/net/http/client.go:252 +0x185
net/http.(*Client).send(0xc420029350, 0xc420cca800, 0x0, 0x0, 0x0, 0xc42055e408, 0x0, 0x1, 0x0)
	/usr/local/go/src/net/http/client.go:176 +0xfa
net/http.(*Client).Do(0xc420029350, 0xc420cca800, 0x0, 0x7b, 0x0)
	/usr/local/go/src/net/http/client.go:615 +0x28d
k8s.io/client-go/rest.(*Request).request(0xc4207fe480, 0xc420c774d0, 0x0, 0x0)
	/go/src/k8s.io/client-go/rest/request.go:687 +0x34b
k8s.io/client-go/rest.(*Request).Do(0xc4207fe480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/client-go/rest/request.go:759 +0xb7
github.com/Comcast/kuberhealthy/khstatecrd.(*KuberhealthyStateClient).Get(0xc4200211a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x111d592, 0x8, ...)
	/go/src/github.com/Comcast/kuberhealthy/khstatecrd/functions.go:71 +0x176
main.ensureCRDExists(0xc420cc0b70, 0x26, 0xc4200211a0, 0x2, 0xc420047a60)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/crd.go:77 +0x187
main.(*Kuberhealthy).storeCheckState(0xc420425360, 0xc420cc0b70, 0x26, 0x1, 0x18717c0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:255 +0xb8
main.(*Kuberhealthy).runCheck(0xc420425360, 0xc4200d4460, 0x121dce0, 0xc420434600)
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:237 +0x6a5
created by main.(*Kuberhealthy).StartChecks
	/go/src/github.com/Comcast/kuberhealthy/cmd/kuberhealthy/kuberhealthy.go:112 +0xe4
goroutine 27298 [runnable]:
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*clientStream).awaitRequestCancel(0xc42061fcc0, 0xc420dc4500)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:243
created by k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:1620 +0x77e
goroutine 26116 [runnable]:
encoding/json.stateInString(0xc42010c260, 0x10ade39, 0x0)
	/usr/local/go/src/encoding/json/scanner.go:336 +0x100
encoding/json.checkValid(0xc420db8000, 0x6bcf, 0x7e00, 0xc42010c260, 0x1016900, 0x1)
	/usr/local/go/src/encoding/json/scanner.go:29 +0xb0
encoding/json.Unmarshal(0xc420db8000, 0x6bcf, 0x7e00, 0xf612e0, 0xc42018c120, 0x5b54518e, 0xc420bae830)
	/usr/local/go/src/encoding/json/decode.go:102 +0x66
k8s.io/apimachinery/pkg/runtime/serializer/json.SimpleMetaFactory.Interpret(0xc420db8000, 0x6bcf, 0x7e00, 0xc420baeb00, 0xd1e18f, 0x1871400)
	/go/src/k8s.io/apimachinery/pkg/runtime/serializer/json/meta.go:55 +0x7a
k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Decode(0xc4200d2d40, 0xc420db8000, 0x6bcf, 0x7e00, 0x0, 0x1205a80, 0xc42084c070, 0xc420db2c70, 0xd16b6b, 0xc4206e2100, ...)
	/go/src/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:171 +0x273
k8s.io/apimachinery/pkg/runtime/serializer/versioning.DirectDecoder.Decode(0x1200d00, 0xc4200d2d40, 0xc420db8000, 0x6bcf, 0x7e00, 0x0, 0x1205a80, 0xc42084c070, 0x0, 0x0, ...)
	/go/src/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:265 +0x97
k8s.io/client-go/rest.Result.Into(0xc420db8000, 0x6bcf, 0x7e00, 0xc420492db0, 0x10, 0x0, 0x0, 0xc8, 0x12026e0, 0xc420393e80, ...)
	/go/src/k8s.io/client-go/rest/request.go:1061 +0xb6
k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc4208b2080, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:85 +0x1a2
github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).podFailures(0xc420434580, 0x42d26b, 0xc420876ee0, 0x453ed0, 0xc420cd0d80, 0x4)
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:126 +0xbe
github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).doChecks(0xc420434580, 0x0, 0x0)
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:103 +0x40
github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).Run.func1(0xc420434580, 0xc420642000)
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:79 +0x2b
created by github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).Run
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:78 +0x9f
goroutine 27282 [select]:
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc4201101a0, 0xc420dc4200, 0x0, 0x0, 0x0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:879 +0x809
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc420351680, 0xc420dc4200, 0x0, 0x40bf80, 0x111bf00, 0xc420916080)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:351 +0x156
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(0xc420351680, 0xc420dc4200, 0x0, 0xc42063c448, 0xc420544720)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:313 +0x3a
k8s.io/apimachinery/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip(0xc420351680, 0xc420dc4200, 0xc420916080, 0x5, 0xc42037f4c8)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/configure_transport.go:75 +0x39
net/http.(*Transport).RoundTrip(0xc4203940f0, 0xc420dc4200, 0xd, 0xc420db6380, 0x376)
	/usr/local/go/src/net/http/transport.go:380 +0xc36
k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc420411a00, 0xc420dc4200, 0xa, 0xc4204164c0, 0x34)
	/go/src/k8s.io/client-go/transport/round_trippers.go:284 +0x17c
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc420411a20, 0xc420dc4100, 0xc420411a20, 0x0, 0x0)
	/go/src/k8s.io/client-go/transport/round_trippers.go:162 +0x10c
net/http.send(0xc420dc4000, 0x1201780, 0xc420411a20, 0x0, 0x0, 0x0, 0xc42088a018, 0x0, 0xc4200c32a0, 0x1)
	/usr/local/go/src/net/http/client.go:252 +0x185
net/http.(*Client).send(0xc4203cf9e0, 0xc420dc4000, 0x0, 0x0, 0x0, 0xc42088a018, 0x0, 0x1, 0x0)
	/usr/local/go/src/net/http/client.go:176 +0xfa
net/http.(*Client).Do(0xc4203cf9e0, 0xc420dc4000, 0x0, 0x38, 0x0)
	/usr/local/go/src/net/http/client.go:615 +0x28d
k8s.io/client-go/rest.(*Request).request(0xc420e38000, 0xc420bab540, 0x0, 0x0)
	/go/src/k8s.io/client-go/rest/request.go:687 +0x34b
k8s.io/client-go/rest.(*Request).Do(0xc420e38000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/client-go/rest/request.go:759 +0xb7
k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc42089a0c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:84 +0x149
github.com/Comcast/kuberhealthy/checks/podRestarts.(*Checker).doChecks(0xc420434400, 0x0, 0x0)
	/go/src/github.com/Comcast/kuberhealthy/checks/podRestarts/podRestarts.go:124 +0xbb
github.com/Comcast/kuberhealthy/checks/podRestarts.(*Checker).Run.func1(0xc420434400, 0xc4204560c0)
	/go/src/github.com/Comcast/kuberhealthy/checks/podRestarts/podRestarts.go:73 +0x2b
created by github.com/Comcast/kuberhealthy/checks/podRestarts.(*Checker).Run
	/go/src/github.com/Comcast/kuberhealthy/checks/podRestarts/podRestarts.go:72 +0x9f
goroutine 27283 [runnable]:
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc4201101a0, 0xc420dc4500, 0x0, 0x0, 0x0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:879 +0x809
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc420351680, 0xc420dc4500, 0x0, 0x40bf80, 0x111bf00, 0xc4209160c0)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:351 +0x156
k8s.io/apimachinery/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(0xc420351680, 0xc420dc4500, 0x0, 0xc42063c9a8, 0xc420544900)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/transport.go:313 +0x3a
k8s.io/apimachinery/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip(0xc420351680, 0xc420dc4500, 0xc4209160c0, 0x5, 0xc42037f4c8)
	/go/src/k8s.io/apimachinery/vendor/golang.org/x/net/http2/configure_transport.go:75 +0x39
net/http.(*Transport).RoundTrip(0xc4203940f0, 0xc420dc4500, 0xd, 0xc420db6700, 0x376)
	/usr/local/go/src/net/http/transport.go:380 +0xc36
k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc420411a00, 0xc420dc4500, 0xa, 0xc4204164c0, 0x34)
	/go/src/k8s.io/client-go/transport/round_trippers.go:284 +0x17c
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc420411a20, 0xc420dc4400, 0xc420411a20, 0x0, 0x0)
	/go/src/k8s.io/client-go/transport/round_trippers.go:162 +0x10c
net/http.send(0xc420dc4300, 0x1201780, 0xc420411a20, 0x0, 0x0, 0x0, 0xc42088a038, 0x0, 0xc4200c8978, 0x1)
	/usr/local/go/src/net/http/client.go:252 +0x185
net/http.(*Client).send(0xc4203cf9e0, 0xc420dc4300, 0x0, 0x0, 0x0, 0xc42088a038, 0x0, 0x1, 0x0)
	/usr/local/go/src/net/http/client.go:176 +0xfa
net/http.(*Client).Do(0xc4203cf9e0, 0xc420dc4300, 0x0, 0x38, 0x0)
	/usr/local/go/src/net/http/client.go:615 +0x28d
k8s.io/client-go/rest.(*Request).request(0xc420e38180, 0xc420d94c18, 0x0, 0x0)
	/go/src/k8s.io/client-go/rest/request.go:687 +0x34b
k8s.io/client-go/rest.(*Request).Do(0xc420e38180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/client-go/rest/request.go:759 +0xb7
k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc42089a360, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:84 +0x149
github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).podFailures(0xc420434540, 0x80000, 0x80000, 0x80000, 0xc420669c80, 0x4)
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:126 +0xbe
github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).doChecks(0xc420434540, 0x0, 0x0)
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:103 +0x40
github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).Run.func1(0xc420434540, 0xc420456180)
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:79 +0x2b
created by github.com/Comcast/kuberhealthy/checks/podStatus.(*Checker).Run
	/go/src/github.com/Comcast/kuberhealthy/checks/podStatus/podStatus.go:78 +0x9f

Build is broken due to update of k8s.io/apimachinery/pkg/apis/meta/v1.ListOptions

Describe the bug
Build failing with following errors

Fetching https://k8s.io/client-go/tools/clientcmd?go-get=1
Parsing meta tags from https://k8s.io/client-go/tools/clientcmd?go-get=1 (status code 200)
get "k8s.io/client-go/tools/clientcmd": found meta tag get.metaImport{Prefix:"k8s.io/client-go", VCS:"git", RepoRoot:"https://github.com/kubernetes/client-go"} at https://k8s.io/client-go/tools/clientcmd?go-get=1
get "k8s.io/client-go/tools/clientcmd": verifying non-authoritative meta tag
github.com/integrii/flaggy (download)
k8s.io/api/vendor/github.com/gogo/protobuf/sortkeys
...
github.com/sirupsen/logrus
k8s.io/client-go/vendor/github.com/spf13/pflag
github.com/Comcast/kuberhealthy/pkg/checks/daemonSet
# github.com/Comcast/kuberhealthy/pkg/checks/daemonSet
../../pkg/checks/daemonSet/daemonSet.go:699:3: unknown field 'IncludeUninitialized' in struct literal of type "k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions
../../pkg/checks/daemonSet/daemonSet.go:787:3: unknown field 'IncludeUninitialized' in struct literal of type "k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions
../../pkg/checks/daemonSet/daemonSet.go:807:3: unknown field 'IncludeUninitialized' in struct literal of type "k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions
../../pkg/checks/daemonSet/daemonSet.go:836:4: unknown field 'IncludeUninitialized' in struct literal of type "k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions
../../pkg/checks/daemonSet/daemonSet.go:851:5: unknown field 'IncludeUninitialized' in struct literal of type "k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions

Steps To Reproduce

docker build .

Additional context
Revision of Comcast/kuberhealthy 67fd71113deb20625f97772ccdbe6fb555d85886
Remove use of alpha initializers

Implement more fine grained tagging in prometheus metrics

Describe the feature you would like and why you want it

Add additional tags to prometheus check value metrics:

https://github.com/Comcast/kuberhealthy/blob/6dab3697e6626f1e76f3e3a2f3e4941524046f80/pkg/metrics/exporter.go#L40

e.g. check=$checkname,namespace=$namespace for additional granularity.
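
A minimal sketch of a labeled check metric using the Prometheus Go client (the metric and label names here are illustrative, not necessarily what the exporter uses today):

package metricsexample

import "github.com/prometheus/client_golang/prometheus"

// checkStatus reports 1 when a check is passing and 0 when it is failing,
// labeled by check name and namespace so dashboards and alerts can filter on both.
var checkStatus = prometheus.NewGaugeVec(
  prometheus.GaugeOpts{
    Name: "kuberhealthy_check",
    Help: "Status of a Kuberhealthy check (1 = OK, 0 = failing).",
  },
  []string{"check", "namespace"},
)

func init() {
  prometheus.MustRegister(checkStatus)
}

func recordCheckStatus(checkName, namespace string, ok bool) {
  value := 0.0
  if ok {
    value = 1.0
  }
  checkStatus.WithLabelValues(checkName, namespace).Set(value)
}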

Additional context

We're feeding these into DataDog, for example: https://photos.app.goo.gl/4o989E9uvSWkoVmd6

It would be excellent if we had per check/namespace tuples that we could filter on and use in alerts if a check value drops below 1.

Publish a helm chart

As of right now, the helm chart has no icon because we do not have an approved and trade-markable icon to use. When we do, we should add one.

Relevant to original issue #25

Package lists left after updates

The production container should purge package lists after updating and installing in the Dockerfile. This must all happen in a single step, or else the space will still be lost because diffs are kept between layers.
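
The usual pattern looks roughly like this on a Debian-based image (the package name is a placeholder; on Alpine the equivalent is apk add --no-cache):

RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*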

Failed to install using helm

Describe the bug
I tried to install using the following helm command:

helm install stable/kuberhealthy --set prometheus.enabled=true

Installation failed with the following error:

Error: validation failed: unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Steps To Reproduce
Try to install with Helm.

Expected behavior
Kuberhealthy installs correctly.

Versions

  • Cluster OS: Mac OS (docker for desktop Version 2.0.0.2 (30215))
  • Kubernetes Version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:38Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • Helm version:
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
  • Kuberhealthy Release or build: latest

Check all kube-dns pods functionality

Describe the solution you would like to see happen
Sometimes specific kube-dns pods are not returning appropriate responses. We could check them directly with DNS queries.

How do you propose we build it?
Make a new check that lists the kube-dns pods in kube-system and runs DNS tests against each one for various internal service names.
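
A rough sketch of the per-pod DNS test (illustrative only; the label selector, lookup name, and recent client-go signatures are assumptions):

package dnscheckexample

import (
  "context"
  "fmt"
  "net"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/client-go/kubernetes"
)

// checkKubeDNSPods queries each kube-dns pod directly and returns one error per failing pod.
func checkKubeDNSPods(ctx context.Context, client kubernetes.Interface) []error {
  var errs []error

  pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
    LabelSelector: "k8s-app=kube-dns", // assumed label for kube-dns/CoreDNS pods
  })
  if err != nil {
    return []error{err}
  }

  for _, pod := range pods.Items {
    podIP := pod.Status.PodIP
    if podIP == "" {
      continue
    }

    // Build a resolver that talks only to this specific DNS pod.
    resolver := &net.Resolver{
      PreferGo: true,
      Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
        var d net.Dialer
        return d.DialContext(ctx, network, net.JoinHostPort(podIP, "53"))
      },
    }

    if _, err := resolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local"); err != nil {
      errs = append(errs, fmt.Errorf("dns pod %s (%s) failed lookup: %w", pod.Name, podIP, err))
    }
  }

  return errs
}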

Move all go packages into a pkg directory

Describe the solution you would like to see happen
I would like to move the packages in the root of the source control into a directory named pkg. This would make it obvious which folder was a go package and which are other things like a helm chart or documentation.

How do you propose we build it?
checks ➡️ pkg/checks
health ➡️ pkg/health
kubeClient ➡️ pkg/kubeClient
masterCalculation ➡️ pkg/masterCalculation
metrics ➡️ pkg/metrics

A new directory named helm would be left in the root when the helm chart branch is merged for issue #10.

Create getting started guide

The existing guide assumes too many things and is too short. We need a good getting started doc in the docs directory.

namespace list regexp?

What would you all think about doing a regexp for namespace matching? In a world where we have hundreds of namespaces, I'd love to be able to throw a checker on each (if it can scale) for massively failing pods. Obviously, this would require a query to the API to find the namespaces that match, which would then be submitted as checks.
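
A sketch of how the expansion could work (names and signatures are illustrative; assumes a recent client-go):

package namespaceexample

import (
  "context"
  "fmt"
  "regexp"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/client-go/kubernetes"
)

// matchNamespaces expands a namespace regexp into the list of matching namespaces,
// each of which would then get its own check submitted.
func matchNamespaces(ctx context.Context, client kubernetes.Interface, pattern string) ([]string, error) {
  re, err := regexp.Compile(pattern)
  if err != nil {
    return nil, fmt.Errorf("invalid namespace pattern %q: %w", pattern, err)
  }

  nsList, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
  if err != nil {
    return nil, err
  }

  var matched []string
  for _, ns := range nsList.Items {
    if re.MatchString(ns.Name) {
      matched = append(matched, ns.Name)
    }
  }
  return matched, nil
}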

Missing env variable: MY_POD_NAME

Describe the bug

I used https://github.com/Comcast/kuberhealthy/blob/master/deploy/kuberhealthy.yaml to deploy kuberhealthy in my Kubernetes cluster. The pod always complains with the following error:

time="2019-02-13T12:57:40Z" level=error msg="Could not retreive Environment variable, or it had no content. MY_POD_NAME"
time="2019-02-13T12:57:50Z" level=error msg="Could not retreive Environment variable, or it had no content. MY_POD_NAME"
time="2019-02-13T12:57:50Z" level=error msg="Could not retreive Environment variable, or it had no content. MY_POD_NAME"
time="2019-02-13T12:58:00Z" level=error msg="Could not retreive Environment variable, or it had no content. MY_POD_NAME"
time="2019-02-13T12:58:00Z" level=error msg="Could not retreive Environment variable, or it had no content. MY_POD_NAME"

Steps To Reproduce

  • kubectl apply -f https://github.com/Comcast/kuberhealthy/blob/master/deploy/kuberhealthy.yaml

Expected behavior
The kuberhealthy deployment should work out of the box. The issue probably came with this commit: dc768f8. The manifest should point to a container tag that supports the new env variable (or sets the old one).

Versions

  • Cluster OS: COS on GKE
  • Kubernetes Version: v1.11.6-gke.3
  • Kuberhealthy Release or build [e.g. 0.1.5 or 235]

Additional context
Changing the version from 0.1.1 to 1.0.0 works: https://quay.io/repository/comcast/kuberhealthy?tab=tags

Include recent pod events in error messages for podStatus

If a pod is not in a Ready state, there are usually events associated with that error state. Ex: cannot create storage object, no schedulable nodes, etc. This could be important information to pass through to the status page.
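
A sketch of how those events could be gathered for a pod so they can be appended to the check's error message (the field selector syntax and recent client-go signatures are assumptions):

package eventsexample

import (
  "context"
  "fmt"
  "time"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/client-go/kubernetes"
)

// recentPodEvents returns human-readable event lines for a single pod.
func recentPodEvents(ctx context.Context, client kubernetes.Interface, namespace, podName string) ([]string, error) {
  events, err := client.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{
    FieldSelector: "involvedObject.kind=Pod,involvedObject.name=" + podName,
  })
  if err != nil {
    return nil, err
  }

  var lines []string
  for _, e := range events.Items {
    lines = append(lines, fmt.Sprintf("%s %s: %s",
      e.LastTimestamp.Time.Format(time.RFC3339), e.Reason, e.Message))
  }
  return lines, nil
}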

DS GC not working after a crash/panic

Describe the bug

We had another CRD concurrent read/write issue that popped up in our test environment. As a result, the daemonset and the pods that daemonset created were kept ad infinitum. Furthermore, because the new kuberhealthy pod was restarted and had the same name, I believe the orphan detection was unable to determine that this was an orphaned daemonset. While this did not cause functional issues, it did leave a DS hanging around for more than a day.

Steps To Reproduce
Not 100% sure, but we're polling the Prometheus metrics every 20-ish seconds, and we have the 2 kuberhealthy pods running.

Expected behavior
Even though Kubernetes has restarted the kuberhealthy pod that crashed, it should be able to determine that the daemonset is an orphan

Screenshots
N/A

Versions

  • CentOS 7.5
  • Kubernetes Version: 1.9.1
  • Kuberhealthy Release or build: 0.1.1

Additional context
Please find attached relevant output
oc-get-po-o-wide.txt
kuberhealthy-logs-current.txt
kuberhealthy-logs-previous.txt
oc-get-po-o-yaml.txt

Create a website

We should have a public website with a logo and graphics. We can use github pages.

Create prometheus integration

Let's create a metrics endpoint and serve Prometheus metrics for easy integration.

  • Create a metrics endpoint that exposes Prometheus metrics
  • Include a prometheus annotation on the deployment yaml (sketched below)
  • Include the prometheus metrics endpoint on the service and pod port spec
  • Create some baseline recommended Prometheus alert definitions
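
For the annotation item above, the usual pattern on the deployment's pod template looks roughly like this (the port is illustrative and must match the metrics port):

spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"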

Implement a persistent volume checker

We need a way to confirm that PVCs and PVs can be requested and attached to every node in a cluster. We probably want to leverage the DaemonSet checker for this and have this be a configurable option.

nevalau on Reddit has RBAC issue


2019-02-12T09:00:06.663145281Z time="2019-02-12T09:00:06Z" level=warning msg="DaemonSetCheckerError determining which node was unschedulable. Retrying.nodes is forbidden: User \"system:serviceaccount:kuberhealthy:kuberhealthy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope"

Steps To Reproduce
Install the helm chart

Expected behavior
The helm chart installs and everything works

Update copyright headers

Please update all headers to show the following copyright statement:
Copyright 2018 Comcast Cable Communications Management, LLC

Thank you!
