
sloop's Introduction

Sloop - Kubernetes History Visualization


Sloop monitors Kubernetes, recording histories of events and resource state changes and providing visualizations to aid in debugging past events.

Key features:

  1. Allows you to find and inspect resources that no longer exist (example: discover what host the pod from the previous deployment was using).
  2. Provides timeline displays that show rollouts of related resources in updates to Deployments, ReplicaSets, and StatefulSets.
  3. Helps debug transient and intermittent errors.
  4. Allows you to see changes over time in a Kubernetes application.
  5. Is a self-contained service with no dependencies on distributed storage.

Screenshots

Screenshot1

Architecture Overview

Architecture

Install

Sloop can be installed using any of these options:

Helm Chart

Sloop can now be installed using the Helm chart; for instructions, refer to the Helm readme.
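
A minimal sketch of a Helm install, assuming the chart ships inside this repository as described in the Helm readme; the chart path, release name, and namespace below are illustrative assumptions, not documented values:

git clone https://github.com/salesforce/sloop.git
# chart path is an assumption; check the Helm readme for the actual location
helm install sloop ./sloop/helm/sloop --namespace sloop --create-namespace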

Precompiled Binaries

TODO: See the Releases.

Build from Source

Building Sloop from source requires a working Go environment with the version defined in the go.mod file or greater.

See: https://golang.org/doc/install

Clone the sloop repository and build using make:

mkdir -p $GOPATH/src/github.com/salesforce
cd $GOPATH/src/github.com/salesforce
git clone https://github.com/salesforce/sloop.git
cd sloop
make
$GOPATH/bin/sloop

When complete, you should have a running Sloop instance using the current context from your kubeconfig. Just point your browser at http://localhost:8080/
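
To point Sloop at a specific kube context or change how far back it looks, flags can be passed on the command line. A minimal sketch, assuming a context named my-cluster exists in your kubeconfig; the context name is an example, while -context and -max-look-back are the flags referenced elsewhere in this README:

# example only: run against a named context and keep 24 hours of history
$GOPATH/bin/sloop -context=my-cluster -max-look-back=24h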

Other makefile targets:

  • docker: Builds a Docker image.
  • cover: Runs unit tests with code coverage.
  • generate: Updates genny templates for typed table classes.
  • protobuf: Generates protobuf code-gen.

Local Docker Run

To run from Docker, you need to host-mount your kubeconfig:

make docker-snapshot
docker run --rm -it -p 8080:8080 -v ~/.kube/:/kube/ -e KUBECONFIG=/kube/config sloop

In this mode, data is written to a memory-backed volume and is discarded after each run. To preserve the data, you can host-mount /data with something like -v /some_path_on_host/:/data/
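
For example, a hedged variant of the command above that also persists the database in a host directory (the ~/sloop_data path is an arbitrary example):

# persist Badger data across container restarts by mounting a host directory at /data
docker run --rm -it -p 8080:8080 -v ~/.kube/:/kube/ -e KUBECONFIG=/kube/config -v ~/sloop_data:/data sloop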

Updating webfiles folder

To reflect any changes to webserver/webfiles, run the following command in a terminal from within the webserver directory before submitting a PR:

go-bindata -pkg webserver -o bindata.go webfiles/

This regenerates bindata.go with your changes to any HTML, CSS, or JavaScript files within the directory.

Local Docker Run and connecting to EKS

This is very similar to the above, but wraps the Docker run with the AWS credentials needed to connect to EKS:

make docker
export AWS_ACCESS_KEY_ID=<access_key_id> AWS_SECRET_ACCESS_KEY=<secret_access_key> AWS_SESSION_TOKEN=<session_token>
./providers/aws/sloop_to_eks.sh <cluster name>

Data retention policy stated above still applies in this case.

Backup & Restore

This is an advanced feature. Use with caution.

To download a backup of the database, navigate to http://localhost:8080/data/backup

To restore from a backup, start sloop with the -restore-database-file flag set to the backup file downloaded in the previous step. When restoring, you may also wish to set the -disable-kube-watch=true flag to stop new writes from occurring and/or the -context flag to restore the database into a different context.
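
A hedged end-to-end sketch of the flow described above, assuming the backup endpoint can be fetched with a plain GET; the backup file name and context name are placeholder examples:

# download a backup from a running instance
curl -o sloop-backup.bak http://localhost:8080/data/backup
# restore it into a separate context with cluster watching disabled
$GOPATH/bin/sloop -restore-database-file=sloop-backup.bak -disable-kube-watch=true -context=restored-copy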

Memory Consumption

Sloop's memory usage can be managed by tweaking several options:

  • badger-use-lsm-only-options If this flag is set to true, values are colocated with the LSM tree, with the value log largely acting as a write-ahead log only. Recommended value for memory constrained environments: false
  • badger-keep-l0-in-memory When this flag is set to true, Level 0 tables are kept in memory. This leads to better performance for writes as well as compactions. Recommended value for memory constrained environments: false
  • badger-sync-writes When this flag is true, all writes are synced to disk. Setting it to false gives better write performance but may cause data loss in case of a crash. Recommended value for memory constrained environments: false
  • badger-vlog-fileIO-mapping Indicates which file loading mode is used for the value log data files. Setting this to true means the value log files are not memory-mapped. Recommended value for memory constrained environments: true

Apart from these flags some other values can be tweaked to fit in the memory constraints. Following are some examples of setups.

  • Memory consumption max limit: 1GB
               // 524288 bytes = 0.5 MB
               "badger-max-table-size=524288",
               "badger-number-of-compactors=1",
               "badger-number-of-level-zero-tables=1",
               "badger-number-of-zero-tables-stall=2",
  • Memory consumption max limit: 2GB
               // 16<<20 = 16777216 bytes (16 MB)
               "badger-max-table-size=16777216",
               "badger-number-of-compactors=1",
               "badger-number-of-level-zero-tables=1",
               "badger-number-of-zero-tables-stall=2",
  • Memory consumption max limit: 5GB
               // 32<<20 = 33554432 bytes (32 MB)
               "badger-max-table-size=33554432",
               "badger-number-of-compactors=1",
               "badger-number-of-level-zero-tables=2",
               "badger-number-of-zero-tables-stall=3",

Apart from the above settings, max-disk-mb and max-look-back can be tweaked according to input data and memory constraints.
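
Purely as an illustration, the 2GB profile above could be combined with the retention flags into a single invocation; the values are examples to adapt rather than recommendations, and this assumes the badger settings are passed as command-line flags spelled as listed above:

$GOPATH/bin/sloop -badger-max-table-size=16777216 -badger-number-of-compactors=1 -badger-number-of-level-zero-tables=1 -badger-number-of-zero-tables-stall=2 -badger-keep-l0-in-memory=false -badger-sync-writes=false -max-disk-mb=8192 -max-look-back=168h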

Prometheus

Sloop uses the Prometheus library to emit metrics, which is very helpful for performance debugging.

In the root of the repo is a Prometheus config file prometheus.yml.

On macOS you can install Prometheus with brew install prometheus, then start it from the sloop directory by running prometheus.
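
For example (a sketch; the exact invocation may vary with your Prometheus version):

brew install prometheus
cd sloop
prometheus --config.file=prometheus.yml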

Open your browser to http://localhost:9090.

An example of a useful query is rate(kubewatch_event_count[5m])

Event filtering

Events can be excluded from Sloop by adding exclusionRules to the config file:

{
  "defaultNamespace": "default",
  "defaultKind": "Pod",
  "defaultLookback": "1h",
  [...]
  "exclusionRules": {
    "_all": [
      {"==": [ { "var": "metadata.namespace" }, "kube-system" ]}
    ],
    "Pod": [
      {"==": [ { "var": "metadata.name" }, "sloop-0" ]}
    ],
    "Job": [
      {"in": [ { "var": "metadata.name" }, [ "cron1", "cron3" ] ]}
    ]
  }
}

Adding rules can help to reduce resources consumed by Sloop and remove unwanted noise from the UI for events that are of no interest.

Limiting rules to specific kinds

  • Rules under the special key _all are evaluated against events for objects of any kind
  • Rules under any other key are evaluated only against objects whose kind matches the key, e.g. Pod only applies to pods, Job only applies to jobs etc.

Rule format and supported operations

Rules should follow the JsonLogic format and are evaluated against the json representation of the Kubernetes API object related to the event (see below).

Available operators, such as == and in shown above, are documented here.
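
As an illustration, a hypothetical extra rule that excludes events for any Node carrying a specific label; the Node key, label name, and value are made-up examples, and this assumes JsonLogic's dot-notation for nested fields:

"exclusionRules": {
  "Node": [
    {"==": [ { "var": "metadata.labels.sloop-ignore" }, "true" ]}
  ]
}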

Data available to rule logic

Kubernetes API conventions for objects require the following keys to exist in the json data for all resources, all of which can be referenced in rules:

  • metadata
  • spec
  • status

Some commonly useful fields under the metadata object are:

  • name
  • namespace
  • labels

Type specific data

Some resources contain additional type-specific fields, for example PersistentVolumeClaimSpec objects have fields named selector and storageClassName.

Type specific fields for each object and their corresponding keys in the object json representation are documented in the core API, e.g. for PersistentVolumeClaimSpec objects the documentation is here.

Contributing

Refer to CONTRIBUTING.md

License

BSD 3-Clause

sloop's People

Contributors

akrag, annelau21, cmeister-sfdc, dependabot[bot], dlipovetsky, duke-harlan, duncansmith1126, hoerup, hsiddulugari, karlskewes, kritikapradhan, kskewes-sf, lukasstockner, lxlxok, mengyaoyang11, mtmn, npattanayak, nurland, osassonsf, rafabios, ryanbrainard, sana-jawad, sarjamil, sridhav, sumitjainn, svc-scm, tariq1890, thomashargrove, tyrken, venkatramreddykunta

sloop's Issues

Publish Helm chart for Sloop

Helm helps manage Kubernetes applications; Helm Charts help define, install, and upgrade even the most complex Kubernetes application.

Helm is a package manager (similar to yum and apt), and Charts are packages (similar to debs and rpms). For now, contributors can create the Helm chart under the sloop repo; eventually, we want to contribute the chart to its official repo: https://github.com/helm/charts

Whoever works on this item, please refer to the official Helm chart documentation before starting: https://github.com/helm/helm/tree/master/docs

Feature Request: Color Legend

When looking at the timeline in Sloop, it is hard to know what color means what without mousing over each of the timeline bars individually. It would be very helpful to have a legend that shows what each color represents on the timeline.

Sloop Releases Request

Hi, any chance you could please use releases and subsequently immutable style versioning for docker tags? I'd like to be able to pin my Sloop deployments rather than using latest.

I'm happy to submit a PR for this but would need some direction in the implementation as it requires buy-in from your side.

Sloop garbage collection is not able to successfully decrease the DB size on disk

There are a couple of issues in Sloop GC:

  1. Event count table data (although it is the smallest table in terms of number of keys and size) gets added beyond the max look back time.
Before Event Count data is added:    MaxLookBack|----- Count: 50 ---|
                                                                   
After Event Count data is added :      *****MaxLookBack|----- Count: 50 ---|
(new data is shown using *)
  2. The sprinkling of event count data uses max look back as a boundary, resulting in small partitions containing very little event count data spread across the whole look-back window.
Before Event Count data is added:     MaxLookBack|        |MinPartition -----|
                                                                   
After Event Count data is added:      MaxLookBack|********|MinPartition -----|

Due to these two issues, the GC cleans the oldest partition and then waits another 30 minutes, by which time a lot of new data has accumulated beyond the minimum partition. The result is almost no reduction from GC, and the data keeps growing.

HTTP 503 Error and Failed to list errors in logs on Kubernetes v1.17.0

I have a K8s 1.17.0 cluster and have installed "sloop" using Helm, but I am getting an HTTP 503 error when I try to view the dashboard, and the pod logs are full of errors as shown below.

E1219 23:51:45.217815 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *unstructured.Unstructured: volumesnapshotcontents.snapshot.storage.k8s.io is forbidden: User "system:serviceaccount:sloop:sloop" cannot list resource "volumesnapshotcontents" in API group "snapshot.storage.k8s.io" at the cluster scope
E1219 23:51:45.417719 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *unstructured.Unstructured: volumesnapshots.snapshot.storage.k8s.io is forbidden: User "system:serviceaccount:sloop:sloop" cannot list resource "volumesnapshots" in API group "snapshot.storage.k8s.io" at the cluster scope
........
............
E1219 23:51:55.617216 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *unstructured.Unstructured: hostendpoints.crd.projectcalico.org is forbidden: User "system:serviceaccount:sloop:sloop" cannot list resource "hostendpoints" in API group "crd.projectcalico.org" at the cluster scope
E1219 23:51:55.817503 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *unstructured.Unstructured: ipamblocks.crd.projectcalico.org is forbidden: User "system:serviceaccount:sloop:sloop" cannot list resource "ipamblocks" in API group "crd.projectcalico.org" at the cluster scope

Add helm options for clusters without PVCs

When running sloop on a dev machine (docker-desktop, for example) there is no PVC available. It would be good to have a Helm value to switch to using a host mount or an emptyDir for storage. It would also be good to document steps for installing this way in the Helm readme.

Increasing the Time Range

How can we increase the time range to 4 weeks?

I have tried increasing "max-look-back" to 672h, but still don't see any change at "http://localhost:8080/debug/config/"; it still shows the same value, "1209600000000000".
After changing the value, I can't see the change in the UI either. Are the values hardcoded?

I have deployed on k8s cluster using helm.

Thanks

CRDs with same kind but different groups get merged together

The CRDs are only using the local resource Kind instead of the fully-qualified GroupKind. We have several CRDs that have the same Kind, but are in different Groups, so not only is it confusing in the UI because you don't know which Kind is which, but also only the last (first?) Kind with a common name is actually used.

Feature Request: Expose badger debug info

It would be useful to expose debug info for:

  1. Web page showing badger tables with level, id, left+right key and keyCount
  2. Expose badger internal metrics. Badger adds a page /debug/vars when using the default HTTP server. It returns these metrics: https://github.com/dgraph-io/badger/blob/master/README.md#statistics We can use the handler here: https://golang.org/pkg/expvar/
  3. Expose badger trace. Badger adds /debug/requests and /debug/events using https://godoc.org/golang.org/x/net/trace, which we can also expose.
  4. Add flag for additional badger logging. It should be just EventLogging bool in options: https://github.com/dgraph-io/badger/blob/master/options.go#L51

Sloop will not pick up CRD types created after the start of sloop, gives errors when CRD types are removed after startup

Sloop scans the list of CRDs at startup, then sets up watches for each CRD type. If CRD types are registered later Sloop does not recognize them. If CRD types are deleted from the server Sloop will return errors like:

E1114 09:59:37.508684 10297 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *unstructured.Unstructured: the server could not find the requested resource

Support job kind

Currently our service does not support the Job kind yet; we should add support for this.

Error(key not found) trying to get the pod details

I did the following:

  • selected a cluster
  • selected a namespace with a statefulset pod (filter kind)
  • name filter "mysql"
  • time range "12 hrs"

It shows me a payload link, and when I click the payload link, it shows the following error:

Error rendering url: "/debug/view/?k=/watch/001585040400/Pod/<my namespace?/my statefulset name>-0/1585040835570667792".  Note: view transaction failed. Error: Key not found

Excessive memory consumption?

We are currently experimenting to use sloop.

We find it very useful, but we found that it is very greedy with memory.
After less than a full day, it is currently using 5 GB of memory :(

Is this normal behaviour?

The last 3 hours
sloop-last-3hours-2020 04 08-16_19_41

The last 24 hours
sloop-last-24hours-2020 04 08-16_20_06

Here is our current configuration (no memory limits on purpose to see what's needed without being OOM killed)

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sloop
  labels:
    app.kubernetes.io/name: sloop
spec:
  serviceName: sloop
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sloop
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sloop
    spec:
      containers:
        - args:
            - --config=/sloop-config/sloop.json
          command:
            - /sloop
          image: FIXME/sloop
          name: sloop
          ports:
            - containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits: {}
            requests:
              memory: 1.5Gi
              cpu: 50m
          volumeMounts:
            - mountPath: /data
              name: sloop-data
            - mountPath: /sloop-config
              name: sloop-config
            - mountPath: /tmp
              name: sloop-tmp
          securityContext:
            allowPrivilegeEscalation: false
            privileged: false
            runAsNonRoot: true
            runAsUser: 100
            runAsGroup: 1000
            readOnlyRootFilesystem: true
      securityContext:
        fsGroup: 1000
      volumes:
        - name: sloop-config
          configMap:
            name: sloop-config
        - name: sloop-tmp
          emptyDir:
            sizeLimit: 100Mi
      serviceAccountName: sloop
      terminationGracePeriodSeconds: 10
  volumeClaimTemplates:
    - metadata:
        name: sloop-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

Failed to list resources on startup in k8s 1.16+

When connecting to a k8s cluster with version 1.16 or higher, I receive a lot of errors like this:
E0415 12:30:41.601909 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.DaemonSet: the server could not find the requested resource
E0415 12:30:41.672392 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.ReplicaSet: the server could not find the requested resource
E0415 12:30:42.712250 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.DaemonSet: the server could not find the requested resource
E0415 12:30:42.795160 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.ReplicaSet: the server could not find the requested resource
E0415 12:30:43.806459 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.DaemonSet: the server could not find the requested resource
E0415 12:30:43.890617 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.ReplicaSet: the server could not find the requested resource
E0415 12:30:45.067983 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.DaemonSet: the server could not find the requested resource
E0415 12:30:45.252900 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.ReplicaSet: the server could not find the requested resource
E0415 12:30:46.359063 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.DaemonSet: the server could not find the requested resource
E0415 12:30:46.368595 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.ReplicaSet: the server could not find the requested resource
E0415 12:30:47.640136 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.DaemonSet: the server could not find the requested resource
E0415 12:30:47.668593 1 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *v1beta1.ReplicaSet: the server could not find the requested resource

If I connect to k8s 1.15 there are no such errors.
I think it's related to this: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

I tried to change the API version in kubewatcher.go:

i.informerFactory.Apps().V1().DaemonSets().Informer().AddEventHandler(i.getEventHandlerForResource("DaemonSet"))
i.informerFactory.Apps().V1().ReplicaSets().Informer().AddEventHandler(i.getEventHandlerForResource("ReplicaSet"))

but I still have these errors. I can't find any use of beta APIs in other Go files, but I'm not a Go programmer.

Any help?

-disable-kube-watch should not require kube context

Even though -disable-kube-watch=true is set, Sloop still tries to create a Kube context and client.

$ $GOPATH/bin/sloop -disable-kube-watch=true -context=dummy

...
I1113 07:32:36.471984   25355 kubeclient.go:20] Creating k8sclient with user-defined config masterURL=, kubeContext=dummy.
E1113 07:32:36.489076   25355 main.go:31] Main exited with error: failed to create kubernetes client: context "dummy" does not exist

Note, I set -context here to force it to not use my default. As an alternative reproducer, this KUBECONFIG can also be overridden and it fails like this:

$ export KUBECONFIG=$HOME/.kube/config-dummy

$ $GOPATH/bin/sloop -disable-kube-watch=true

...
I1113 07:36:46.691441   25775 kubeclient.go:20] Creating k8sclient with user-defined config masterURL=, kubeContext=.
W1113 07:36:46.691501   25775 loader.go:223] Config not found: /Users/rbrainard/.kube/config-dummy
E1113 07:36:46.702665   25775 main.go:31] Main exited with error: failed to create kubernetes client: invalid configuration: no configuration has been provided
W1113 07:36:46.760143   25776 loader.go:223] Config not found: /Users/rbrainard/.kube/config-dummy
error: current-context is not set
W1113 07:36:46.865888   25797 loader.go:223] Config not found: /Users/rbrainard/.kube/config-dummy
error: current-context is not set

The actual fix/UX here is a little tricky because Sloop uses the context to point to the correct database and display in the UI. Perhaps if -disable-kube-watch is set then -context must also be set.

Ref: #56 (comment)

Error in initializing K8s client: no Auth Provider found for name "azure"

Hi,

I am getting the below error when trying to run sloop in a docker container

kubeclient.go:42] Creating k8sclient with user-defined config masterURL=, kubeContext=*****.
mysloop_1  | E0525 17:05:16.909310       1 kubeclient.go:53] Cannot Initialize Kubernetes Client API: no Auth Provider found for name "azure"
mysloop_1  | I0525 17:05:16.910185       1 store.go:112] Closing store
mysloop_1  | badger 2021/05/25 17:05:16 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
mysloop_1  | I0525 17:05:16.918293       1 store.go:114] Finished closing store
mysloop_1  | E0525 17:05:16.934304       1 main.go:31] Main exited with error: failed to create kubernetes client: no Auth Provider found for name "azure"

Looks like https://github.com/salesforce/sloop/blob/master/pkg/sloop/ingress/kubeclient.go does not import the azure auth plugin as mentioned in https://github.com/liggitt/kubernetes/blob/master/staging/src/k8s.io/client-go/examples/README.md#auth-plugins

import _ "k8s.io/client-go/plugin/pkg/client/auth/azure"

Failed to list *unstructured.Unstructured: the server could not find the requested resource

When I download the sloop source code, run "make linux", and then run the sloop binary with no parameters on my k8s cluster, the following errors occur:

[root@k8s-master ~]# ./sloop
ERROR: logging before flag.Parse: I0604 17:57:03.921818 1446 config.go:253] Default config set
I0604 17:57:03.924442 1446 server.go:41] SloopConfig: ConfigFile: ""
apiServerHost: ""
badgerDiscardRatio: 0.99
badgerEnableEventLogging: false
badgerKeepL0InMemory: true
badgerLevSizeMultiplier: 0
badgerLevelOneSize: 0
badgerMaxTableSize: 0
badgerNumLevelZeroTables: 0
badgerNumLevelZeroTablesStall: 0
badgerNumOfCompactors: 0
badgerSyncWrites: true
badgerUseLSMOnlyOptions: true
badgerVLogFileIOMapping: false
badgerVLogFileSize: 0
badgerVLogGCFreq: 60000000000
badgerVLogMaxEntries: 200000
badgerVLogTruncate: true
bindAddress: ""
cleanupFrequency: 1800000000000
context: ""
crdRefreshInterval: 300000000000
debugPlaybackFile: ""
debugRecordFile: ""
defaultKind: _all
defaultLookback: 1h
defaultNamespace: default
deletionBatchSize: 1000
disableKubeWatch: false
disableStoreManager: false
displayContext: ""
enableDeleteKeys: false
keepMinorNodeUpdates: false
kubeWatchResyncInterval: 1800000000000
leftBarLinks: null
maxDiskMb: 32768
maxLookBack: 1209600000000000
mockBadger: false
port: 8080
resourceLinks: null
restoreDatabaseFile: ""
storeRoot: ./data
threshold for GC: 0.8
watchCrds: true
webfilesPath: ./pkg/sloop/webserver/webfiles
I0604 17:57:03.924707 1446 kubeclient.go:20] Getting k8s context with user-defined config masterURL=, kubeContextPreference=.
I0604 17:57:03.926010 1446 kubeclient.go:35] Get k8s context with context=kubernetes
badger 2021/06/04 17:57:03 INFO: All 1 tables opened in 1ms
badger 2021/06/04 17:57:03 INFO: Replaying file id: 0 at offset: 2764788
badger 2021/06/04 17:57:03 INFO: Replay took: 2.726µs
badger 2021/06/04 17:57:03 DEBUG: Value log discard stats empty
badger 2021/06/04 17:57:03 INFO:
badger 2021/06/04 17:57:03 INFO: Level: 0. 0 B Size. 0 B Max.
badger 2021/06/04 17:57:03 INFO: Level: 1. 2.7 MB Size. 268 MB Max.
badger 2021/06/04 17:57:03 INFO: Level: 2. 0 B Size. 2.7 GB Max.
badger 2021/06/04 17:57:03 INFO: Level: 3. 0 B Size. 27 GB Max.
badger 2021/06/04 17:57:03 INFO: Level: 4. 0 B Size. 268 GB Max.
badger 2021/06/04 17:57:03 INFO: Level: 5. 0 B Size. 2.7 TB Max.
badger 2021/06/04 17:57:03 INFO: Level: 6. 0 B Size. 27 TB Max.
badger 2021/06/04 17:57:03 INFO: All tables consolidated into one level. Flattening done.
I0604 17:57:03.953201 1446 store.go:108] BadgerDB Options: {Dir:data/kubernetes ValueDir:data/kubernetes SyncWrites:true TableLoadingMode:2 ValueLogLoadingMode:2 NumVersionsToKeep:1 ReadOnly:false Truncate:true Logger:0x255f720 Compression:0 EventLogging:true InMemory:false MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:1048576 NumMemtables:5 BlockSize:4096 BloomFalsePositive:0.01 KeepL0InMemory:true MaxCacheSize:0 MaxBfCacheSize:0 LoadBloomsOnOpen:true NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:200000 NumCompactors:2 CompactL0OnClose:true LogRotatesToFlush:2 ZSTDCompressionLevel:1 VerifyValueChecksum:false EncryptionKey:[] EncryptionKeyRotationDuration:240h0m0s BypassLockGuard:false ChecksumVerificationMode:0 managedTxns:false maxBatchCount:0 maxBatchSize:0}
I0604 17:57:03.953275 1446 kubeclient.go:42] Creating k8sclient with user-defined config masterURL=, kubeContext=kubernetes.
I0604 17:57:03.955024 1446 kubeclient.go:57] Created k8sclient with context=kubernetes, masterURL=https://172.26.0.52:6443, configFile=[/root/.kube/config].
I0604 17:57:04.045843 1446 kubewatcher.go:141] Found 15 CRD definitions
I0604 17:57:04.046329 1446 kubewatcher.go:152] Stopping 0 CRD Informers
I0604 17:57:04.047338 1446 webserver.go:225] Listening on http://localhost:8080
E0604 17:57:04.047618 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0604 17:57:04.049055 1446 stats.go:72] Finished updating store stats: &{timestamp:{wall:13845925748516866428 ext:135248307 loc:0x256c200} DiskSizeBytes:5502299 DiskLsmBytes:2712576 DiskLsmFileCount:1 DiskVlogBytes:2789640 DiskVlogFileCount:1 LevelToKeyCount:map[1:1498] LevelToTableCount:map[1:1] TotalKeyCount:1511}
I0604 17:57:04.049164 1446 storemanager.go:145] RunValueLogGC(0.99) run took 39.561µs and returned 'Value log GC attempt didn't result in any cleanup'
I0604 17:57:04.049080 1446 stats.go:72] Finished updating store stats: &{timestamp:{wall:13845925748516770299 ext:135152178 loc:0x256c200} DiskSizeBytes:5502299 DiskLsmBytes:2712576 DiskLsmFileCount:1 DiskVlogBytes:2789640 DiskVlogFileCount:1 LevelToKeyCount:map[1:1498] LevelToTableCount:map[1:1] TotalKeyCount:1511}
E0604 17:57:04.049361 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:04.049418 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
I0604 17:57:04.052559 1446 storemanager.go:203] Deletion/dropPrefix of prefixes took 248ns:
I0604 17:57:04.052586 1446 storemanager.go:116] GC finished in 3.395915ms with error ''. Next run in 30m0s
E0604 17:57:05.247304 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:05.447439 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:05.647334 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:06.248318 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:06.448430 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:06.648247 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:07.249372 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:07.449493 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource
E0604 17:57:07.649390 1446 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource

The following is some info about my cluster:

[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# uname -a
Linux k8s-master 5.2.6-1.el7.elrepo.x86_64 #1 SMP Sun Aug 4 10:13:32 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@k8s-master ~]# cat /etc/system-release
CentOS Linux release 7.4.1708 (Core)
[root@k8s-master ~]# cat /root/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: .........
    server: https://172.26.0.52:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: ...........
    client-key-data: ...........

How to fix this issue?
Thanks a lot.

ImagePullSecrets needed

Please add the ability to provide imagePullSecrets now that Docker Hub rate-limits anonymous pulls.

Running out of space in /data

How can we rotate (or otherwise clean up) the files in the /data directory?

The error that we are getting:
E1014 07:07:58.984041 1 processing.go:38] Processing for updateKubeWatchTable failed with error Unable to write to value log file: "data/000010.vlog": write data/000010.vlog: no space left on device badger 2019/10/14 07:07:58 ERROR: writeRequests: Unable to write to value log file: "data/000010.vlog": write data/000010.vlog: no space left on device

Feature Request: Selectable UTC/LocalTime support

Currently the Sloop UX only shows times in UTC. It would be useful to have the ability to show all times in the browser's local timezone. This should include the main timeline view, the event details, and the k8s yaml payloads.

Flexible filters

For example, when I select Filter namespace: production and Filter Kind: Pod, I would also like to add Filter namespace: _all and Filter Kind: node on the same timeline graph.

Screenshot from 2019-10-22 16-22-01

Badger Compaction Issue

Hi, there's an issue with Sloop: when you consume all of maxDiskMb and a compaction runs, the Sloop graphs disappear (and subsequent writes aren't made to the DB).

I'm able to consistently reproduce this by installing the latest helm chart without any overrides, except adding --max-disk-mb=1 to the command line.

It seems to occur after these events:

I1211 01:51:02.648860       1 storemanager.go:171] Start cleaning up because current file size: 14073983 exceeds file size: 1048576
badger 2019/12/11 01:51:02 INFO: Writes flushed. Stopping compactions now...
badger 2019/12/11 01:51:02 DEBUG: Flushing memtable
badger 2019/12/11 01:51:02 DEBUG: Storing value log head: {Fid:0 Len:32 Offset:13236655}
badger 2019/12/11 01:51:02 INFO: Got compaction priority: {level:0 score:1.74 dropPrefix:[47 119 97 116 99 104 47 48 48 49 53 55 54 48 50 54 48 48 48]}
badger 2019/12/11 01:51:02 INFO: Running for level: 0
badger 2019/12/11 01:51:02 DEBUG: LOG Compact. Added 212 keys. Skipped 114 keys. Iteration took: 466.315µs
badger 2019/12/11 01:51:02 DEBUG: Discard stats: map[0:205537]
badger 2019/12/11 01:51:02 INFO: LOG Compact 0->1, del 2 tables, add 1 tables, took 11.140577ms
badger 2019/12/11 01:51:02 INFO: Compaction for level: 0 DONE
badger 2019/12/11 01:51:02 DEBUG: LOG Compact. Added 205 keys. Skipped 7 keys. Iteration took: 444.281µs
badger 2019/12/11 01:51:02 DEBUG: Discard stats: map[0:7658]
badger 2019/12/11 01:51:02 INFO: LOG Compact 1->1, del 1 tables, add 1 tables, took 10.030778ms
badger 2019/12/11 01:51:02 INFO: DropPrefix done
badger 2019/12/11 01:51:02 INFO: Resuming writes
badger 2019/12/11 01:51:02 INFO: Writes flushed. Stopping compactions now...
badger 2019/12/11 01:51:02 DEBUG: LOG Compact. Added 201 keys. Skipped 4 keys. Iteration took: 261.251µs
badger 2019/12/11 01:51:02 DEBUG: Discard stats: map[0:582]
badger 2019/12/11 01:51:02 INFO: LOG Compact 1->1, del 1 tables, add 1 tables, took 8.278415ms
badger 2019/12/11 01:51:02 INFO: DropPrefix done
badger 2019/12/11 01:51:02 INFO: Resuming writes
badger 2019/12/11 01:51:02 INFO: Writes flushed. Stopping compactions now...
badger 2019/12/11 01:51:02 DEBUG: LOG Compact. Added 201 keys. Skipped 0 keys. Iteration took: 277.819µs
badger 2019/12/11 01:51:02 DEBUG: Discard stats: map[]
badger 2019/12/11 01:51:02 INFO: LOG Compact 1->1, del 1 tables, add 1 tables, took 12.260061ms
badger 2019/12/11 01:51:02 INFO: DropPrefix done
badger 2019/12/11 01:51:02 INFO: Resuming writes


Support larger fonts

As a DevOps engineer, I want to use Sloop to troubleshoot historical problems in Kubernetes clusters. However, I can't use it because the text next to the bars is so small as to be unreadable. Using the native browser zoom function to zoom in actually makes it smaller.

Avoid overriding rows in watch table when updates too close together

A watch key is defined as /watch/partition_id/kind/namespace/name/timestamp (unix nanoseconds), so theoretically each key should be unique even if keys share the same /watch/partition_id/kind/namespace/name/ prefix. However, with our current logic we might overwrite rows in the watch table when updates come too close together; we should never overwrite the same row. Potential solutions: (a) include a more precise timestamp, or (b) detect collisions and mutate the key to prevent them.

Pod restarts with error after couple days (Error: Unable to fill tables)

Hello,

I run sloop on a busy cluster in a statefulset with:

        - args:
          - --max-look-back=168h
          - --max-disk-mb=25000
          - --config=/sloopconfig/sloop.json
          command:
          - /sloop
          image: sloopimage/sloop:latest

and config

      {
        defaultNamespace: "xxx",
        defaultKind: "Pod",
        defaultLookback: "6h",
        maxLoopBack: "604800",
        leftBarLinks: [
        ],
        resourceLinks: [
        ]
      }

After a couple of days, it enters a restart loop.


Below is the tail end of the log:

 badger 2020/11/01 16:23:17 INFO: LOG Compact 1->2, del 4 tables, add 3 tables, took 1.790663808s
 badger 2020/11/01 16:23:17 INFO: Compaction for level: 1 DONE
 badger 2020/11/01 16:23:17 INFO: 1 compactor(s) succeeded. One or more tables from level 1 compacted.
 badger 2020/11/01 16:23:17 INFO:
 badger 2020/11/01 16:23:17 INFO: Level: 0.      0 B Size.      0 B Max.
 badger 2020/11/01 16:23:17 INFO: Level: 1.      0 B Size.   268 MB Max.
 badger 2020/11/01 16:23:17 INFO: Level: 2.   432 MB Size.   2.7 GB Max.
 badger 2020/11/01 16:23:17 INFO: Level: 3.   7.9 GB Size.    27 GB Max.
 badger 2020/11/01 16:23:17 INFO: Level: 4.      0 B Size.   268 GB Max.
 badger 2020/11/01 16:23:17 INFO: Level: 5.      0 B Size.   2.7 TB Max.
 badger 2020/11/01 16:23:17 INFO: Level: 6.      0 B Size.    27 TB Max.
 badger 2020/11/01 16:23:17 INFO: Attempting to compact with {level:2 score:1.71 dropPrefix:[]}
 badger 2020/11/01 16:23:17 INFO: Got compaction priority: {level:2 score:1.71 dropPrefix:[]}
 badger 2020/11/01 16:23:17 INFO: Running for level: 2
 badger 2020/11/01 16:23:17 INFO: Got compaction priority: {level:2 score:1.71 dropPrefix:[]}
 badger 2020/11/01 16:23:17 INFO: Got compaction priority: {level:2 score:1.71 dropPrefix:[]}
 badger 2020/11/01 16:23:17 INFO: Running for level: 2
 badger 2020/11/01 16:23:17 INFO: Got compaction priority: {level:2 score:1.71 dropPrefix:[]}
 badger 2020/11/01 16:23:17 INFO: Got compaction priority: {level:2 score:1.71 dropPrefix:[]}
 badger 2020/11/01 16:23:17 INFO: Running for level: 2
 badger 2020/11/01 16:23:17 INFO: Running for level: 2
 badger 2020/11/01 16:23:17 WARNING: While running doCompact with {level:2 score:1.71 dropPrefix:[]}. Error: Unable to fill tables
 badger 2020/11/01 16:23:18 DEBUG: LOG Compact. Added 386827 keys. Skipped 662 keys. Iteration took: 659.054973ms
 badger 2020/11/01 16:23:19 DEBUG: LOG Compact. Added 5752 keys. Skipped 0 keys. Iteration took: 964.415726ms
 badger 2020/11/01 16:23:19 DEBUG: LOG Compact. Added 3633 keys. Skipped 0 keys. Iteration took: 829.724409ms
 badger 2020/11/01 16:23:20 DEBUG: LOG Compact. Added 7828 keys. Skipped 0 keys. Iteration took: 1.013346945s
 badger 2020/11/01 16:23:21 DEBUG: LOG Compact. Added 8556 keys. Skipped 0 keys. Iteration took: 560.922352ms
 badger 2020/11/01 16:23:21 DEBUG: LOG Compact. Added 17496 keys. Skipped 0 keys. Iteration took: 139.795733ms
 badger 2020/11/01 16:23:22 DEBUG: LOG Compact. Added 15063 keys. Skipped 0 keys. Iteration took: 478.939411ms
 badger 2020/11/01 16:23:22 DEBUG: LOG Compact. Added 4866 keys. Skipped 0 keys. Iteration took: 116.207976ms
 badger 2020/11/01 16:23:22 DEBUG: LOG Compact. Added 8154 keys. Skipped 0 keys. Iteration took: 50.77456ms
 badger 2020/11/01 16:23:22 DEBUG: LOG Compact. Added 24536 keys. Skipped 0 keys. Iteration took: 80.037892ms
 badger 2020/11/01 16:23:22 DEBUG: LOG Compact. Added 14969 keys. Skipped 0 keys. Iteration took: 54.562299ms
 badger 2020/11/01 16:23:22 DEBUG: LOG Compact. Added 4828 keys. Skipped 0 keys. Iteration took: 96.514865ms
 badger 2020/11/01 16:23:23 DEBUG: LOG Compact. Added 8053 keys. Skipped 0 keys. Iteration took: 1.25906361s
 badger 2020/11/01 16:23:29 DEBUG: LOG Compact. Added 24830 keys. Skipped 0 keys. Iteration took: 12.354274333s
 badger 2020/11/01 16:23:31 DEBUG: LOG Compact. Added 30009 keys. Skipped 0 keys. Iteration took: 7.721485698s
 badger 2020/11/01 16:23:31 DEBUG: LOG Compact. Added 13548 keys. Skipped 0 keys. Iteration took: 84.701226ms
 badger 2020/11/01 16:23:31 DEBUG: LOG Compact. Added 9268 keys. Skipped 0 keys. Iteration took: 124.778088ms
 badger 2020/11/01 16:23:31 DEBUG: LOG Compact. Added 6794 keys. Skipped 0 keys. Iteration took: 75.668846ms
 badger 2020/11/01 16:23:31 DEBUG: LOG Compact. Added 8150 keys. Skipped 0 keys. Iteration took: 97.70021ms
 badger 2020/11/01 16:23:31 DEBUG: LOG Compact. Added 35736 keys. Skipped 0 keys. Iteration took: 102.852972ms
 badger 2020/11/01 16:23:32 DEBUG: LOG Compact. Added 9400 keys. Skipped 0 keys. Iteration took: 67.736162ms
 badger 2020/11/01 16:23:32 DEBUG: LOG Compact. Added 5961 keys. Skipped 0 keys. Iteration took: 69.370427ms
 badger 2020/11/01 16:23:32 DEBUG: LOG Compact. Added 8035 keys. Skipped 0 keys. Iteration took: 82.539694ms
 badger 2020/11/01 16:23:32 DEBUG: LOG Compact. Added 28549 keys. Skipped 0 keys. Iteration took: 323.908729ms
 badger 2020/11/01 16:23:32 DEBUG: LOG Compact. Added 11741 keys. Skipped 0 keys. Iteration took: 295.102736ms
 badger 2020/11/01 16:23:33 DEBUG: LOG Compact. Added 5993 keys. Skipped 0 keys. Iteration took: 779.201855ms
 badger 2020/11/01 16:23:35 DEBUG: LOG Compact. Added 28090 keys. Skipped 0 keys. Iteration took: 18.143044664s
 badger 2020/11/01 16:23:37 DEBUG: LOG Compact. Added 17457 keys. Skipped 0 keys. Iteration took: 19.808179976s
 badger 2020/11/01 16:23:47 DEBUG: LOG Compact. Added 8082 keys. Skipped 0 keys. Iteration took: 13.697489224s
 badger 2020/11/01 16:23:49 DEBUG: LOG Compact. Added 377466 keys. Skipped 58 keys. Iteration took: 19.939079915s
 badger 2020/11/01 16:23:51 DEBUG: Discard stats: map[]
 badger 2020/11/01 16:23:52 INFO: LOG Compact 2->3, del 3 tables, add 2 tables, took 34.624752582s
 badger 2020/11/01 16:23:52 INFO: Compaction for level: 2 DONE
 badger 2020/11/01 16:23:53 DEBUG: LOG Compact. Added 20288 keys. Skipped 0 keys. Iteration took: 18.345364364s
 badger 2020/11/01 16:23:54 DEBUG: Discard stats: map[]
 badger 2020/11/01 16:23:54 INFO: LOG Compact 2->3, del 3 tables, add 2 tables, took 37.287094876s
 badger 2020/11/01 16:23:54 INFO: Compaction for level: 2 DONE
 badger 2020/11/01 16:23:54 DEBUG: LOG Compact. Added 27458 keys. Skipped 0 keys. Iteration took: 7.523616951s
 badger 2020/11/01 16:23:54 DEBUG: LOG Compact. Added 6566 keys. Skipped 0 keys. Iteration took: 17.693888995s
 badger 2020/11/01 16:23:56 DEBUG: Discard stats: map[]
 badger 2020/11/01 16:23:56 INFO: LOG Compact 2->3, del 3 tables, add 2 tables, took 38.748789049s
 badger 2020/11/01 16:23:56 INFO: Compaction for level: 2 DONE
 badger 2020/11/01 16:23:57 DEBUG: LOG Compact. Added 12734 keys. Skipped 0 keys. Iteration took: 2.417527616s
 badger 2020/11/01 16:23:58 DEBUG: LOG Compact. Added 5835 keys. Skipped 0 keys. Iteration took: 1.357336419s
 stream closed

Cleaning up the disk solves it, but is there anything else I could try?
Thank you.

Rest endpoint with query support

Several people have requested a REST endpoint to query history programmatically. The most recent use case was finding all the timestamps for image changes in a set of deployments.

As part of this work, it would be interesting to look around and see if there are any good golang libraries that will give us a nice query language so we don't need to make our own.

Additionally, we will likely want to re-implement the debug list-keys interface on top of this query REST endpoint. That would cut down on duplicated code, give a better debug interface, and help us build a good interface by using it ourselves.

Repeated "Failed to list *unstructured.Unstructured: the server could not find the requested resource" error message

I just compiled today's master locally (on Linux / Go 1.16) and ran it using both the local kubectl config and in-cluster, and got loads of:

E0520 15:13:22.786851  934250 reflector.go:156] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:108: Failed to list *unstructured.Unstructured: the server could not find the requested resource

Any suggestions on how to debug this? It sort of sounds similar to #113 but still happens for me, with both 1.18 and 1.19 clusters in AWS EKS. I do have a bunch of CRDs installed & suspect it's related to that, as when I add -watch-crds:false to the command line the errors disappear...

Support event filter

Currently our event history shows all of the events for a k8s resource, but people may only be interested in certain kinds of events; we can add a filter function for this.

EventCount processing will duplicate event counts when crossing partition boundaries

The processing code in pkg/sloop/processing/eventcount.go takes a new event and subtracts the count and time range from the previous instance of this event from the watch table. But it only looks in the current partition for the previous event. We now have GetPreviousKey() in the generated table code which works across partitions and is a much more optimized scan.

Feature Request: Ability to pick the start time

Currently Sloop only lets you pick the duration of the timeline ending roughly "now". I would like the ability to change the start time. This is useful when investigating something that happened in the past over a short time duration (for example, I want to look at 2am-3am last Tuesday).

Config file settings are not applied

The values in a provided config file are not being applied when Sloop boots.

I suspect this is because the config file is being unmarshalled as yaml, but the SloopConfig struct is annotated with json annotations.

Add per-container visibility to Sloop

Oftentimes a more complex pod will have several init containers and several normal containers. The init containers run one after another, then all the normal containers run at once. When debugging container timing it would be nice to include some visual way to see what's going on. (kubectl describe pod gives all the raw info, but it's not a quick way to figure out the issue.)
