stern

Fork of the discontinued wercker/stern

Stern allows you to tail multiple pods on Kubernetes and multiple containers within each pod. Each result is color-coded for quicker debugging.

The query is a regular expression or a Kubernetes resource in the form <resource>/<name>, so pod names can easily be filtered and you don't need to specify the exact id (for instance, you can omit the deployment id). If a pod is deleted it gets removed from the tail, and if a new pod is added it automatically gets tailed.

When a pod contains multiple containers, Stern tails all of them without requiring a separate command for each. Use the --container flag to limit which containers are shown; by default, all containers are tailed.
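
For example, to tail only containers whose name matches gateway in pods matching web (both names here are illustrative):

stern web --container gateway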

Installation

Download binary

Download a binary release

Build from source

go install github.com/stern/stern@latest

asdf (Linux/macOS)

If you use asdf, you can install stern like this:

asdf plugin-add stern
asdf install stern latest

Homebrew (Linux/macOS)

If you use Homebrew, you can install stern like this:

brew install stern

Krew (Linux/macOS/Windows)

If you use Krew, the package manager for kubectl plugins, you can install stern like this:

kubectl krew install stern

Usage

stern pod-query [flags]

The pod-query is a regular expression or a Kubernetes resource in the form <resource>/<name>.

The query is a regular expression when it is not a Kubernetes resource, so you could provide "web-\w" to tail web-backend and web-frontend pods but not web-123.
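
For example:

stern "web-\w"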

When the query is in the form <resource>/<name> (exact match), you can select all pods belonging to the specified Kubernetes resource, such as deployment/nginx. Supported Kubernetes resources are pod, replicationcontroller, service, daemonset, deployment, replicaset, statefulset and job.
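
For example, to tail all pods belonging to a StatefulSet (the resource name is illustrative):

stern statefulset/mysql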

cli flags

flag default purpose
--all-namespaces, -A false If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.
--color auto Force set color output. 'auto': colorize if tty attached, 'always': always colorize, 'never': never colorize.
--completion Output stern command-line completion code for the specified shell. Can be 'bash', 'zsh' or 'fish'.
--config ~/.config/stern/config.yaml Path to the stern config file
--container, -c .* Container name when multiple containers in pod. (regular expression)
--container-state all Tail containers with state in running, waiting, terminated, or all. 'all' matches all container states. To specify multiple states, repeat this or set a comma-separated value.
--context The name of the kubeconfig context to use
--ephemeral-containers true Include or exclude ephemeral containers.
--exclude, -e [] Log lines to exclude. (regular expression)
--exclude-container, -E [] Container name to exclude when multiple containers in pod. (regular expression)
--exclude-pod [] Pod name to exclude. (regular expression)
--field-selector Selector (field query) to filter on. If present, defaults to ".*" for the pod-query.
--highlight, -H [] Log lines to highlight. (regular expression)
--include, -i [] Log lines to include. (regular expression)
--init-containers true Include or exclude init containers.
--kubeconfig Path to the kubeconfig file to use for CLI requests.
--max-log-requests -1 Maximum number of concurrent logs to request. Defaults to 50, but 5 when specifying --no-follow.
--namespace, -n Kubernetes namespace to use. Defaults to the namespace configured in the Kubernetes context. To specify multiple namespaces, repeat this or set a comma-separated value.
--no-follow false Exit when all logs have been shown.
--node Node name to filter on.
--only-log-lines false Print only log lines
--output, -o default Specify a predefined template. Currently supported: [default, raw, json, extjson, ppextjson]
--prompt, -p false Toggle interactive prompt for selecting 'app.kubernetes.io/instance' label values.
--selector, -l Selector (label query) to filter on. If present, defaults to ".*" for the pod-query.
--show-hidden-options false Print a list of hidden options.
--since, -s 48h0m0s Return logs newer than a relative duration like 5s, 2m, or 3h.
--stdin false Parse logs from stdin. All Kubernetes related flags are ignored when it is set.
--tail -1 The number of lines from the end of the logs to show. Defaults to -1, showing all logs.
--template Template to use for log lines, leave empty to use --output flag.
--template-file, -T Path to template to use for log lines, leave empty to use --output flag. It overrides --template option.
--timestamps, -t Print timestamps with the specified format. One of 'default' or 'short'. If specified but without value, 'default' is used.
--timezone Local Set timestamps to specific timezone.
--verbosity 0 Number of the log level verbosity
--version, -v false Print the version and exit.

See stern --help for details.
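
Flags that accept multiple values, such as --namespace and --container-state, can be repeated or given a comma-separated value; for example (namespace names are illustrative):

stern . -n ns1,ns2
stern . --container-state running --container-state terminated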

Stern will use the $KUBECONFIG environment variable if set. If both the environment variable and the --kubeconfig flag are given, the flag takes precedence.
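
For example, the following uses ~/.kube/prod even though $KUBECONFIG points elsewhere (both paths are illustrative):

KUBECONFIG=~/.kube/dev stern --kubeconfig ~/.kube/prod .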

config file

You can use the config file to change the default values of stern options. The default config file path is ~/.config/stern/config.yaml.

# <flag name>: <value>
tail: 10
max-log-requests: 999
timestamps: short

You can change the config file path with the --config flag or the STERNCONFIG environment variable.
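
For example (the path is illustrative):

STERNCONFIG=/tmp/stern.yaml stern .
stern --config /tmp/stern.yaml .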

templates

stern supports outputting custom log messages. There are a few predefined templates which you can use by specifying the --output flag:

output description
default Displays the namespace, pod and container, and decorates it with color depending on --color
raw Only outputs the log message itself, useful when your logs are json and you want to pipe them to jq
json Marshals the log struct to json. Useful for programmatic purposes
extjson Parses the message as json and outputs it colorized
ppextjson Parses the message as json and outputs pretty-printed, colorized json

It accepts a custom template through the --template flag, which will be compiled to a Go template and then used for every log message. This Go template will receive the following struct:

property type description
Message string The log message itself
NodeName string The node name where the pod is scheduled on
Namespace string The namespace of the pod
PodName string The name of the pod
ContainerName string The name of the container

The following functions are available within the template (besides the builtin functions):

func arguments description
json object Marshal the object and output it as a json text
color color.Color, string Wrap the text in color (.ContainerColor and .PodColor provided)
parseJSON string Parse string as JSON
tryParseJSON string Attempt to parse string as JSON, return nil on failure
extractJSONParts string, ...string Parse string as JSON and concatenate the given keys.
tryExtractJSONParts string, ...string Attempt to parse string as JSON and concatenate the given keys; returns the text unchanged on failure
extjson string Parse the object as json and output colorized json
ppextjson string Parse the object as json and output pretty-print colorized json
toRFC3339Nano object Parse timestamp (string, int, json.Number) and output it using RFC3339Nano format
toTimestamp object, string [, string] Parse timestamp (string, int, json.Number) and output it using the given layout in the timezone that is optionally given (defaults to UTC).
levelColor string Print log level using appropriate color
colorBlack string Print text using black color
colorRed string Print text using red color
colorGreen string Print text using green color
colorYellow string Print text using yellow color
colorBlue string Print text using blue color
colorMagenta string Print text using magenta color
colorCyan string Print text using cyan color
colorWhite string Print text using white color
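
For example, a sketch that combines tryParseJSON with toTimestamp; the JSON field names ts and msg are assumptions about your log format:

stern --template '{{with $d := .Message | tryParseJSON}}{{toTimestamp $d.ts "15:04:05" "Asia/Tokyo"}} {{$d.msg}}{{else}}{{.Message}}{{end}}{{"\n"}}' backend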

Log level verbosity

You can configure the log level verbosity with the --verbosity flag. It is useful for troubleshooting, when you want to see how stern interacts with the Kubernetes API server.

Increasing the verbosity increases the number of logs. --verbosity 6 would be a good starting point.
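
For example:

stern . --verbosity 6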

Max log requests

Stern limits the maximum number of concurrent log requests to prevent unintentional load on a cluster. The limit can be configured with the --max-log-requests flag.

The behavior and the default are different depending on the presence of the --no-follow flag.

--no-follow default behavior
specified 5 limits the number of concurrent logs to request
not specified 50 exits with an error if it reaches the concurrent limit

The combination of --max-log-requests 1 and --no-follow will be helpful if you want to show logs in order.
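
For example, to print logs strictly in order:

stern . --no-follow --max-log-requests 1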

Examples:

Tail all logs from all namespaces

stern . --all-namespaces

Tail the kube-system namespace without printing any prior logs

stern . -n kube-system --tail 0

Tail the gateway container running inside of the envvars pod on staging

stern envvars --context staging --container gateway

Tail the staging namespace excluding logs from istio-proxy container

stern -n staging --exclude-container istio-proxy .

Tail the kube-system namespace excluding logs from kube-apiserver pod

stern -n kube-system --exclude-pod kube-apiserver .

Show auth activity from 15min ago with timestamps

stern auth -t --since 15m

Show all logs from the last 5 minutes, sorted by time

stern --since=5m --no-follow --only-log-lines -A -t . | sort -k4

Show auth activity with timestamps in a specific timezone (default is your local timezone)

stern auth -t --timezone Asia/Tokyo

Follow the development of some-new-feature in minikube

stern some-new-feature --context minikube

View pods from another namespace

stern kubernetes-dashboard --namespace kube-system

Tail the pods filtered by run=nginx label selector across all namespaces

stern --all-namespaces -l run=nginx

Follow the frontend pods in canary release

stern frontend --selector release=canary

Tail the pods on kind-control-plane node across all namespaces

stern --all-namespaces --field-selector spec.nodeName=kind-control-plane

Tail the pods created by deployment/nginx

stern deployment/nginx

Pipe the log message to jq:

stern backend -o json | jq .

Only output the log message itself:

stern backend -o raw

Output using a custom template:

stern --template '{{printf "%s (%s/%s/%s/%s)\n" .Message .NodeName .Namespace .PodName .ContainerName}}' backend

Output using a custom template with stern-provided colors:

stern --template '{{.Message}} ({{.Namespace}}/{{color .PodColor .PodName}}/{{color .ContainerColor .ContainerName}}){{"\n"}}' backend

Output using a custom template with parseJSON:

stern --template='{{.PodName}}/{{.ContainerName}} {{with $d := .Message | parseJSON}}[{{$d.level}}] {{$d.message}}{{end}}{{"\n"}}' backend

Output using a custom template that tries to parse JSON and falls back to the raw message on failure:

stern --template='{{.PodName}}/{{.ContainerName}} {{ with $msg := .Message | tryParseJSON }}[{{ colorGreen (toRFC3339Nano $msg.ts) }}] {{ levelColor $msg.level }} ({{ colorCyan $msg.caller }}) {{ $msg.msg }}{{ else }} {{ .Message }} {{ end }}{{"\n"}}' backend

Load custom template from file:

stern --template-file=~/.stern.tpl backend

Trigger the interactive prompt to select an 'app.kubernetes.io/instance' label value:

stern -p

Output log lines only:

stern . --only-log-lines

Read from stdin:

stern --stdin < service.log

Completion

Stern supports command-line auto-completion for bash, zsh, or fish. stern --completion=(bash|zsh|fish) outputs shell completion code which works when evaluated in .bashrc, etc., for the specified shell. In addition, Stern supports dynamic completion for --namespace, --context, --node, a resource query in the form <resource>/<name>, and flags with pre-defined choices.

If you use bash, stern's bash completion code depends on bash-completion. On macOS, you can install it with Homebrew as follows:

# If running Bash 3.2
brew install bash-completion

# or, if running Bash 4.1+
brew install bash-completion@2

Note that bash-completion must be sourced before sourcing the stern bash completion code in .bashrc.

source "$(brew --prefix)/etc/profile.d/bash_completion.sh"
source <(stern --completion=bash)

If installed via Krew, use:

source <(kubectl stern --completion bash)
complete -o default -F __start_stern kubectl stern

If you use zsh, just source the stern zsh completion code in .zshrc.

source <(stern --completion=zsh)

If you use the fish shell, just source the stern fish completion code.

stern --completion=fish | source

# To load completions for each session, execute once:
stern --completion=fish >~/.config/fish/completions/stern.fish

Running with container

You can also run stern as a container:

docker run ghcr.io/stern/stern --version

If you are using a minikube cluster, you need to run a container as follows:

docker run --rm -v "$HOME/.minikube:$HOME/.minikube" -v "$HOME/.kube:$HOME/.kube" -e KUBECONFIG="$HOME/.kube/config" ghcr.io/stern/stern .

You can find image tags at https://github.com/orgs/stern/packages/container/package/stern.

Running in Kubernetes Pods

If you want to use stern in Kubernetes Pods, you need to create the following ClusterRole and bind it to a ServiceAccount.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: stern
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]

Contributing to this repository

Please see CONTRIBUTING for details.

Contributors

akupila, everpeace, fardog, fd, floryut, grosser, guettli, hatchan, hogklint, jayme-github, jlamillan, kenden, kokaz, markxnelson, michalschott, niamster, opensource21, partcyborg, prune998, rkmathi, shun0309, shutefan, stuart-warren, superbrothers, tksm, tmszdmsk, uesyn, willand31, wjam, ybudimir


Issues

Correct Stern clusterrole

Note: this issue was imported from https://github.com/wercker/stern/issues/106, but it was originally created by Bulat-Gumerov...

Hey guys, I'm on EKS
I need to create a ClusterRole for developers that allows tailing logs with stern and running commands inside k8s pods. I've created this ClusterRole:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev-namespace
  name: exec-and-get-logs-from-pods
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]

Developers can run kubectl logs, kubectl get pods and kubectl exec without any problem, but stern fails with the error: failed to set up watch: failed to set up watch: unknown (get pods). I don't want to give the cluster-admin or admin role to our developers.
What's the correct clusterrole for stern?
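
Note: judging by the "failed to set up watch" error, and by the ClusterRole shown in the Running in Kubernetes Pods section above, the missing piece is most likely the watch verb on pods; a sketch of the amended role:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: exec-and-get-logs-from-pods
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]  # "watch" added
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]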

Cannot find package "context" error, even after govendor sync

Note: this issue was imported from https://github.com/wercker/stern/issues/100, but it was originally created by mirvpgh...

I'm still having trouble installing stern (on Xubuntu 16.04). I ran govendor sync in the stern directory, and it looked like it ran for several seconds. When I run go install (or go build), I get this error:
cmd/cli.go:18:2: cannot find package "context" in any of:
/root/go/src/github.com/wercker/stern/vendor/context (vendor tree)
/usr/lib/go-1.6/src/context (from $GOROOT)
/root/go/src/context (from $GOPATH)

Any suggestions?

Can it support tailing log files, too?

Note: this issue was imported from https://github.com/wercker/stern/issues/54, but it was originally created by ryuheechul...

Just tried stern and it was pretty awesome, thank you for your work!

In our case though, sometimes some logs are not being printed as stdout/stderr. Some logs are being generated as just files.
So I thought it would be great if stern can support tailing some files in containers, too!

Feature request: print all available logs and exit

Note: this issue was imported from https://github.com/wercker/stern/issues/72, but it was originally created by emptywee...

Hello.

Is it possible to add a flag to stern to stop tailing after all available logs are shown? Would be very useful when you need to print logs from multiple different pods (using a regexp or wildcard or label selector) and pipe them to grep. Currently it keeps tailing no matter what and you don't really know if stern has shown all recent logs or still printing, because grep is filtering that info out.

Thank you.

Stern seems to not follow logs of new containers on restart

Note: this issue was imported from https://github.com/wercker/stern/issues/70, but it was originally created by valer-cara...

What

Stern does not print logs from restarted containers. If I re-run stern after a container failure/restart, the new container's logs are printed.

Reproduce

Create the following failing pod:

apiVersion: v1
kind: Pod
metadata:
  name: foo

spec:
  containers:
    - name: foo
      image: busybox
      command: ["sh", "-c", "touch /log; tail -f /log"]
      readinessProbe:
        initialDelaySeconds: 5
        timeoutSeconds: 5
        exec:
          command: ["sh", "-c", "echo $(date) ::: Readiness probe >> /log; grep imready /foo"]

      livenessProbe:
        initialDelaySeconds: 5
        timeoutSeconds: 5
        exec:
          command: ["sh", "-c", "echo $(date) ::: Liveness probe >> /log; grep imalive /foo"]

Check logs:

stern foo

bug: stern modifies kube config

Note: this issue was imported from https://github.com/wercker/stern/issues/119, but it was originally created by kalioz...

I recently started using Azure AD authentication for my cluster, which produces a user in .kube/config like so:

- name: myCluster
  user:
    auth-provider:
      config:
        access-token: <accessToken>
        apiserver-id: <apiServerId>
        client-id: <clientId>
        environment: AzurePublicCloud
        expires-in: "3599"
        expires-on: "1568368062"
        refresh-token: <token>
        tenant-id: <tenantId>
      name: azure

stern seems to modify this file and remove the environment: AzurePublicCloud line, which forces the user to reauthenticate after each use of stern.
Resulting file:

- name: myCluster
  user:
    auth-provider:
      config:
        access-token: <accessToken>
        apiserver-id: <apiServerId>
        client-id: <clientId>
        expires-in: "3599"
        expires-on: "1568368062"
        refresh-token: <token>
        tenant-id: <tenantId>
      name: azure

Re-adding the missing line allows the user to use the old authentication token without reauthenticating.

The surprising part is that it only forces the user to reauthenticate in kubectl; stern keeps working with the modified file.

For those having the same problem, I can temporarily bypass it by using admin credentials (they use certificates to authenticate, so stern doesn't affect them).

stern: version 1.11.0
kubectl client: 1.15.2
kubectl server: 1.14.6

Docker for stern

Note: this issue was imported from https://github.com/wercker/stern/issues/114, but it was originally created by karancode...

I am using stern on my macbook and I just realized that while switching to windows/ubuntu I have to either build it from source or download the binary.
But I cannot use it as a docker container.

This issue is an enhancement request for a docker image for stern.

Interleaved log output

Note: this issue was imported from https://github.com/wercker/stern/issues/96, but it was originally created by sebastianvoss...

When pods are emitting logs simultaneously the output is sometimes interleaved. This makes it very hard to read the output of stern. I'm using version 1.10.0 (k8s cluster AWS EKS version 2).

Sample output:

my-deployment-6854597977-8ffc6 my-app my-deployment-6854597977-bzmw9 23:25:41.614 [blaze-selector-0-3] INFO  org.http4s.blaze.channel.nio1.NIO1SocketServerGroup - Accepted connection from /xxx:61213
my-app 23:25:41.614 [blaze-selector-0-0] INFO  org.http4s.blaze.channel.nio1.NIO1SocketServerGroup - Accepted connection from /xxx:47462

Should be:

my-deployment-6854597977-8ffc6 my-app 23:25:41.614 [blaze-selector-0-3] INFO  org.http4s.blaze.channel.nio1.NIO1SocketServerGroup - Accepted connection from /xxx:61213
my-deployment-6854597977-bzmw9 my-app 23:25:41.614 [blaze-selector-0-0] INFO  org.http4s.blaze.channel.nio1.NIO1SocketServerGroup - Accepted connection from /xxx:47462

Suggestion - ARM support

Note: this issue was imported from https://github.com/wercker/stern/issues/124, but it was originally created by alexellis...

ARM support is usually a case of adding GOARM=6 and creating a separate binary called stern-armhf or similar through cross-compilation. We have extensive experience of this in the https://github.com/openfaas/ community and with newer projects like inlets and k3sup.

Would you be interested in ARM / Raspberry Pi support and binaries being made available?

Suggestion: show sequential init container logs first, then pod logs

Note: this issue was imported from https://github.com/wercker/stern/issues/120, but it was originally created by drnic...

Currently the logs from all containers + init containers are mashed together.

We know that init containers were run before main containers, and we know that init containers were run sequentially.

Would it make sense for stern to look up the init container names, their sequence, and show those logs first; and then show normal containers' logs mashed together?

unable to authenticate running inside cluster

Note: this issue was imported from https://github.com/wercker/stern/issues/60, but it was originally created by cmosetick...

stern does not fallback to in-cluster authentication mechanisms.

While running kubectl in a container inside a cluster, it can retrieve its cluster configuration without a .kube/config file. This is inherent to kubectl; it does not need special command line flags.

For example:
https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go/examples/in-cluster-client-configuration

client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.

I'm trying to use the Gitlab Kubernetes Deploy image, which has kubectl baked into the image.
Dockerfile: https://gitlab.com/gitlab-examples/kubernetes-deploy/blob/master/Dockerfile
location: docker pull registry.gitlab.com/gitlab-examples/kubernetes-deploy

Using that image which has kubectl baked in, you can see that kubectl has no trouble communicating with the cluster. It is able to create a namespace and pod. (kubectl is running in a container on the cluster)

kubectl version
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.2", GitCommit:"8e5584fe95c01f2b3a9d60fcccef8fadbb4c8f88", GitTreeState:"clean", BuildDate:"2017-07-12T21:29:15Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl cluster-info
Kubernetes master is running at https://100.64.0.1:443
KubeDNS is running at https://100.64.0.1:443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl apply -f k8s/
namespace "my-namespace" configured
pod "my-pod" created

However at the same moment, stern is not able to authenticate to the cluster:

./stern -n my-namespace -l name=my-app
failed to get client config: stat /root/.kube/config: no such file or directory

Stern is very useful because it can be launched and wait for items that are not even running on the cluster yet.
While the use cases for such a feature may be limited, it could still be incredibly useful to authenticate to the cluster using the native mechanisms already present in Kubernetes itself.

Tailing logs with JSON output.

Note: this issue was imported from https://github.com/wercker/stern/issues/97, but it was originally created by davinchia...

Do the current stern options support JSON log output from pods with traditional stern color?

A subset of our pods currently output JSON logs with additional fields. An example Stern output (Stern 1.10):

dsj-stat-builder-service-daemon-55444b74cb-zwtfx dsj-stat-builder-service-daemon {"@timestamp":"2019-01-08T03:13:41.573Z","source_host":"dsj-stat-builder-service-daemon-55444b74cb-zwtfx","file":"Bucket.java","method":"readMetadataFromFS","level":"INFO","line_number":"441","thread_name":"Step Runner for count-records-and-segments","@Version":1,"logger_name":"com.rapleaf.formats.bucket.Bucket","message":"Metadata obtained at gs://liveramp-eng-dist-incoming-data/input_data/data/spruce/persistent/data_sync_requests_by_job_id/1610409_1546917039906/bucket.meta","class":"com.rapleaf.formats.bucket.Bucket","mdc":{"run_identifier":"2010629208","joblet_identifier":"1736402519276071327217165","application":"dsj-stat-builder","workflow_execution_id":"344533283","workflow_attempt_id":"355297393","source_host":"dsj-stat-builder-service-daemon-55444b74cb-zwtfx","team":"dev-dist"}}

I am able to parse these using jq, but haven't been able to figure out how to preserve Stern's color coding while doing so. Is this currently supported (and I need to read the docs better), or is additional parsing of json log messages not yet a feature?

Show all logs from all containers from all states together including previous containers

Note: this issue was imported from https://github.com/wercker/stern/issues/117, but it was originally created by kivagant-ba...

Related to:

Stern is very useful and great but there are some cases where it requires additional commands.
It would be nice to have a single option to print-and-exit with everything that matches the given filter:

  • logs from active pods and all their containers inside.
  • logs from terminated/exited pods (completed jobs) and all their containers.
  • logs from all previous containers that crashed/exited and restarted by Kubernetes.
  • everything else that Stern can support.

So the feature request combines other issues together to get all available logs without any combination of parameters.

Use case:

  1. Install a Helm chart without analyzing its resources.
  2. With stern print logs from everything installed based on the project name including jobs, crashed pods, running pods etc.

certificate signed by unknown authority

Note: this issue was imported from https://github.com/wercker/stern/issues/121, but it was originally created by kotarusv...

stern -n kube-system etcd

failed to set up watch: failed to set up watch: Get https://api_host_lb_vip(fqdn):443/api/v1/namespaces/kube-system/pods?watch=true: x509: certificate signed by unknown authority

I don't think any issue with cert. API servers configured with prod grade certs. all tools ( api, kubectl, webuI) all works fine

Example:

curl https://api_host_lb_vip:443/healthz
ok

Is it a bug or something else?

Srinivas Kotaru

Stern bash completions clash with kubectl completions

Note: this issue was imported from https://github.com/wercker/stern/issues/137, but it was originally created by drzero42...

I just spent some time troubleshooting a weird problem I was experiencing with kubectl on my system. I noticed that recently I could not get kubectl to autocomplete names of deployments, statefulsets, replicasets and pods. It would gladly autocomplete namespaces, crds, storageclasses, secrets, configmaps and others. As most of my cluster don't have anything in the default namespace, but rather put everything into other namespaces, at first I thought name completion was totally broken, but it turned out that it was just looking in the default namespace, even though I was trying to tab-complete the name of a pod in a different namespace, eg: kubectl -n kube-system describe pod <tab><tab>.

I load bash completions from my .bashrc with source <(kubectl completion bash). I also load completions for a bunch of other commands, like minikube, kind, argocd and also stern. I decided to try loading the kubectl completions by hand in my terminal, which made it work. So I went to my .bashrc and started experimenting with the order that I loaded the completions in, and quickly discovered that if stern completions are loaded after kubectl completions, then I can't autocomplete names from any other namespace than default.

I am in no way an expert on how to write bash completion functions, but comparing the output of stern --completion bash with the output of kubectl completion bash, I see stern defining a number of functions that kubectl also defines. Thus, if the stern bash completions are loaded after the kubectl completions, it screws them up. It would be great if the stern bash completions could work without fiddling with kubectl completions ;)

Stern does not support env variable settings defined in kube config

Note: this issue was imported from https://github.com/wercker/stern/issues/122, but it was originally created by michaelgeorgeattard...

Given the following valid kube config section used to connect to an EKS cluster:

users:
- name: xxx
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - xxx
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: foo

Stern or its dependencies are not switching the AWS_PROFILE to foo, thus failing to connect to the EKS cluster.

stern with grep and redirect to file not working

Note: this issue was imported from https://github.com/wercker/stern/issues/136, but it was originally created by deepaksood619...

stern decision-engine | grep -i "bank response source set" > de_26.log

This is not working

cat de_26.log is empty

stern decision-engine | grep -i "bank response source set" is showing output

stern decision-engine | grep -i "bank response source set"

+ decision-engine-7c7d5dd887-j77zq › decision-engine
+ decision-engine-7c7d5dd887-p949d › decision-engine
decision-engine-7c7d5dd887-p949d decision-engine [2020-04-25 12:50:08,205] INFO in bank_api: bank response source set:
decision-engine-7c7d5dd887-j77zq decision-engine-7c7d5dd887-p949d decision-engine decision-engine [2020-04-25 13:54:57,328] INFO in bank_api: bank response source set:
decision-engine-7c7d5dd887-p949d decision-engine [2020-04-25 16:42:34,288] INFO in bank_api: bank response source set:

There are currently two pods with name starting with decision-engine

Expected behavior:

Output should be redirected to file log_26.log
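
Note: this is commonly caused by grep buffering its output when stdout is not a terminal, rather than by stern itself. With GNU grep, forcing line buffering is a typical workaround (the filename is from the report above):

stern decision-engine | grep -i --line-buffered "bank response source set" > de_26.log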

Create chocolatey package and push to community feed

Note: this issue was imported from https://github.com/wercker/stern/issues/88, but it was originally created by RichardSlater...

Chocolatey is a package manager for windows, as of the time of this issue stern is not available in the community feed. Given that many other kubernetes dependencies are available on Chocolatey I think this would be a great addition.

Instructions:

  1. How do I create a package?
  2. Pushing a package to Chocolatey

I'm happy to submit this change as a pull request, if the following are true:

  1. Project maintainers want to include pushes to platform package managers in this project?
  2. There is a way to run chocolatey to package and push through the project's wercker pipeline?

Essentially I'm asking for permission and feasibility before committing a change.

Detect lost connection

Note: this issue was imported from https://github.com/wercker/stern/issues/55, but it was originally created by pbvie...

It would be great if the script would indicate when the stream of a log is broken. At the moment it's not possible to know whether there are no new log message or if the connection has been lost.

Like error: unexpected EOF when running kubectl logs

Getting opening stream error after a while of tailing multiple pods

Note: this issue was imported from https://github.com/wercker/stern/issues/112, but it was originally created by Aracki...

After tailing a few pods for a while I am getting:

Error opening stream to <namespace>/<pod>: <container>
: Get https://<URL>/api/v1/namespaces/<namespace>/pods/<pod>/log?container=<container>&follow=true&sinceSeconds=172800: unexpected EOF

The exact pods have not been restarted, so containers didn't stop writing logs to stdout.

What is causing this and can it be caused by stern itself?

Reconnect when receiving GOAWAY

Note: this issue was imported from https://github.com/wercker/stern/issues/130, but it was originally created by grosser...

when streaming logs and killing 1/3 api servers, stern spits out this:

ERROR: logging before flag.Parse: E0205 21:13:53.792712   21184 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""

(I think the api-server it was connected to died) ... but then it just sits there and does nothing even if new log lines arrive ... it should retry watching instead (ideally with an offset so it does not duplicate)
