
kubectl's Introduction

Kubectl

kubectl logo

Build Status GoDoc

The k8s.io/kubectl repo is used to track issues for the kubectl CLI distributed with k8s.io/kubernetes. It also contains packages intended for use by client programs; for example, these packages are vendored into k8s.io/kubernetes for use in the kubectl CLI. That client will eventually move here as well.

Contribution Requirements

  • Full unit-test coverage.

  • Go tools compliant (go get, go test, etc.); it must be vendorable into other projects.

  • No dependence on k8s.io/kubernetes. Dependence on other repositories is fine.

  • Code must be usefully commented, not only for developers on the project but also for external users of these packages.

  • When reviewing PRs, you are encouraged to use Golang's code review comments page.

  • Packages in this repository should aspire to implement sensible, small interfaces and import a limited set of dependencies.

Community, discussion, contribution, and support

See this document for how to reach the maintainers of this project.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

kubectl's People

Contributors

alexzielenski, apelisse, ardaguclu, brianpursley, dims, droot, hoegaarden, jefftree, julianvmodesto, k8s-ci-robot, k8s-publishing-bot, knverey, lauchokyip, liggitt, liujingfang1, marckhouzam, monopole, mpuckett159, oke-py, pacoxu, pandaamanda, pohly, pwittrock, sallyom, seans3, soltysh, thockin, totherme, verb, zhouya0


kubectl's Issues

Adding "PodPreset" to "kubectl get help"

@gyliu513 commented on Wed May 10 2017

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kubernetes version (use kubectl version):

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:
The PodPreset was available in Kubernetes 1.6, so we should expose this in the kubectl CLI.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Too many functions are coupled to cobra.Command

Many of the functions in kubectl/cmd require instances of cobra.Command, a complex and occasionally counterintuitive struct. In many cases the business logic of our application is very tightly coupled to cobra.Command.

This is bad for a few reasons:

  1. Switching to an alternative configuration system for any reason is impossible.
  2. Testing even simple methods requires "stubbing" &cobra.Command{}.
  3. We can't refactor similar commands to use a common codepath (as in #11) without introducing unpredictability due to the differences in the Command instances.

cobra.Command would be better used as a value retrieval system.

Unable to kubectl get -o jsonpath annotation value

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kubectl get annotation json

This feature request seems like it would help in this scenario: kubernetes/kubernetes#19817


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG

Kubernetes version (use kubectl version):

Output:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:
Unable to get annotation value when there is a / in the annotation key.

What you expected to happen:
Get the annotation value (see below).

How to reproduce it (as minimally and precisely as possible):

Data:
{
    "apiVersion": "extensions/v1beta1",
    "kind": "Ingress",
    "metadata": {
        "annotations": {
            "description": "my frontend",
            "ingress.gcp.kubernetes.io/pre-shared-cert": "tony-cert-1",
            "ingress.kubernetes.io/backends": "{\"k8s-be-30237--d785be79bbf6d463\":\"HEALTHY\"}",
            "ingress.kubernetes.io/https-forwarding-rule": "k8s-fws-default-echo-app-tls-2--d785be79bbf6d463",
            "ingress.kubernetes.io/https-target-proxy": "k8s-tps-default-echo-app-tls-2--d785be79bbf6d463",
            "ingress.kubernetes.io/ssl-cert": "tony-cert-1",
            "ingress.kubernetes.io/url-map": "k8s-um-default-echo-app-tls-2--d785be79bbf6d463",
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"ingress.gcp.kubernetes.io/pre-shared-cert\":\"tony-cert-1\",\"kubernetes.io/ingress.allow-http\":\"false\",\"kubernetes.io/ingress.global-static-ip-name\":\"make-static\"},\"name\":\"echo-app-tls-2\",\"namespace\":\"default\"},\"spec\":{\"backend\":{\"serviceName\":\"echo-app\",\"servicePort\":88}}}\n",
            "kubernetes.io/ingress.allow-http": "false",
            "kubernetes.io/ingress.global-static-ip-name": "make-static"
        },
        "creationTimestamp": "2017-05-24T19:57:16Z",
        "generation": 1,
        "name": "echo-app-tls-2",
        "namespace": "default",
        "resourceVersion": "17247450",
        "selfLink": "/apis/extensions/v1beta1/namespaces/default/ingresses/echo-app-tls-2",
        "uid": "2fe93467-40bb-11e7-a242-42010a8000f8"
    },
    "spec": {
        "backend": {
            "serviceName": "echo-app",
            "servicePort": 88
        }
    },
    "status": {
        "loadBalancer": {
            "ingress": [
                {
                    "ip": ".............."
                }
            ]
        }
    }
}
Works:
$ kubectl get ing/echo-app-tls-2 -o jsonpath='{.metadata.annotations.description}'
my frontend
Does not work:
$ kubectl get ing/echo-app-tls-2 -o jsonpath='{.metadata.annotations."ingress.kubernetes.io/url-map"}'
<blank>

$ kubectl get ing/echo-app-tls-2 -o jsonpath='{.metadata.annotations.ingress.kubernetes.io/url-map}'
<blank>

$ kubectl get ing/echo-app-tls-2 -o jsonpath="{.metadata.annotations.'ingress.kubernetes.io/url-map'}"
<blank>

$ kubectl get ing/echo-app-tls-2 -o jsonpath="{.metadata.annotations[ingress.kubernetes.io/url-map]}"
error: error parsing jsonpath {.metadata.annotations[ingress.kubernetes.io/url-map]}, invalid array index ingress.kubernetes.io/url-map

$ kubectl get ing/echo-app-tls-2 -o jsonpath="{.metadata.annotations['ingress.kubernetes.io/url-map']}"
<blank>

$ kubectl get ing/echo-app-tls-2 -o jsonpath='{.metadata.annotations["ingress.kubernetes.io/url-map"]}'
error: error parsing jsonpath {.metadata.annotations["ingress.kubernetes.io/url-map"]}, invalid array index "ingress.kubernetes.io/url-map"

Anything else we need to know:
Tested the above data with https://jsonpath.curiousconcept.com/ and these selectors were able to get the value:

  • .metadata.annotations['ingress.kubernetes.io/url-map']
  • .metadata.annotations.['ingress.kubernetes.io/url-map']
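A workaround sketch (my addition, not from the report; verify against your kubectl version): the go-template output format can index map keys that contain dots and slashes, and newer kubectl releases also accept backslash-escaped dots inside a jsonpath key.

# Go template with the index function handles annotation keys containing '.' and '/':
$ kubectl get ing/echo-app-tls-2 -o go-template='{{index .metadata.annotations "ingress.kubernetes.io/url-map"}}'

# Assumed to work on newer kubectl releases: escape the dots inside the key with backslashes.
$ kubectl get ing/echo-app-tls-2 -o jsonpath='{.metadata.annotations.ingress\.kubernetes\.io/url-map}'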

A password with special characters is not escaped correctly when creating a docker-registry secret.

BUG REPORT

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.1", GitCommit:"1dc5c66f5dd61da08412a74221ecc79208c2165b", GitTreeState:"clean", BuildDate:"2017-07-14T02:00:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.1", GitCommit:"1dc5c66f5dd61da08412a74221ecc79208c2165b", GitTreeState:"clean", BuildDate:"2017-07-14T01:48:01Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: all cloud providers
  • OS: Ubuntu 16.04.3 LTS (Xenial Xerus)
  • Kernel : Linux master 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

What happened:

When you create a docker-registry secret whose password contains a special character, you can't pull images. It looks like the special character is not escaped.

For example:

kubectl create secret docker-registry index.docker.io-v1 --docker-server=https://index.docker.io/v1/ --docker-username=hello --docker-password=password$withspecialcharacter --namespace=test [email protected]

You can't pull the image because the password is not correctly used.

When you run this command it works:

kubectl create secret docker-registry index.docker.io-v1 --docker-server=https://index.docker.io/v1/ --docker-username=hello --docker-password=password\$withspecialcharacter --namespace=test [email protected]

\$ instead of $
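A likely explanation and workaround (my assumption, not from the reporter): the unescaped $withspecialcharacter is expanded by the shell before kubectl ever sees it, so single-quoting the password avoids the manual escape:

# Single quotes stop the shell from expanding $withspecialcharacter;
# the remaining arguments are copied from the report.
$ kubectl create secret docker-registry index.docker.io-v1 --docker-server=https://index.docker.io/v1/ --docker-username=hello --docker-password='password$withspecialcharacter' --namespace=test [email protected]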

What you expected to happen:

I expected that the special character could be used in the password without adding my own escape character.

How to reproduce it :

  • Create an account for your private image repo (for example docker hub)
  • Use a special character in the password (for example: mypassword$help)
  • Create the secret
  • Try to deploy a pod with an image from docker hub
  • Get an error that you can't log in to the repo because of a wrong username/password

Anything else we need to know:

Ask if you have any questions ;-) Keep up the good work! You are all awesome.

`kubectl scale` completion does not include statefulset

Is this a BUG REPORT or FEATURE REQUEST? (choose one): A bit of both, I guess.

Kubernetes version (use kubectl version): 1.6.2

Environment:

  • Cloud provider or hardware configuration: Macbook Pro 2016
  • OS (e.g. from /etc/os-release): macOS Sierra 10.12.4
  • Kernel (e.g. uname -a): Darwin Sanders-MacBook-Pro-2.local 16.5.0 Darwin Kernel Version 16.5.0: Fri Mar 3 16:52:33 PST 2017; root:xnu-3789.51.2~3/RELEASE_X86_64 x86_64
  • Install tools: gcloud components install kubectl?
  • Others: Running zsh

What happened: When trying to scale my statefulset using kubectl, autocomplete only offers --replicas deployment job replicaset replicationcontroller.

Can't set context

When using kubectl

kubectl version (jd-uat/default) Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T22:51:55Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

I have two different contexts set in my config; trying to set the context from one to the other doesn't switch.

▶ kubectl config current-context
od-uat

~
▶ kubectl config set-context oi-uat
Context "oi-uat" set.

~
▶ kubectl config current-context
od-uat

And the config file still shows as

current-context: od-uat
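A likely cause (my reading, not confirmed by the reporter): kubectl config set-context creates or modifies a context entry but does not switch to it; switching the current context is done with use-context:

# Switch the active context, then confirm it changed.
$ kubectl config use-context oi-uat
$ kubectl config current-context
oi-uat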

kubectl --sort-by not working in 1.7.0

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
bug report

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T22:55:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:
Kubernetes cluster is running on CoreOS

VERSION=1409.5.0
VERSION_ID=1409.5.0
BUILD_ID=2017-06-22-2222
PRETTY_NAME="Container Linux by CoreOS 1409.5.0 (Ladybug)"

Kernel 4.11.6-coreos-r1, but that should be pretty irrelevant as it seems the problem is in kubectl.
Tried on macOS 10.12.5 with 1.7.0 kubectl and on various GNU/Linux machines. It also doesn't work against a 1.6.6 Kubernetes cluster, so I am quite confident this is a client issue.

What happened:
$ kubectl get pods --sort-by='{.metadata.name}'
error: unknown type *api.Pod, expected unstructured in map[reflect.Type]*printers.handlerEntry{}

What you expected to happen:
A list of pods sorted by name should have appeared as was the case with 1.6.6.

Allow client-side validation against OpenAPI rather than Swagger

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): validation, openapi


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Feature request

Kubernetes version (use kubectl version):
1.7

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

kubectl proxy --reject-methods flag is non-functional

The flag is registered, but the specified value is never read or used.

This means all HTTP methods are allowed by default (despite the help text saying PUT/POST/PATCH are disallowed), and users have no way to control which methods are accepted.
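For reference, a sketch of how the flag is presumably meant to be used once it is wired up; the value is described as a regular expression of HTTP methods to reject, so treat the exact pattern as an assumption:

$ kubectl proxy --port=8001 --reject-methods='^(POST|PUT|PATCH)$'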

Add kubectl create for missing workloads

Add new kubectl create commands for missing workloads. Flags for fields shared between all workloads, e.g. image (and everything else in Pod or PodTemplate), should be factored into a single library. Flags for fields shared between many workloads, e.g. replicas, should be factored into another library. (See the reference example after the list below.)

  • create statefulset
  • create daemonset
  • create replicaset
  • create pod
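For reference (an illustration, not part of the original issue), the existing create deployment subcommand shows the shape the new workload subcommands would follow:

# Existing generator-backed create command; the proposed subcommands would mirror this pattern.
$ kubectl create deployment nginx --image=nginx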

CrashLoopBackOff while creating Pod

  • I am getting the below error while creating a pod:
    (screenshot: Kubernetes dashboard showing the error)

  • YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: docker-io-image
spec:
  containers:
    - name: dockertest-web-1
      image: inforian/dockertest_web
      ports:
      - containerPort: 8000
  • kubectl version:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-05-09T23:19:49Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}

Adding "PodPreset" to "kubectl get help"

@gyliu513 commented on Wed May 10 2017

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kubernetes version (use kubectl version):

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:
The PodPreset was available in Kubernetes 1.6, so we should expose this in the kubectl CLI.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:


@gyliu513 commented on Wed May 10 2017

Issue moved to kubernetes/kubectl #9 via ZenHub

kubectl drain errors if a pod has already been deleted

Kubernetes version (use kubectl version):
kubernetes/master

What happened:
kubectl drain of a node threw an error when a pod was already deleted.

What you expected to happen:
kubectl drain should not error if the pod it attempts to delete is already deleted

How to reproduce it (as minimally and precisely as possible):
create a pod, run kubectl drain on the node, delete the pod prior to drain completing on that node.

Anything else we need to know:
Nope. I have a PR prepared with a fix.

cmd.Flags().Set() accepts invalid configurations in create_service_test.go

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug report

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.0.644+2ddde09a6ca650-dirty", GitCommit:"2ddde09a6ca6504f200572b44282dc9983641618", GitTreeState:"dirty", BuildDate:"2017-06-16T20:43:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-30T22:03:41Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: MacBook Pro mid 2015
  • OS (e.g. from /etc/os-release): OS X 10.11.6 (15G1510)
  • Kernel (e.g. uname -a): 15.6.0 Darwin Kernel Version 15.6.0

What happened:
cmd.Flags().Set() accepts invalid configurations in pkg/kubectl/cmd/create_service_test.go, e.g. cmd.Flags().Set("tcp", "8080:X")

What you expected to happen:
Invalid configs should fail in tests

How to reproduce it (as minimally and precisely as possible):
Change cmd.Flags().Set("tcp", "8080:8080") to cmd.Flags().Set("tcp", "8080:X") in pkg/kubectl/cmd/create_service_test.go and run go test

kubectl patch initContainers image doesn't work

Is this a request for help? : Nope

What keywords did you search in Kubernetes issues before filing this one? : "patch"


Is this a BUG REPORT or FEATURE REQUEST? : BUG REPORT?

Kubernetes version (use kubectl version): v1.6.2

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4+coreos.0", GitCommit:"8996efde382d88f0baef1f015ae801488fcad8c4", GitTreeState:"clean", BuildDate:"2017-05-19T21:11:20Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: aws
  • OS (e.g. from /etc/os-release): CoreOS
  • Install tools: Tack

What happened:

I have a pod that contains nginx and php-fpm containers that need to reference the same source code. To do this I use an init container to copy the source code to an emptyDir volume. It works nicely.

To deploy a new version of the app, the only thing that needs changing is the image of the container in the initContainers. I would like to be able to patch it like this:

$ kubectl patch deployment thedeployment -p'{"spec":{"template":{"spec":{"initContainers":[{"name":"init-sourcecode","image":"vendor/app_data:master.53"}]}}}}'

It returns:

deployment "thedeployment" patched

but checking with kubectl get deployment thedeployment -o json | less shows it's not the case.

What you expected to happen:
The deployment should be updated with the new image (and a new rollout triggered).

How to reproduce it :

$ cat nginx-test.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-test-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      initContainers:
      - name: init-webpage
        image: busybox
        command: ["sleep", "3"]
        volumeMounts:
        - mountPath: /work-dir
          name: workdir

      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
      dnsPolicy: Default
      volumes:
      - name: workdir
        emptyDir: {}
kubectl create -f nginx-test.yaml

Try to patch the image to change from busybox:latest to busybox:1.25:

kubectl patch deployment nginx-test-deployment -p'{"spec":{"template":{"spec":{"initContainers":[{"name":"init-webpage","image":"busybox:1.25"}]}}}}'

deployment "nginx-test-deployment" patched

Even though it says it's patched, check the busybox image in the deployment to see that it hasn't changed:

kubectl get deployment nginx-test-deployment -o json | grep busybox
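A workaround sketch (my assumption, not from the report): a JSON patch that addresses the init container by index may apply where the strategic merge patch silently does not:

# RFC 6902 JSON patch; assumes the init container to update is at index 0.
$ kubectl patch deployment nginx-test-deployment --type=json -p='[{"op":"replace","path":"/spec/template/spec/initContainers/0/image","value":"busybox:1.25"}]'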

inconsistencies when using flag '-o json'

kubernetes version

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T20:41:24Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4+coreos.0", GitCommit:"8996efde382d88f0baef1f015ae801488fcad8c4", GitTreeState:"clean", BuildDate:"2017-05-19T21:11:20Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • aws
  • coreos
  • Linux ip-10-0-65-22 4.11.2-coreos #1 SMP Tue May 23 22:04:34 UTC 2017 x86_64 Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz GenuineIntel GNU/Linux
  • tectonic installer:

What happened:
I have a daemonset running on 5 nodes. When I kill a pod of this daemonset and get its status
with kubectl, I notice inconsistencies in the JSON output (-o json).

For example, when I kill the pod and run
kubectl get pod "$POD_NAME",
its status will be Terminating

But, killing it and running
kubectl get pod "$POD_NAME" -o json | jq '.status.phase',
I will get Running

Also, as I said, this pod is from a daemonset.
When I do kubectl get daemonset "$DAEMONSET_NAME" -o json,
the output of status will sometimes be inconsistent too.
For example, after killing a pod and waiting for it to be up again (status Running),
the status of the daemonset will look like this:

    "status": {
        "currentNumberScheduled": 5,
        "desiredNumberScheduled": 5,
        "numberAvailable": 4,
        "numberMisscheduled": 0,
        "numberReady": 4,
        "numberUnavailable": 1,
        "observedGeneration": 3,
        "updatedNumberScheduled": 5
    }

What you expected to happen:

Both commands should give me the same result, i.e. Terminating

How to reproduce it (as minimally and precisely as possible):

  • kill a pod
  • get its status using kubectl get pod "$POD_NAME"
  • kill another pod
  • get its status using kubectl get pod "$POD_NAME" -o json | jq '.status.phase'
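One way to see what is going on (my addition): the Terminating shown by plain kubectl get is derived from metadata.deletionTimestamp, while status.phase stays Running until the containers have actually exited, so comparing the two fields makes the apparent inconsistency visible:

$ kubectl get pod "$POD_NAME" -o json | jq '{phase: .status.phase, deletionTimestamp: .metadata.deletionTimestamp}'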

BUG: Info log on every kubectl command "duplicate proto type registered"

This is a BUG REPORT:

$ kubectl version
2017-06-13 09:32:31.930035 I | proto: duplicate proto type registered: google.protobuf.Any
2017-06-13 09:32:31.930096 I | proto: duplicate proto type registered: google.protobuf.Duration
2017-06-13 09:32:31.930111 I | proto: duplicate proto type registered: google.protobuf.Timestamp

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"dfaba882698e26a47fb769be45fe5c048a9fe4ad", GitTreeState:"clean", BuildDate:"2017-06-13T01:15:53Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-beta.1", GitCommit:"dfaba882698e26a47fb769be45fe5c048a9fe4ad", GitTreeState:"clean", BuildDate:"2017-06-08T01:43:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

What happened:
Every kubectl command outputs a proto info message about duplicate registration.

2017-06-13 09:32:31.930035 I | proto: duplicate proto type registered: google.protobuf.Any
2017-06-13 09:32:31.930096 I | proto: duplicate proto type registered: google.protobuf.Duration
2017-06-13 09:32:31.930111 I | proto: duplicate proto type registered: google.protobuf.Timestamp

What you expected to happen:
No output

How to reproduce it (as minimally and precisely as possible):
Run any kubectl command.

Namespace flag causes completion error.

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kubernetes version (use kubectl version):
(Output is current, but this is after I upgraded from 1.6.4 where I first saw the problem)
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.5", GitCommit:"490c6f13df1cb6612e0993c4c14f2ff90f8cdbf3", GitTreeState:"clean", BuildDate:"2017-06-14T20:15:53Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.5", GitCommit:"894ff23729bbc0055907dd3a496afb725396adda", GitTreeState:"clean", BuildDate:"2017-03-22T00:17:51Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): VERSION="16.04.2 LTS (Xenial Xerus)"
  • Kernel (e.g. uname -a): 4.8.0-56-generic
  • Install tools:
  • Others: using zsh

What happened:
typed: kubectl --namespace=$NAMESPACE g then hit tab
kubectl threw an error kubectl --namespace=foo g__handle_flag:25: bad math expression: operand expected at end of string

What you expected to happen:
g would complete to 'get'

How to reproduce it (as minimally and precisely as possible):
enter kubectl --namespace=foo g and hit tab

Anything else we need to know:
Does not happen without the --namespace flag

Kubectl should warn when resources aren't found and discovery failed

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):

Aggregator


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kubernetes version (use kubectl version):

$ k version
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.0.527+bb877f1ee6b30c", GitCommit:"bb877f1ee6b30c0f70068aa5ffc4324e6443c89c", GitTreeState:"clean", BuildDate:"2017-06-13T12:26:12Z", GoVersion:"go1.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.0.527+bb877f1ee6b30c", GitCommit:"bb877f1ee6b30c0f70068aa5ffc4324e6443c89c", GitTreeState:"clean", BuildDate:"2017-06-13T12:21:56Z", GoVersion:"go1.8", Compiler:"gc", Platform:"linux/amd64"}

What happened:

I set up an API service with the aggregator that had an SSL certificate which was signed with the wrong DNS name. The API service resource had a good status, but I would get 'resource not found' errors from kubectl trying to use it. Eventually with @deads2k's help we found that there were discovery errors against the aggregated API group. See a gist with relevant details here: https://gist.github.com/pmorie/0e8d5ba6c43d0c7000cd05a39a8ae190

What you expected to happen:

I would expect kubectl to warn that discovery failed instead of just saying that the resource didn't exist.

How to reproduce it (as minimally and precisely as possible):

Set up an API service resource with an incorrectly signed cert and try to use one of the aggregated resources.
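A diagnostic sketch (my addition): checking the registered APIService objects and querying discovery directly can surface the certificate error that plain resource lookups hide. The <group>/<version> placeholder stands for whatever aggregated API you registered:

# List aggregated API registrations and their availability conditions.
$ kubectl get apiservices
# Query discovery for the aggregated group directly.
$ kubectl get --raw /apis/<group>/<version>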

kubectl cp doesn't feature tab completion

kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T19:15:41Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

It would be helpful if kubectl cp completed pod-names, like most other kubectl parameters do. This way, you wouldn't have to issue kubectl get pod and copy/paste the pod name before copying a file from or to a pod.

Kubectl fails to resolve names except through DNS

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): Not exactly

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kubectl "unable to connect to the server" "read udp"


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kubernetes version (use kubectl version):

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T15:13:53Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}

Environment:

  • Cloud provider or hardware configuration: unknown
  • OS (e.g. from /etc/os-release): macOS 10.12.6
  • Kernel (e.g. uname -a): n/a
  • Install tools: Homebrew
  • Others:

What happened:

$ time kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T15:13:53Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp: lookup kub1.mycorp.local on 192.168.14.1:53: read udp 192.168.14.131:49181->192.168.14.1:53: i/o timeout
kubectl version  0.11s user 0.02s system 0% cpu 16.153 total
$ ping kub1
PING kub1.mycorp.local (10.10.9.161): 56 data bytes
64 bytes from 10.10.9.161: icmp_seq=0 ttl=61 time=387.593 ms
$ python -c "import socket; print(socket.gethostbyname('kub1'))"
10.10.9.161

It takes 15+ seconds to fail to connect to the cluster master and fails because it can't figure out the name.

192.168.14.1 is the IP address of my local wifi router. It doesn't (and shouldn't) know anything about kub1. As you can see ping and gethostbyname both resolve the name through the Cisco VPN client installed and connected on the host.

What you expected to happen:

kubectl should connect to kub1 and kub1.mycorp.local like any other application on my system. It shouldn't be making UDP calls to the nameserver directly but should use the IP stack on the host.

Additionally, the command probably shouldn't be attempting to connect for a version command. Preferable would be for the command to return the version immediately... and for this issue only to appear if a cluster-relevant command were issued.

How to reproduce it (as minimally and precisely as possible): See above.

Anything else we need to know:

Rejecting valid Environment Variable Names

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23 15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18 21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Azure Container Service
  • OS: Windows 10
  • Install tools:
  • Others:

What happened:
I was invoking the following command:
kubectl run myname --image=myimage:86 --port=80 --env="ConnectionStrings:DefaultConnection=Data Source=tcp:mySqlServer,1433;Initial Catalog=myDB;User Id=myUser;Password=mypassword;"
I got errors about invalid env:
error: invalid env: ConnectionStrings:DefaultConnection=Data Source=tcp:mySqlServer

I tried this with a YAML file as well and got a better error message stating I was using invalid characters in my name. However, ":" is valid; the only character not allowed in names is "=". To work around this I removed the ":", but then I got an error because it split my value on "," between my server and port.

What you expected to happen:
I expected a new environment variable to be created with the following information:

  • name: ConnectionStrings:DefaultConnection
  • value: Data Source=tcp:mySqlServer,1433;Initial Catalog=myDB;User Id=myUser;Password=mypassword;

How to reproduce it (as minimally and precisely as possible):
Just try to set an environment variable whose name includes a ":" and whose value contains a ",". The parsing should not split on "," inside the value of the environment variable.

Anything else we need to know:
I am using .NET Core, which allows you to create nested configurations that are built using a ":" in the environment variable name. I was able to use this just fine with docker run commands against my own host, because Docker and Linux do support the string as I originally submitted it. This only breaks because of kubectl's value validation and parsing logic.

kubectl does not print a readable message for a PVC's capacity.

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug

Kubernetes version (use kubectl version):
master branch

What happened:

Here's the output when we print the Quantity as a string:

Volume Claims:
  Name:		www
  StorageClass:	anything
  Labels:	<none>
  Annotations:	volume.beta.kubernetes.io/storage-class=anything
  Capacity:	{{%!s(int64=1073741824) %!s(resource.Scale=0)} {%!s(*inf.Dec=<nil>)} 1Gi BinarySI}
  Access Modes:	[ReadWriteOnce]

What you expected to happen:
print 1Gi

How to reproduce it (as minimally and precisely as possible):
kubectl describe statefulsets

/assign

xref kubernetes/kubernetes#47571

Refactor kubectl run + expose and kubectl create xxx to use shared code

The kubectl run and kubectl expose commands can be used to create various workloads and services. kubectl create xxx commands can also create the same resources, but use different code and have support for different flags.

Refactor the commands that create the same objects to use common code and bring them to feature parity.

Issues pulling a local image irrespective of the `imagePullPolicy: Never` setting

  • I am trying to create a pod from a local Docker image which I have created.

(screenshot: error output)

  • Below is my YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: dockertest-web-1
      image: nd/djclone
      ports:
      - containerPort: 8000
      imagePullPolicy: Never
  • Command:
    (screenshot)

  • Dashboard view:
    (screenshot: Kubernetes dashboard)

kubectl get should support PodPreset

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):

podpreset


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

bug;

Kubernetes version (use kubectl version):

1.7

What happened:

kubectl get podpresets -n test-ns yields raw output:

$ k get podpreset my-pod-preset -n test-ns
error: unknown type &settings.PodPreset{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-pod-preset", GenerateName:"", Namespace:"test-ns", SelfLink:"/apis/settings.k8s.io/v1alpha1/namespaces/test-ns/podpresets/my-pod-preset", UID:"0d2170ab-4ba9-11e7-b784-68f728db1985", ResourceVersion:"1441", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{sec:63632454309, nsec:0, loc:(*time.Location)(0x2bd6c00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:settings.PodPresetSpec{Selector:v1.LabelSelector{MatchLabels:map[string]string{"app":"my-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}, Env:[]api.EnvVar(nil), EnvFrom:[]api.EnvFromSource{api.EnvFromSource{Prefix:"", ConfigMapRef:(*api.ConfigMapEnvSource)(nil), SecretRef:(*api.SecretEnvSource)(0xc420ab8d00)}}, Volumes:[]api.Volume(nil), VolumeMounts:[]api.VolumeMount(nil)}}

What you expected to happen:

Expected normal formatted columns

How to reproduce it (as minimally and precisely as possible):

Make a podpreset, kubectl get it

Consolidate the Deployment Generators

Issue type: refactor

Description

pkg/kubectl/deployment.go is the same 100 lines copy/pasted. DeploymentBasicAppsGeneratorV1 and DeploymentBasicGeneratorV1 are identical except they return extensionsv1beta1.Deployment and appsv1beta1.Deployment respectively.

If I consolidate these structs to use the same codepath, I don't have to make the same changes twice to add new parameters (like --label, for instance).

Acceptance criteria

  • --label can be added to the "kubectl create deployment" Generators without making the same changes in two places.
  • Generator behavior is exactly the same as it is in master.

serviceAccountName default value

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T20:41:07Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2+coreos.0", GitCommit:"79fee581ce4a35b7791fdd92e0fc97e02ef1d5c0", GitTreeState:"clean", BuildDate:"2017-04-19T23:13:34Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1353.8.0
VERSION_ID=1353.8.0
BUILD_ID=2017-05-30-2322
PRETTY_NAME="Container Linux by CoreOS 1353.8.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
  • Kernel (e.g. uname -a):
Linux ip-10-66-21-135.eu-west-1.compute.internal 4.9.24-coreos #1 SMP Tue May 30 23:12:01 UTC 2017 x86_64 Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz GenuineIntel GNU/Linux
  • Install tools: terraform
  • Others: N/A

What happened:

  1. Given the following manifest:
---
kind: Namespace
apiVersion: v1
metadata:
  name: test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-deployment
  namespace: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: test-container
          image: gcr.io/google_containers/pause:1.0
$ kubectl apply -f test.yaml
namespace "test" configured
serviceaccount "test-sa" configured
deployment "test-deployment" created
  2. Modify the manifest to specify a serviceAccountName:
      serviceAccountName: test-sa

Applying again with kubectl will update the deployment and cause the running pod to be replaced, as expected.

  3. Modify the manifest again and remove the serviceAccountName, apply again. The deployment is not updated:
$ kubectl -ntest describe deployment test-deployment | grep 'Service Account'
  Service Account:	test-sa

What you expected to happen:
I expected the deployment to be updated to use the default namespace Service Account again.

How to reproduce it (as minimally and precisely as possible):
See the steps above.
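A workaround sketch (my assumption, not from the report): explicitly setting the field back to the namespace default, rather than omitting it, forces the update that dropping the field does not:

# Strategic merge patch that resets the field explicitly instead of omitting it.
$ kubectl -n test patch deployment test-deployment -p '{"spec":{"template":{"spec":{"serviceAccountName":"default"}}}}'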

Optionally use azure active directory for kubectl auth.

Kubectl authentication is pluggable. Azure Active Directory is a popular authentication service for both cloud and on-premise identity management. Kubectl should support Azure Active Directory authentication to enable those users to continue to use their existing identity management solutions.

kubectl does not set container name according to deployment manifest

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Azure
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.2 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a): `Linux k8s-master-42728370-0 4.4.0-77-generic #98-Ubuntu SMP Wed Apr 26 08:34:02 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux`
  • Install tools:
  • Others: The host is kubernetes master host

What happened:
When deploying a manifest that defines a name for a container, the actual name that is assigned is of the form k8s_dicom-viewer.bd9132cf_dicom-viewer-deployment-665650083-5tl7f_prototype3_765a98cf-4aef-11e7-891b-000d3a304426_35acc9a7, where dicom-viewer-deployment is the name of the deployment.

What you expected to happen:
I would expect that a container will have a name that was defined in manifest file under spec/template/spec/containers/name

How to reproduce it (as minimally and precisely as possible):

  1. create kubernetes cluster using acs-engine
  2. deploy a container with a name

Anything else we need to know:
The cluster has Windows agent nodes.
Additionally, it is impossible to pull logs (via kubernetes dashboard). It shows Get https://42728acs9001:10250/containerLogs/prototype3/minio-deployment-3747382929-07jk9/minio?timestamps=true: dial tcp: lookup 42728acs9001 on 10.7.224.100:53: no such host error message.

Kubectl errors when it needs to be updated instead of notifying.

It would be nice if kubectl would provide a hint when it just needs an update.

Using a slightly older version of kubectl against API server 1.6, I got:

kubectl get pods
error: group map[autoscaling:0xc8203c7110 batch:0xc8203c72d0 policy:0xc8203c6070 rbac.authorization.k8s.io:0xc8203c60e0 storage.k8s.io:0xc8203c6150 componentconfig:0xc8203c73b0 extensions:0xc8203c6000 federation:0xc8203c69a0 :0xc8203c6e70 apps:0xc8203c6ee0 authentication.k8s.io:0xc8203c6f50 authorization.k8s.io:0xc8203c70a0 certificates.k8s.io:0xc8203c7340] is already registered

Needless to say, the user is likely going to waste a lot of time troubleshooting this before replacing a kubectl that was working just fine minutes before.

Mark deprecated commands in 'kubectl help'

We have several deprecated commands. When running kubectl help, we get the output below (other commands omitted).

Basic Commands (Beginner):
  run            Run a particular image on the cluster
  run-container  Run a particular image on the cluster

Deploy Commands:
  rolling-update Perform a rolling update of the given ReplicationController
  rollingupdate  Perform a rolling update of the given ReplicationController
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  resize         Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job

Cluster Management Commands:
  cluster-info   Display cluster info
  clusterinfo    Display cluster info

Advanced Commands:
  replace        Replace a resource by filename or stdin
  update         Replace a resource by filename or stdin

New end-users are confused by different commands with the same usage. They might not be sure which one to use until they get Command "foo" is deprecated, use "bar" instead. We need to mark the deprecated commands in the help output.

I will create a PR to mark them in help info. However, I wonder if we still need to keep these deprecated commands. Most of them have been deprecated for a long time. For example, rollingupdate was marked in kubernetes/kubernetes#6118 two years ago. We might want to delete them.

@bgrant0607 Do we have deprecation policy for such case?

kubectl set resource/selector/subject -o yaml doesn't return the expected format

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.2.1821+7c87bdd628715c-dirty", GitCommit:"7c87bdd628715c3a12a90267c689bdacaf99a069", GitTreeState:"dirty", BuildDate:"2017-08-14T02:21:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.2.1821+7c87bdd628715c-dirty", GitCommit:"7c87bdd628715c3a12a90267c689bdacaf99a069", GitTreeState:"dirty", BuildDate:"2017-08-14T02:21:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

What happened:

# kubectl set subject rolebinding/admin --user=user1 --group=group1 -o yaml
rolebinding "admin" subjects updated

What you expected to happen:

# kubectl set subject rolebinding/admin --user=user1 --group=group1 -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: 2017-08-14T02:26:11Z
  name: admin
  namespace: default
  resourceVersion: "381"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings/admin
  uid: f0238ea3-8097-11e7-ab4c-7427ea6f0fe3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo1
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: group1

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Cannot fetch multi-container pod logs by selector

kubectl version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

What happened:

When attempting to fetch the logs of a pod using a selector, kubectl seems to demand a container name, but then does not allow a container name to be passed.

$ kubectl logs --selector app=kube-dns
Error from server (BadRequest): a container name must be specified for pod kube-dns-2325730542-99bh1, choose one of: [kubedns dnsmasq dnsmasq-metrics healthz]
$ kubectl logs --selector app=kube-dns --container  kubedns
error: a container cannot be specified when using a selector (-l)

What you expected to happen:

  1. With kubectl logs --selector app=kube-dns I expected to see the logs for all containers in all my kube-dns labeled pods.
  2. With kubectl logs --selector app=kube-dns --container kubedns I expected to see the logs for all the kubedns containers in all my pod kube-dns labeled pods.

Also acceptable would be if one of those two commands failed with one of the above errors, but then the other command worked.

How to reproduce it:

Attempt to get logs for a multi-container pod using a selector.
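For what it's worth (my addition, assuming a newer kubectl than the 1.6 client in this report): later releases added an --all-containers flag to kubectl logs, which covers the first expectation:

$ kubectl logs --selector app=kube-dns --all-containers=true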

`kubectl top nodes/pods` doesn't show Storage usage

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Mostly feature

On running,

kubectl top -h
Display Resource (CPU/Memory/Storage) usage.

Though the help text says so, there is no option to get storage information.
Storage information should also be displayed by kubectl top nodes/pods.

Kubernetes version (use kubectl version):

1.6

Environment:

  • Cloud provider or hardware configuration:
    GKE
  • OS (e.g. from /etc/os-release):
    mac
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

kubectl run --env option restricts the format of the variable values

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"clean", BuildDate:"2017-05-10T23:29:08Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-05-09T23:22:45Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • OS: Mac OS Sierra v10.12.4
  • Kernel: Darwin mylaptop.localdomain 16.5.0 Darwin Kernel Version 16.5.0: Fri Mar 3 16:52:33 PST 2017; root:xnu-3789.51.2~3/RELEASE_X86_64 x86_64 i386 MacBookPro11,2 Darwin
  • Others:
    • minikube version: v0.19.0
    • Docker version 17.03.1-ce, build c6d412e

What happened:

I'm invoking kubectl run with the --env option. For my deployment I must pass a JSON string as the value of an environment variable. It seems like kubectl doesn't accept environment variable values that are not C identifiers, although if my JSON string has only one attribute then kubectl accepts the command. The two commands can be found in the How to reproduce it section below.

The relevant code seems to be:

https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/run.go#L863

https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/validation/validation.go#L178

What you expected to happen:

The environment variable should have been accepted and the image started.

How to reproduce it:

This works:

$> kubectl run consul --image=consul:0.8.3 --image-pull-policy=IfNotPresent --port=8500 --env="CONSUL_LOCAL_CONFIG={'acl_datacenter':'dc1'}"
deployment "consul" created

This does not:

$> kubectl run consul --image=consul:0.8.3 --image-pull-policy=IfNotPresent --port=8500 --env="CONSUL_LOCAL_CONFIG={'acl_datacenter':'dc1','acl_default_policy':'allow','acl_down_policy':'extend-cache','acl_master_token':'the_one_ring','bootstrap_expect':1,'datacenter':'dc1','data_dir':'/usr/local/bin/consul.d/data','server':true}"
error: invalid env: 'acl_default_policy':'allow'

With the latter, even adding a single attribute seems to prevent kubectl from processing the command - perhaps related to the comma in the string.

Anything else we need to know:

All this is running in a local development environment on my laptop. The versions of the software I'm using are specified above.
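A workaround sketch (my addition, not from the report): defining the variable in a manifest sidesteps the --env parsing entirely, since YAML does not split the value on commas. The Deployment below is a minimal assumed example:

$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
      - name: consul
        image: consul:0.8.3
        ports:
        - containerPort: 8500
        env:
        - name: CONSUL_LOCAL_CONFIG
          value: '{"acl_datacenter":"dc1","acl_default_policy":"allow"}'
EOF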

kubectl run --dry-run raises error

What keywords did you search in Kubernetes issues before filing this one?: There were several issues in github.com/kubernetes/kubernetes, chiefly this one that suggested creating an issue in this repo instead, which I'm doing now since nothing seems to exist in this repo: kubernetes/kubernetes#47180


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report

Kubernetes version (use kubectl version): kubectl 1.7.0, server 1.7.1-gke.0

Environment:

  • Cloud provider or hardware configuration: GKE 1.7.1

What happened:
Running kubectl run nginx --image=nginx --dry-run, as suggested in the docs, raises an error:

error: unknown type *v1beta1.Deployment, expected unstructured in map[reflect.Type]*printers.handlerEntry{(*reflect.rtype)(0x2c435c0):(*printers.handlerEntry)(0xc4203f9b80), (*reflect.rtype)(0x2c442e0):(*printers.handlerEntry)(0xc4206341e0), (*reflect.rtype)(0x2b3ce20):(*printers.handlerEntry)(0xc420635400), (*reflect.rtype)(0x2c40140):(*printers.handlerEntry)(0xc4203f8280), (*reflect.rtype)(0x2c3df20):(*printers.handlerEntry)(0xc4203f8960), (*reflect.rtype)(0x2c403e0):(*printers.handlerEntry)(0xc4203f90e0), (*reflect.rtype)(0x2c3e9a0):(*printers.handlerEntry)(0xc4203f93b0), (*reflect.rtype)(0x2c43da0):(*printers.handlerEntry)(0xc4203f9680), (*reflect.rtype)(0x2b3dc20):(*printers.handlerEntry)(0xc420635680), (*reflect.rtype)(0x2b3d360):(*printers.handlerEntry)(0xc4204fbbd0), (*reflect.rtype)(0x2b3cf00):(*printers.handlerEntry)(0xc4204fbd10), (*reflect.rtype)(0x2b3c480):(*printers.handlerEntry)(0xc4203f8320), (*reflect.rtype)(0x2b3caa0):(*printers.handlerEntry)(0xc4203f99a0), (*reflect.rtype)(0x2b3cfe0):(*printers.handlerEntry)(0xc4203f9860), (*reflect.rtype)(0x2b3bbc0):(*printers.handlerEntry)(0xc4203f8e60), (*reflect.rtype)(0x2b3d440):(*printers.handlerEntry)(0xc4203f9d60), (*reflect.rtype)(0x2b3d600):(*printers.handlerEntry)(0xc4206343c0), (*reflect.rtype)(0x2c45540):(*printers.handlerEntry)(0xc4206355e0), (*reflect.rtype)(0x2c3f960):(*printers.handlerEntry)(0xc4204fb950), (*reflect.rtype)(0x2b3bca0):(*printers.handlerEntry)(0xc4203f88c0), (*reflect.rtype)(0x2b3b680):(*printers.handlerEntry)(0xc4203f95e0), (*reflect.rtype)(0x2b3b840):(*printers.handlerEntry)(0xc4203f86e0), (*reflect.rtype)(0x2c3fea0):(*printers.handlerEntry)(0xc4203f8fa0), (*reflect.rtype)(0x2b3be60):(*printers.handlerEntry)(0xc4203f94a0), (*reflect.rtype)(0x2c457e0):(*printers.handlerEntry)(0xc4206354a0), (*reflect.rtype)(0x2c3d9e0):(*printers.handlerEntry)(0xc4203f9a40), (*reflect.rtype)(0x2c44040):(*printers.handlerEntry)(0xc4203f9cc0), (*reflect.rtype)(0x2b3d980):(*printers.handlerEntry)(0xc420634dc0), (*reflect.rtype)(0x2b3cc60):(*printers.handlerEntry)(0xc4203f81e0), (*reflect.rtype)(0x2b3d0c0):(*printers.handlerEntry)(0xc4203f8460), (*reflect.rtype)(0x2b3c8e0):(*printers.handlerEntry)(0xc4203f85a0), (*reflect.rtype)(0x2c3e460):(*printers.handlerEntry)(0xc4203f8dc0), (*reflect.rtype)(0x2c3ec40):(*printers.handlerEntry)(0xc4203f9270), (*reflect.rtype)(0x2c40bc0):(*printers.handlerEntry)(0xc4206357c0), (*reflect.rtype)(0x2b3cd40):(*printers.handlerEntry)(0xc4204fbe50), (*reflect.rtype)(0x2c3dc80):(*printers.handlerEntry)(0xc4203f8640), (*reflect.rtype)(0x2b3d8a0):(*printers.handlerEntry)(0xc420635130), (*reflect.rtype)(0x2bedce0):(*printers.handlerEntry)(0xc420635720), (*reflect.rtype)(0x2c44ac0):(*printers.handlerEntry)(0xc420634b40), (*reflect.rtype)(0x2b3bf40):(*printers.handlerEntry)(0xc4204fb450), (*reflect.rtype)(0x2c3e700):(*printers.handlerEntry)(0xc4203f8780), (*reflect.rtype)(0x2b3b920):(*printers.handlerEntry)(0xc4203f8a00), (*reflect.rtype)(0x2b3c1e0):(*printers.handlerEntry)(0xc4203f8d20), (*reflect.rtype)(0x2c3d740):(*printers.handlerEntry)(0xc4203f9540), (*reflect.rtype)(0x2c420c0):(*printers.handlerEntry)(0xc4204fbf90), (*reflect.rtype)(0x2b3d520):(*printers.handlerEntry)(0xc4203f9720), (*reflect.rtype)(0x2b3b760):(*printers.handlerEntry)(0xc4203f9ae0), (*reflect.rtype)(0x2c3d200):(*printers.handlerEntry)(0xc4203f9e00), (*reflect.rtype)(0x2c452a0):(*printers.handlerEntry)(0xc4206348c0), (*reflect.rtype)(0x2b3d7c0):(*printers.handlerEntry)(0xc4204fb8b0), 
(*reflect.rtype)(0x2c43860):(*printers.handlerEntry)(0xc4204fba90), (*reflect.rtype)(0x2c42360):(*printers.handlerEntry)(0xc4204fbdb0), (*reflect.rtype)(0x2b3c2c0):(*printers.handlerEntry)(0xc4203f9040), (*reflect.rtype)(0x2c42de0):(*printers.handlerEntry)(0xc4203f97c0), (*reflect.rtype)(0x2b3dd00):(*printers.handlerEntry)(0xc420635540), (*reflect.rtype)(0x2c41b80):(*printers.handlerEntry)(0xc4203f9900), (*reflect.rtype)(0x2b3d1a0):(*printers.handlerEntry)(0xc4206340a0), (*reflect.rtype)(0x2c45000):(*printers.handlerEntry)(0xc420634500), (*reflect.rtype)(0x2c3f420):(*printers.handlerEntry)(0xc4204fb5e0), (*reflect.rtype)(0x2b3c020):(*printers.handlerEntry)(0xc4204fb6d0), (*reflect.rtype)(0x2c40e60):(*printers.handlerEntry)(0xc4203f8500), (*reflect.rtype)(0x2c3e1c0):(*printers.handlerEntry)(0xc4203f8aa0), (*reflect.rtype)(0x2c3fc00):(*printers.handlerEntry)(0xc4203f8c80), (*reflect.rtype)(0x2b3da60):(*printers.handlerEntry)(0xc420634a00), (*reflect.rtype)(0x2c44820):(*printers.handlerEntry)(0xc4204fb810), (*reflect.rtype)(0x2c42b40):(*printers.handlerEntry)(0xc4204fbc70), (*reflect.rtype)(0x2c43080):(*printers.handlerEntry)(0xc4203f83c0), (*reflect.rtype)(0x2b3c100):(*printers.handlerEntry)(0xc4204fb9f0), (*reflect.rtype)(0x2b3bd80):(*printers.handlerEntry)(0xc4203f9310), (*reflect.rtype)(0x2b3b220):(*printers.handlerEntry)(0xc4203f9ea0), (*reflect.rtype)(0x2c43320):(*printers.handlerEntry)(0xc4203f9f40), (*reflect.rtype)(0x2c44d60):(*printers.handlerEntry)(0xc420634fa0), (*reflect.rtype)(0x2b3db40):(*printers.handlerEntry)(0xc4206346e0), (*reflect.rtype)(0x2c428a0):(*printers.handlerEntry)(0xc420635360), (*reflect.rtype)(0x2c3eee0):(*printers.handlerEntry)(0xc4204fb4a0), (*reflect.rtype)(0x2b3ba00):(*printers.handlerEntry)(0xc4203f8b40), (*reflect.rtype)(0x2b3c3a0):(*printers.handlerEntry)(0xc4203f9180), (*reflect.rtype)(0x2b3d280):(*printers.handlerEntry)(0xc4203f9c20), (*reflect.rtype)(0x2b3c800):(*printers.handlerEntry)(0xc420635860)}

What you expected to happen:
It should print out the object that would be sent to the server.

How to reproduce it (as minimally and precisely as possible):

  1. Create a new GKE cluster running version 1.7.1
  2. Configure kubectl to use the new cluster as its default context
  3. Run: kubectl run nginx --image=nginx --dry-run
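
If the failure is limited to the default human-readable printer (an assumption based on the error above), asking for a structured output format may still print the object that would be sent to the server, for example:

kubectl run nginx --image=nginx --dry-run -o yaml

This is only a workaround sketch; the handler registration behind the default printer would still need to be fixed.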

Current bash-completion instructions do not work on Mac OS 10.11.6 and bash 3.2

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): "bash-completion"


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}

Environment:

  • Cloud provider or hardware configuration: Local env
  • OS (e.g. from /etc/os-release): Mac OS 10.11.6
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others: bash 3.2

What happened:
I followed the instructions here (https://kubernetes.io/docs/tasks/tools/install-kubectl/#on-macos-using-bash), but bash autocompletion did not work for me.

What you expected to happen:
Expected bash-completion to work.

How to reproduce it (as minimally and precisely as possible):

  • Manually install kubectl
  • Install bash-completion using homebrew
  • source <(kubectl completion bash)

To get my bash completion to work, I had to follow a final step that is hinted at here: https://kubernetes.io/docs/user-guide/kubectl/v1.6/#completion. I had to write the bash completion output to a file and then source that file.

kubectl completion bash > ~/.kube/completion.bash.inc
printf "\n# Kubectl shell completion\nsource '$HOME/.kube/completion.bash.inc'\n" >> $HOME/.bash_profile
source $HOME/.bash_profile
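
One likely missing piece on bash 3.2 is that the Homebrew bash-completion package itself must be loaded before the kubectl completion file. A minimal sketch of the combined ~/.bash_profile setup, assuming bash-completion v1 installed under the default /usr/local Homebrew prefix (adjust the path to your install):

# Load Homebrew's bash-completion v1 first (path is an assumption)
[ -f /usr/local/etc/bash_completion ] && . /usr/local/etc/bash_completion
# Then load the generated kubectl completion file
source "$HOME/.kube/completion.bash.inc"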

Anything else we need to know:
Seems at least one other person has experienced this, but I couldn't find an open issue: kubernetes/kubernetes#27876 (comment)

Scaling a deployment always sets 'schedulerName' to 'default-scheduler' (kubectl 1.5.7, master 1.6.4)

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):
No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
'schedulerName', 'scale'


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kubernetes version (use kubectl version):

$ ./kubectl-1.5.7 version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.7", GitCommit:"8eb75a5810cba92ccad845ca360cf924f2385881", GitTreeState:"clean", BuildDate:"2017-04-27T10:00:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): client: darwin and linux(alpine 3.5.2). Server Debian 7.
  • Kernel (e.g. uname -a): client darwin 15.6.0 and Debian 3.16.39-1. Server Debian 3.16.39-1
  • Install tools: GKE for master. Curl'ed version of kubectl from GCS
  • Others:

What happened:
When scaling a replication controller or a deployment using version 1.5.7 of kubectl against a 1.6.4 master, kubectl scale sets the schedulerName attribute of the pod spec within the controller to 'default-scheduler' regardless of the original value. All new pods created by the controller then have their schedulerName attribute set to 'default-scheduler'.

What you expected to happen:
The value of 'schedulerName' should be preserved rather than reset to 'default-scheduler'.

How to reproduce it (as minimally and precisely as possible):
Complete example showing the problem:

$ cat deployment.yaml 
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      schedulerName: third-party
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

$ ./kubectl-1.5.7 create -f deployment.yaml 
deployment "nginx-deployment" created
$ ./kubectl-1.5.7 get deployment nginx-deployment -o yaml | grep schedulerName
      schedulerName: third-party
$ ./kubectl-1.5.7 scale deployment nginx-deployment --replicas=3
deployment "nginx-deployment" scaled
$ ./kubectl-1.5.7 get deployment nginx-deployment -o yaml | grep schedulerName
      schedulerName: default-scheduler

Anything else we need to know:
The system works as expected if you have a 1.6.4 version of kubectl. Additionally, I would expect other scalable controllers to have this issue; I just tested with deployments and replication controllers. As seen in my example, the system allows you to create a deployment with a schedulerName set to a value of 'third-party'; it is just the patching of the object during the scale operation that seems to cause the problem.
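
Until the client and server versions match, one possible workaround (a sketch, not verified against every version combination) is to change only the replica count with kubectl patch, which sends a strategic merge patch touching just spec.replicas and avoids the client-side round-trip of the pod template that appears to reset schedulerName:

$ ./kubectl-1.5.7 patch deployment nginx-deployment -p '{"spec":{"replicas":3}}'
$ ./kubectl-1.5.7 get deployment nginx-deployment -o yaml | grep schedulerName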

kubectl Get command should use open-api extension to display a resource

The kubectl get command does not provide a rich experience for resources retrieved through federated apiservers or for types not compiled into the kubectl binary.

The open-api schema for resources retrieved through federated api servers will have additional metadata in the x-kubernetes-print-columns extension. Kubectl should make use of this extension information to display richer output for such types. So the behavior is: if the user has not specified any custom output format and the x-kubernetes-print-columns extension is defined for that type, use the extension information to format the output for that resource.
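
For illustration, the extension is expected to carry a column specification in the same style kubectl already accepts on the command line (the exact format in the proposal may differ); today a user has to spell this out manually for such types, for example:

kubectl get <resource> -o custom-columns=NAME:.metadata.name,AGE:.metadata.creationTimestamp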

This is part of the detailed proposal described here

Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

kubectl get po columns aren't aligned when using watch mode

kubectl version

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}

Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

See the screenshot below: when executing kubectl get po -o wide -w, the columns aren't aligned.

[screenshot: untitled-1]

Error pulling images from external registry when the user changes

What keywords did you search in Kubernetes issues before filing this one?
docker login private registry

Is this a BUG REPORT or FEATURE REQUEST? BUG REPORT

Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"87d9d8d7bc5aa35041a8ddfe3d4b367381112f89", GitTreeState:"clean", BuildDate:"2016-12-12T21:10:52Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"87d9d8d7bc5aa35041a8ddfe3d4b367381112f89", GitTreeState:"clean", BuildDate:"2016-12-12T21:10:52Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Private virtual servers.

  • OS (e.g. from /etc/os-release):
    NAME="Red Hat Enterprise Linux Server"
    VERSION="7.3 (Maipo)"
    ID="rhel"
    ID_LIKE="fedora"
    VERSION_ID="7.3"
    PRETTY_NAME="Red Hat Enterprise Linux Server 7.3 (Maipo)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:server"
    HOME_URL="https://www.redhat.com/"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
    REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
    REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
    REDHAT_SUPPORT_PRODUCT_VERSION="7.3"

  • Kernel (e.g. uname -a):
    Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Wed Nov 16 13:15:13 EST 2016 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:

  • Others:

What happened:
1. Having a Kubernetes cluster using a private V2 docker registry with basic http authentication, and following these steps (https://kubernetes.io/docs/concepts/containers/images/#configuring-nodes-to-authenticate-to-a-private-repository) to configure the authentication, everything was working as expected.
2. The registry has two users granted to download images, let's say user1 and user2 (both with the same privileges), but only user1 is configured on all kube-nodes; as said before, everything is ok.
3. For some reason user1 has been un-granted from the docker registry; as expected, image pulls start to fail.
4. The problem comes here: after changing the configuration on all kube-nodes to authenticate with user2, it does not work. The pod status shows ImagePullBackOff and the pod events show:
Error syncing pod, skipping: failed to "StartContainer" for "imagexxx" with ErrImagePull: "manifest unknown: The named manifest is not known to the registry."
5. A manual pull from the kube-nodes (docker pull ...) works.
6. Restoring user1 in the docker registry (keeping user2 on all nodes) makes it start working again.

It seems that the first configuration stays cached somewhere in Kubernetes.

What you expected to happen:
Kubernetes should pick up and use the new configuration related to registry authentication.

How to reproduce it (as minimally and precisely as possible):
1. Grant two users (user1 and user2) with the same privileges to download images from the private docker registry.
2. Configure authentication on all kube-nodes (docker login <registry_url>) with user1.
3. Deploy something that pulls an image from the registry.
4. Un-grant user1 in the docker registry configuration and configure all kube-nodes with user2.
5. Deploy something that forces a new image to be downloaded.

Anything else we need to know:
I have restarted all kube services on the master and the slave, and even restarted the server, but it didn't help.
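
One possible way to sidestep node-level credential caching entirely (a sketch; the secret name regcred and the placeholders below are assumptions) is to use an image pull secret referenced from the pods' service account instead of docker login on every node:

kubectl create secret docker-registry regcred --docker-server=<registry_url> --docker-username=user2 --docker-password=<password> --docker-email=<email>
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'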

Cached discovery issue on windows

From the master branch at least (need to check previous versions), kubectl get and potentially a number of other commands are failing miserably on Windows:

$ kubectl get pod -v 6
REDACTED
I0518 16:51:47.561501    4196 cached_discovery.go:134] failed to write cache to C:\Users\star\.kube\$master_cluster\servergroups.json due to chmod C:\Users\star\.kube\$master_cluster\servergroups.json.689571775: not supported by windows
