
compose-on-kubernetes's Introduction

⚠️ This project is no longer maintained ⚠️

Compose on Kubernetes


Compose on Kubernetes allows you to deploy Docker Compose files onto a Kubernetes cluster.

Table of contents

More documentation can be found in the docs/ directory.

Get started

Install Compose on Kubernetes on Docker Desktop

Pre-requisites

On Docker Desktop you will need to activate Kubernetes in the settings to use Compose on Kubernetes.

Create compose namespace

  • Create a compose namespace by running kubectl create namespace compose
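For example (a minimal check; output wording varies slightly between kubectl versions), you can confirm the namespace exists afterwards:

$ kubectl create namespace compose
namespace "compose" created
$ kubectl get namespace compose
NAME      STATUS   AGE
compose   Active   10s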

Deploy etcd

Compose on Kubernetes requires an etcd instance (in addition to the kube-system etcd instance). Please follow How to deploy etcd.
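As a rough sketch only (see the full guide for details), the approach used elsewhere in this document is to install the etcd-operator Helm chart and then create an EtcdCluster resource in the compose namespace:

$ helm install --name etcd-operator stable/etcd-operator --namespace compose
$ cat compose-etcd.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "compose-etcd"
  namespace: "compose"
spec:
  size: 3
  version: "3.2.13"
$ kubectl apply -f compose-etcd.yaml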

Deploy Compose on Kubernetes

Run installer-[darwin|linux|windows.exe] -namespace=compose -etcd-servers=http://compose-etcd-client:2379.
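For example, on macOS an invocation might look like the following (the -tag flag pins a specific release; adjust it to the version you downloaded, and note that some environments such as GKE may require the double-dash form --namespace/--etcd-servers, as reported in an issue below):

$ ./installer-darwin -namespace=compose -etcd-servers=http://compose-etcd-client:2379 -tag=v0.4.18
INFO[0000] Checking installation state
INFO[0000] Install image with tag "v0.4.18" in namespace "compose"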

Check that Compose on Kubernetes is installed

You can check that Compose on Kubernetes is installed by checking for the availability of the API using the command:

$ kubectl api-versions | grep compose
compose.docker.com/v1beta1
compose.docker.com/v1beta2

Deploy a stack

To deploy a stack, you can use the Docker CLI:

$ cat docker-compose.yml
version: '3.3'

services:

  db:
    build: db
    image: dockersamples/k8s-wordsmith-db

  words:
    build: words
    image: dockersamples/k8s-wordsmith-api
    deploy:
      replicas: 5

  web:
    build: web
    image: dockersamples/k8s-wordsmith-web
    ports:
     - "33000:80"

$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml hellokube

Remove a stack

$ docker stack rm --orchestrator=kubernetes hellokube

Developing Compose on Kubernetes

See the contributing guides for how to contribute code.

Pre-requisites

  • make
  • Docker Desktop (Mac or Windows) with engine version 18.09 or later
  • Enable BuildKit by setting DOCKER_BUILDKIT=1 in your environment (see the example after this list)
  • Enable Kubernetes in Docker Desktop settings
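For example, to enable BuildKit for the current shell session before running make:

$ export DOCKER_BUILDKIT=1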

For live debugging

  • Debugger capable of remote debugging with Delve API version 2
    • Goland run-configs are pre-configured

Debug quick start

Debug install

To build and install a debug version of Compose on Kubernetes onto Docker Desktop, you can use the following command:

$ make -f debug.Makefile install-debug-images

This command:

  • Builds the images with debug symbols
  • Runs the debug installer:
    • Installs debug versions of API server and Compose controller in the docker namespace
    • Creates two debugging LoadBalancer services (unused in this mode)

You can verify that Compose on Kubernetes is running with kubectl as follows:

$ kubectl get all -n docker
NAME                               READY   STATUS    RESTARTS   AGE
pod/compose-7c4dfcff76-jgwst       1/1     Running   0          59s
pod/compose-api-759f8dbb4b-2z5n2   2/2     Running   0          59s

NAME                                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/compose-api                       ClusterIP      10.98.42.151     <none>        443/TCP           59s
service/compose-api-server-remote-debug   LoadBalancer   10.101.198.179   localhost     40001:31693/TCP   59s
service/compose-controller-remote-debug   LoadBalancer   10.101.158.160   localhost     40000:31167/TCP   59s

NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/compose       1         1         1            1           59s
deployment.apps/compose-api   1         1         1            1           59s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/compose-7c4dfcff76       1         1         1       59s
replicaset.apps/compose-api-759f8dbb4b   1         1         1       59s

If you describe one of the deployments, you should see *-debug:latest in the image name.
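For example (a minimal check; the deployment names match the kubectl get all output above), the Image line should end in -debug:latest:

$ kubectl describe deployment compose-api -n docker | grep Image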

Live debugging install

To build and install a live debugging version of Compose on Kubernetes onto Docker Desktop, you can use the following command:

$ make -f debug.Makefile install-live-debug-images

This command:

  • Builds the images with debug symbols
  • Sets the image entrypoint to run a Delve server
  • Runs the debug installer
    • Installs debug version of API server and Compose controller in the docker namespace
    • Creates two debugging LoadBalancer services
      • localhost:40000: Compose controller
      • localhost:40001: API server
  • The API server and Compose controller only start once a debugger is attached

To attach a debugger you have multiple options:

  • Use GoLand: configuration can be found in .idea of the repository
    • Select the Debug all config, setup breakpoints and start the debugger
  • Set your Delve-compatible debugger to connect to localhost:40000 and localhost:40001
    • Using a terminal: dlv connect localhost:40000 then type continue and hit enter
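A minimal terminal session (assuming the debug ports are exposed on localhost as shown above) looks like:

$ dlv connect localhost:40000
(dlv) continue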

To verify that the components are installed, you can use the following command:

$ kubectl get all -n docker

To verify that the API server has started, ensure that it has started logging:

$ kubectl logs -f -n docker deployment.apps/compose-api compose
API server listening at: [::]:40000
ERROR: logging before flag.Parse: I1207 15:25:13.760739      11 plugins.go:158] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook.
ERROR: logging before flag.Parse: I1207 15:25:13.763211      11 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
ERROR: logging before flag.Parse: W1207 15:25:13.767429      11 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
ERROR: logging before flag.Parse: W1207 15:25:13.851500      11 genericapiserver.go:319] Skipping API compose.docker.com/storage because it has no resources.
ERROR: logging before flag.Parse: I1207 15:25:13.998154      11 serve.go:116] Serving securely on [::]:9443

To verify that the Compose controller has started, ensure that it is logging:

kubectl logs -f -n docker deployment.apps/compose
API server listening at: [::]:40000
Version:    v0.4.16-dirty
Git commit: b2e3a6b-dirty
OS/Arch:    linux/amd64
Built:      Fri Dec  7 15:18:13 2018
time="2018-12-07T15:25:19Z" level=info msg="Controller ready"

Reinstall default

To reinstall the default Compose on Kubernetes on Docker Desktop, simply restart your Kubernetes cluster. You can do this by deactivating and then reactivating Kubernetes or by restarting Docker Desktop. See the contributing and debugging guides.


compose-on-kubernetes's People

Contributors

aiordache, chris-crone, djs55, guillaumerose, ijc, jcsirot, jdrouet, lalyos, lorenrh, masayuki14, mattbrowne1, ndeloof, neerolyte, rumpl, silvin-lubecki, simonferquel, thajeztah, ulyssessouza, vdemeester, waveywaves


compose-on-kubernetes's Issues

Support private images by leveraging pull secrets

Depends on #26.
We need to add a field on ServiceConfig for referencing a PullSecret, and convert it on reconciliation.
The flow would be (a sketch of the resulting Kubernetes objects follows the list):

  • $ docker stack deploy --with-registry-auth
  • for each service:
    • the Docker CLI gets the registry auth info and populates a k8s secret named <service>.pull-secret
    • the Docker CLI populates the PullSecret field in the service config with the secret name
  • the Docker CLI posts the stack
  • the Compose controller converts the service config PullSecret field into the PullSecret field of a ContainerSpec
  • Kubernetes can then use the pull secret to make the kubelet pull the image
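A hedged sketch of the kind of objects this flow would produce, using the words service from the example stack above (the secret name follows the <service>.pull-secret convention proposed here; the secret type and imagePullSecrets mechanism are standard Kubernetes, while the PullSecret field on the service config is the hypothetical addition):

apiVersion: v1
kind: Secret
metadata:
  name: words.pull-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded registry auth>
---
# what the controller would ultimately render on the generated pod template
spec:
  imagePullSecrets:
    - name: words.pull-secret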

Stack Status: more details

Currently, the stack status is very limited in terms of the information it provides (just the reconciliation phase and a single error message).

We'd like to show more information there:

  • Details about each service:
    • Link to every created child resource
    • Copy of Child Resources status
  • Scale information (similar to the scale subresource)
  • TBD

Persistent Volumes do not interact with PostgreSQL correctly

Problem
It is not possible to start up a Postgres container with Compose on Kubernetes when using a persistent volume. Docker containers run as root by default, but Postgres creates a postgres user (with reduced privileges) which needs permissions on the data directory.

Steps to Reproduce

  1. Create a docker-compose.yaml file with the following contents:
version: "3.7"
services:
  pg:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
  2. Start the cluster using Compose on Kubernetes with docker stack deploy -c .\docker-compose.yaml pg_test
  3. Observe the container repeatedly crash

The logs for the container are below:

> kubectl logs pod/pg-0 --namespace=default --container=pg -f
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
2019-05-13 09:27:09.724 UTC [77] FATAL:  data directory "/var/lib/postgresql/data" has wrong ownership
2019-05-13 09:27:09.724 UTC [77] HINT:  The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
running bootstrap script ...

Something that I think is weird is that it apparently fixes the permissions on the data directory, but then fails because of the ownership.

If I disable compose on kubernetes and start the stack using docker swarm init; docker stack deploy -c .\docker-compose.yaml pg_test the stack starts up fine, with the following logs:

> docker logs -f 48db29015d9015cc0cff33f12c4c409f194cd523dbee96e01bb1e52ac5faee23
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok


WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start
... etc.

Version Info:

> docker version
Client: Docker Engine - Community
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        6247962
 Built:             Sun Feb 10 04:12:31 2019
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false
> docker info
Containers: 36
 Running: 36
 Paused: 0
 Stopped: 0
Images: 15
Server Version: 18.09.2
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.125-linuxkit
Operating System: Docker for Windows
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.934GiB
Name: linuxkit-00155dc9730b
ID: MCUE:5JKF:T6TX:GRB3:HYDH:QE4I:ITOG:COUT:RSUR:3OB3:AJPV:VSR6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 119
 Goroutines: 124
 System Time: 2019-05-13T09:05:10.5035081Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
> kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

PersistentVolume enhancement

Our current persistent volume support lacks some important features. Mainly we need support for:

  • configurable size
  • configurable storage class
  • configurable mode
  • provisioner-specific options

For this, we'll need to introduce a root-level volume list in the stack spec so that we can populate those options easily (instead of treating all volume mounts as default 100MB volumes).
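A purely hypothetical sketch of what such a root-level volume entry could look like in a stack file (the field names below are illustrative only, not an existing API):

volumes:
  pgdata:
    size: 10Gi                 # configurable size instead of the default 100MB
    storage_class: standard    # configurable storage class
    access_mode: ReadWriteOnce # configurable mode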

k8s 1.16: serialization issue, namespace deletion hangs

Yesterday, I tried to integrate compose-on-kubernetes v0.4.24 in Docker Desktop but faced one strange issue. It's not triggered when using kind in the CI here.

Steps:

  1. Install Docker Desktop with Kubernetes 1.16 - https://download-stage.docker.com/mac/edge/39313/Docker.dmg
  2. Enable Kubernetes
  3. kubectl create ns redis
  4. kubectl delete ns redis hangs forever.

kube-controller log shows plenty of:

E1010 09:32:12.184709       1 namespace_controller.go:148] deletion of namespace redis failed: object *v1alpha3.StackList does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message

kube-compose-apiserver logs:

E1010 09:32:27.861111       1 writers.go:172] apiserver was unable to write a JSON response: object *v1alpha3.StackList does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message

Nothing special in kube-apiserver logs.

compose pods in CrashLoopBackOff with minikube

Attempting to try compose-on-kubernetes with minikube, I have followed https://github.com/docker/compose-on-kubernetes/blob/master/docs/install-on-minikube.md.

I have ended up rebuilding minikube with 8 vCPUs and 8GB of RAM to see if it was a VM resources issue, so this is from a fresh install.

Minikube already seemed to have an etcd so I skipped manually installing it:

$ kubectl get pods --all-namespaces | grep etcd
kube-system   etcd-minikube                          1/1     Running            0          17h

The installer appears to run ok:

$ ./installer-linux -namespace=compose -namespace=compose -etcd-servers=http://compose-etcd-client:2379 -tag=v0.4.18
INFO[0000] Checking installation state                  
INFO[0000] Install image with tag "v0.4.18" in namespace "compose" 
INFO[0000] Api server: image: "docker/kube-compose-api-server:v0.4.18", pullPolicy: "Always" 
INFO[0000] Controller: image: "docker/kube-compose-controller:v0.4.18", pullPolicy: "Always" 

I'm unclear from the instructions whether we're supposed to replace the -etcd-servers parameter, but it didn't error.

The next step is to check that Compose on Kubernetes is installed:

$ kubectl api-versions | grep compose
# returns nothing

At least one of the compose pods generally shows up as Error or CrashLoopBackOff:

$ kubectl get pods --namespace=compose
NAME                           READY   STATUS             RESTARTS   AGE
compose-68d845b598-95d42       1/1     Running            4          5m9s
compose-api-5f4b9d785c-lfppl   0/1     CrashLoopBackOff   4          5m9s

It looks like the liveness probe is failing too many times:

$ kubectl get events -w --namespace=compose
LAST SEEN   TYPE      REASON              KIND         MESSAGE
3m57s       Normal    Scheduled           Pod          Successfully assigned compose/compose-68d845b598-95d42 to minikube
101s        Normal    Pulling             Pod          pulling image "docker/kube-compose-controller:v0.4.18"
98s         Normal    Pulled              Pod          Successfully pulled image "docker/kube-compose-controller:v0.4.18"
98s         Normal    Created             Pod          Created container
98s         Normal    Started             Pod          Started container
52s         Warning   Unhealthy           Pod          Liveness probe failed: Get http://172.17.0.6:8080/healthz: dial tcp 172.17.0.6:8080: connect: connection refused
101s        Normal    Killing             Pod          Killing container with id docker://compose:Container failed liveness probe.. Container will be killed and recreated.
3m57s       Normal    SuccessfulCreate    ReplicaSet   Created pod: compose-68d845b598-95d42
3m57s       Normal    Scheduled           Pod          Successfully assigned compose/compose-api-5f4b9d785c-lfppl to minikube
94s         Normal    Pulling             Pod          pulling image "docker/kube-compose-api-server:v0.4.18"
90s         Normal    Pulled              Pod          Successfully pulled image "docker/kube-compose-api-server:v0.4.18"
90s         Normal    Created             Pod          Created container
90s         Normal    Started             Pod          Started container
72s         Warning   Unhealthy           Pod          Liveness probe failed: Get http://172.17.0.5:8080/healthz: dial tcp 172.17.0.5:8080: connect: connection refused
109s        Warning   BackOff             Pod          Back-off restarting failed container
3m57s       Normal    SuccessfulCreate    ReplicaSet   Created pod: compose-api-5f4b9d785c-lfppl
3m57s       Normal    ScalingReplicaSet   Deployment   Scaled up replica set compose-api-5f4b9d785c to 1
3m57s       Normal    ScalingReplicaSet   Deployment   Scaled up replica set compose-68d845b598 to 1

The installer CLI options use single dashes instead of double dashes

The GKE documentation shows the following:

Run installer-[darwin|linux|windows.exe] -namespace=compose -etcd-servers=http://compose-etcd-client:2379

While it's true that the Darwin installer, for example, shows single dashes in the usage instructions, single dashes will eventually cause the installation to fail on GKE. Double dashes work though:

Run installer-[darwin|linux|windows.exe] --namespace=compose --etcd-servers=http://compose-etcd-client:2379

"the server has asked for the client to provide credentials"

After following the install instructions and installing an extra etcd cluster, the API came up and is stable. However, any command directed via docker stack gives the answer "the server has asked for the client to provide credentials", even though the kubectl commands work nicely:

$ kubectl get services --namespace compose
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
compose-api                   ClusterIP   10.100.84.128    <none>        443/TCP             41m
etcd-restore-operator         ClusterIP   10.100.249.3     <none>        19999/TCP           28h
example-etcd-cluster          ClusterIP   None             <none>        2379/TCP,2380/TCP   45m
example-etcd-cluster-client   ClusterIP   10.100.249.165   <none>        2379/TCP            45m

$ kubectl api-versions | grep compose
compose.docker.com/v1beta1
compose.docker.com/v1beta2

$ kubectl get stacks --namespace compose
No resources found.

$ docker stack deploy --orchestrator kubernetes --namespace compose --compose-file docker-compose-v3.yaml mystackname
the server has asked for the client to provide credentials

$ docker --version
Docker version 19.03.0-rc2, build f97efcc

Am I doing something wrong, or is docker stack doing something wrong?

(cc: @henriquegibin, @adelsjnr, @thiagoscherrer)

Deploying Helm Charts using docker-compose

I was reading https://www.docker.com/blog/simplifying-kubernetes-with-docker-compose-and-friends/

We already have some thoughts about a Helm plugin to make describing your application with Compose and deploying with Helm as easy as possible.

I haven't seen any issues, documentation, or examples in this repo talking about deploying helm charts with docker-compose so I thought I would create a new one here. Is there any information about this helm plugin?

Invalid Gopkg.toml file

I wanted to give you a heads up that your Gopkg.toml file is invalid and causes problems for people trying to depend on your library.

Entries like this

[[constraint]]
name = "k8s.io/apiextensions-apiserver"
revision = "kubernetes-1.11.5"

should be

[[constraint]]
  name = "k8s.io/apiextensions-apiserver"
  version = "kubernetes-1.11.5"

revision in the Gopkg.toml means a specific git commit hash. version indicates a tag.

Otherwise, consumers of your library run into the following error when running dep ensure, if another package requires one of these projects and uses version correctly:

Solving failure: Could not introduce github.com/docker/[email protected], as it has a dependency on k8s.io/apimachinery with constraint kubernetes-1.11.5, which has no overlap with existing constraint kubernetes-1.11.5 from (root)

Add tests to the installer code

Currently, the installer is not tested at all (except the parts used for e2e test environment deployment).

We need to improve our confidence in the installer, and thus require a bunch of unit tests and e2e tests (tests with custom TLS bundle, various ETCD configurations, ...)

Update Kubernetes vendor to 1.13 or later

Kubernetes vendor is using 1.11. This ticket is to track the effort to bump to 1.13 or later.

Notice there are API changes for the bump. For example scheme.AddGeneratedConversionFuncs was removed.

"invalid configuration: no configuration has been provided" error with microk8s

I'm trying to get compose running with microk8s.

Originally I ran in to an error about .kube/config not being present:

$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml foo
stat /home/foo/.kube/config: no such file or directory

I was able to resolve that by asking kubectl to set a context name (this writes out a basic config file):

$ kubectl config set-context microk8s
Context "microk8s" created.

but then I get stuck on a less obvious error:

$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml foo
invalid configuration: no configuration has been provided

It'd be great if this error could give more context because it's unclear where it's coming from and what config is missing.

I can see with strace that docker itself is attempting to read a token file that's not present in my set up:

19480 execve("/usr/bin/docker", ["docker", "stack", "deploy", "--orchestrator=kubernetes", "-c", "docker-compose.yml", "foo"], 0x7ffc2ae6aea8 /* 65 vars */) = 0
[...]
19480 newfstatat(AT_FDCWD, "/var/run/secrets/kubernetes.io/serviceaccount/token", 0xc4202c32e8, 0) = -1 ENOENT (No such file or directory)
19480 write(2, "invalid configuration: no config"..., 58) = 58

I'm not clear what that token is actually for or how I'd tell docker where to find it.

P.S. I originally logged this at moby/moby#38602 as SEO guided me there - it'd be great to have a prominent guide on where to file bugs for the various Docker projects.

How to uninstall / remove?

I'm running docker-desktop (on Mac) v2.0.4.1 (34207) from the edge channel.

I'd like to use Kubernetes, but without this Compose integration that is automatically installed along with Kubernetes. I suspect this is the cause of high CPU usage of the hyperkit VM, but I can't prove it. I notice these types of things in the kube-apiserver logs (every minute):

I0530 18:49:45.353416       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.compose.docker.com
I0530 18:49:45.973085       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta2.compose.docker.com
I0530 18:49:46.613462       1 controller.go:107] OpenAPI AggregationController: Processing item v1alpha3.compose.docker.com

So question: is there a way to have just Kubernetes without the Compose integration in docker-desktop community edition? I tried deleting the docker namespace and that didn't help (the namespace gets stuck in Terminating state). Deleting the namespace showed these errors in kube-apiserver logs (every minute):

I0530 18:52:46.620395       1 controller.go:107] OpenAPI AggregationController: Processing item v1alpha3.compose.docker.com
W0530 18:52:46.620493       1 handler_proxy.go:89] no RequestInfo found in the context
E0530 18:52:46.620538       1 controller.go:114] loading OpenAPI spec for "v1alpha3.compose.docker.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0530 18:52:46.620550       1 controller.go:127] OpenAPI AggregationController: action for item v1alpha3.compose.docker.com: Rate Limited Requeue.

with continuing high cpu usage for the hyperkit VM...

I can not deploy to GKE with "failed to find a Stack API version"

I tried to install following the document below:
https://github.com/docker/compose-on-kubernetes/blob/master/docs/install-on-gke.md
But I couldn't run compose-on-kubernetes with GKE.

So I ran the installation process twice, but it still doesn't work.
These are the commands I executed the second time.

% kubectl create namespace compose
Error from server (AlreadyExists): namespaces "compose" already exists

% kubectl -n kube-system create serviceaccount tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists

% kubectl -n kube-system create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller" already exists

% helm init --service-account tiller
$HELM_HOME has been configured at /Users/masayuki14/.helm.
Warning: Tiller is already installed in the cluster.

% helm install --name etcd-operator stable/etcd-operator --namespace compose
Error: a release named etcd-operator already exists.
Run: helm ls --all etcd-operator; to check the status of the release
Or run: helm del --purge etcd-operator; to delete it

% kubectl apply -f compose-etcd.yaml
etcdcluster.etcd.database.coreos.com "compose-etcd" configured

% kubectl create clusterrolebinding <ACCOUNT>-cluster-admin-binding --clusterrole=cluster-admin --user=<ACCOUNT>
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "<ACCOUNT>-cluster-admin-binding" already exists

% ./installer-darwin -namespace=compose -etcd-servers=http://compose-etcd-client:2379 -tag=v0.4.18
INFO[0000] Checking installation state
INFO[0000] Compose version v0.4.18 is already installed in namespace "compose" with the same settings

After that, I ran the deploy:

% cli/build/docker-darwin-amd64 stack deploy -c compose-on-k8s.yml --orchestrator=kubernetes mycluster
failed to find a Stack API version

Docker CLI version

% cli/build/docker-darwin-amd64 version 
Client:
 Version:           19.09.0-dev
 API version:       1.39 (downgraded from 1.40)
 Go version:        go1.12.5
 Git commit:        0f337f1
 Built:             Wed May 29 07:55:36 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false

I cannot find the compose API:

% kubectl api-versions | grep compose

`Unauthorized` error with k3s on `docker stack deploy` command.

Hello Folks,

I'm trying to use compose-on-kubernetes with k3s 0.8.0 and I followed the minikube instructions. In the last step, which uses the docker stack deploy command, I just get an unauthorized response.

Compose-api logs:

I0810 02:26:28.784451       1 plugins.go:158] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook.
I0810 02:26:28.784490       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
W0810 02:26:28.784675       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0810 02:26:28.786584       1 client.go:352] parsed scheme: ""
I0810 02:26:28.786600       1 client.go:352] scheme "" not registered, fallback to default scheme
I0810 02:26:28.786653       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{compose-etcd-client:2379 0  <nil>}]
I0810 02:26:28.786811       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{compose-etcd-client:2379 <nil>}]
I0810 02:26:28.787854       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{compose-etcd-client:2379 <nil>}]
I0810 02:26:28.787877       1 client.go:352] parsed scheme: ""
I0810 02:26:28.787893       1 client.go:352] scheme "" not registered, fallback to default scheme
I0810 02:26:28.787932       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{compose-etcd-client:2379 0  <nil>}]
I0810 02:26:28.787970       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{compose-etcd-client:2379 <nil>}]
I0810 02:26:28.788409       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{compose-etcd-client:2379 <nil>}]
I0810 02:26:28.789231       1 client.go:352] parsed scheme: ""
I0810 02:26:28.789248       1 client.go:352] scheme "" not registered, fallback to default scheme
I0810 02:26:28.789286       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{compose-etcd-client:2379 0  <nil>}]
I0810 02:26:28.789362       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{compose-etcd-client:2379 <nil>}]
I0810 02:26:28.789770       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{compose-etcd-client:2379 <nil>}]
W0810 02:26:28.806836       1 genericapiserver.go:344] Skipping API compose.docker.com/storage because it has no resources.
I0810 02:26:28.871306       1 secure_serving.go:116] Serving securely on [::]:9443
I0810 02:26:29.116527       1 client.go:352] parsed scheme: ""
I0810 02:26:29.116561       1 client.go:352] scheme "" not registered, fallback to default scheme
I0810 02:26:29.116624       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{compose-etcd-client:2379 0  <nil>}]
I0810 02:26:29.116704       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{compose-etcd-client:2379 <nil>}]
I0810 02:26:29.117845       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{compose-etcd-client:2379 <nil>}]

Compose logs:

Version:    v0.4.23
Git commit: cc4914d
OS/Arch:    linux/amd64
Built:      Wed Jun  5 12:33:17 2019
W0810 02:26:26.550350       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2019-08-10T02:26:29Z" level=info msg="Controller ready"

How can I debug this? Is it somehow related to the controller log?
I'm using K3S 0.8.0 (which uses kubernetes 1.14.5-k3s).

Thanks in advance.

Install on Azure AKS guide: Adjust etcd server URL when using TLS

In the subsection Compose on Kubernetes of the Installation on Azure AKS guide, it seems that the installer command

installer-[darwin|linux|windows.exe] ... -etcd-servers=http://compose-etcd-client:2379 ...

should be tweaked when using Mutual TLS into something like

installer-[darwin|linux|windows.exe] ... -etcd-servers=https://compose-etcd-client:2379 ...

(i.e., https instead of http), right?

Also, it seems vital that the server certificates used for setting etcd up with mutual TLS, as indicated in the Deploy etcd guide, Option 2, must contain the name given above as argument to --etcd-servers=, right? Currently, the guide just says it must contain compose-etcd.compose.svc, not compose-etcd-client.

I suggest adjusting the two guides to clarify these points.

As an aside, debugging compose-api when misconfigured with HTTP instead of HTTPS was not easy, as the log messages did not indicate any such misconfiguration.

Stack API shows UNKNOWN under Compose on Kubernetes for Minikube

I followed the official Compose on Kubernetes README doc for Minikube. I installed Docker Desktop 18.09.0 on macOS and followed the complete doc (as shown in detail below). The Stack API still fails and shows UNKNOWN. Does it still require a 19.03.x release? The Compose for Minikube doc doesn't mention the 19.03.x release. Do you think we need to update our doc?

Below are the detailed steps to reproduce the issue:

Pre-requisite

  • Install Docker Desktop on MacOS
  • Enable Kubernetes with the below feature enabled

Verifying Docker Desktop

🚩 >  docker version
Client: Docker Engine - Community
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:47:43 2018
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:55:00 2018
  OS/Arch:          linux/amd64
  Experimental:     true
 Kubernetes:
  Version:          v1.10.3
  StackAPI:         v1beta2

Installing Minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
  && chmod +x minikube

Verifying Minikube Version

minikube version
minikube version: v0.32.0

Checking Minikube Status

minikube status
host: Stopped
kubelet:
apiserver:

Starting Minikube

🚩 >  minikube start
Starting local Kubernetes v1.12.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Machine exists, restarting cluster components...
Verifying kubelet health ...
Verifying apiserver health ....Kubectl is now configured to use the cluster.
Loading cached images from config file.


Everything looks great. Please enjoy minikube!

Checking the Status

🚩 >  minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

Verifying Minikube Cluster Nodes

 kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    12h       v1.12.4

kubectl create namespace compose
namespace "compose" created

Creating the tiller service account

kubectl -n kube-system create serviceaccount tiller
serviceaccount "tiller" created

Give it admin access to your cluster (note: you might want to reduce the scope of this):

kubectl -n kube-system create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding "tiller" created

Initializing the helm component.

🚩 >  helm init --service-account tiller
$HELM_HOME has been configured at /Users/ajeetraina/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

🚩 >  minikube status
host: Running
kubelet: Running
apiserver: Running

kubectl -n kube-system get pod
NAME                                    READY     STATUS    RESTARTS   AGE
coredns-576cbf47c7-fsk76                1/1       Running   1          12h
coredns-576cbf47c7-xc2br                1/1       Running   1          12h
etcd-minikube                           1/1       Running   1          12h
kube-addon-manager-minikube             1/1       Running   1          12h
kube-apiserver-minikube                 1/1       Running   0          11m
kube-controller-manager-minikube        1/1       Running   0          11m
kube-proxy-8kcjr                        1/1       Running   0          11m
kube-scheduler-minikube                 1/1       Running   1          12h
kubernetes-dashboard-5bff5f8fb8-qfrwl   1/1       Running   3          12h
storage-provisioner                     1/1       Running   3          12h
tiller-deploy-694dc94c65-tt27k          1/1       Running   0          4m

Deploy etcd operator and create an etcd cluster

🚩 >  helm install --name etcd-operator stable/etcd-operator --namespace compose
NAME:   etcd-operator
LAST DEPLOYED: Fri Jan 11 10:08:06 2019
NAMESPACE: compose
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                                               SECRETS  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        1s
etcd-operator-etcd-operator-etcd-operator          1        1s
etcd-operator-etcd-operator-etcd-restore-operator  1        1s

==> v1beta1/ClusterRole
NAME                                       AGE
etcd-operator-etcd-operator-etcd-operator  1s

==> v1beta1/ClusterRoleBinding
NAME                                               AGE
etcd-operator-etcd-operator-etcd-backup-operator   1s
etcd-operator-etcd-operator-etcd-operator          1s
etcd-operator-etcd-operator-etcd-restore-operator  1s

==> v1/Service
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
etcd-restore-operator  ClusterIP  10.104.102.245  <none>       19999/TCP  1s

==> v1beta1/Deployment
NAME                                               DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        1        1           0          1s
etcd-operator-etcd-operator-etcd-operator          1        1        1           0          1s
etcd-operator-etcd-operator-etcd-restore-operator  1        1        1           0          1s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
etcd-operator-etcd-operator-etcd-backup-operator-7978f8bc4r97s7  0/1    ContainerCreating  0         1s
etcd-operator-etcd-operator-etcd-operator-6c57fff9d5-kdd7d       0/1    ContainerCreating  0         1s
etcd-operator-etcd-operator-etcd-restore-operator-6d787599vg4rb  0/1    ContainerCreating  0         1s


NOTES:
1. etcd-operator deployed.
  If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
  Check the etcd-operator logs
    export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace compose --output name)
    kubectl logs $POD --namespace=compose
🚩 >

Copy the below content into compose-etcd.yml

cat compose-etcd.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "compose-etcd"
  namespace: "compose"
spec:
  size: 3
  version: "3.2.13"

kubectl apply -f compose-etcd.yml
etcdcluster "compose-etcd" created


This should bring an etcd cluster in the compose namespace.

Download the Compose Installer

wget https://github.com/docker/compose-on-kubernetes/releases/download/v0.4.17/installer-darwin

./installer-darwin -namespace=compose -etcd-servers=http://compose-etcd-client:2379 -tag=v0.4.17
INFO[0000] Checking installation state
INFO[0000] Install image with tag "v0.4.16" in namespace "compose"
INFO[0000] Api server: image: "docker/kube-compose-api-server:v0.4.17", pullPolicy: "Always"
INFO[0000] Controller: image: "docker/kube-compose-controller:v0.4.17", pullPolicy: "Always"

kubectl api-versions | grep compose
🚩 >

minikube service list
|-------------|-----------------------|--------------|
|  NAMESPACE  |         NAME          |     URL      |
|-------------|-----------------------|--------------|
| compose     | compose-api           | No node port |
| compose     | compose-etcd          | No node port |
| compose     | compose-etcd-client   | No node port |
| compose     | etcd-restore-operator | No node port |
| default     | kubernetes            | No node port |
| kube-system | kube-dns              | No node port |
| kube-system | kubernetes-dashboard  | No node port |
| kube-system | tiller-deploy         | No node port |
|-------------|-----------------------|--------------|
🚩 >

Verifying StackAPI

🚩 >  docker version
Client: Docker Engine - Community
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:47:43 2018
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:55:00 2018
  OS/Arch:          linux/amd64
  Experimental:     true
 Kubernetes:
  Version:          v1.12.4
  StackAPI:         Unknown
🚩 >

Installation instructions needed for microk8s

Installation on Microk8s, following the minikube instructions (expanded on https://collabnix.com/a-first-look-at-compose-on-kubernetes-for-minikube/) starts everything running:

$ kubectl get all --namespace compose
NAME                                                                  READY   STATUS             RESTARTS   AGE
pod/compose-68d845b598-9wg4x                                          0/1     CrashLoopBackOff   5          7m40s
pod/compose-api-5f4b9d785c-42kgf                                      0/1     CrashLoopBackOff   5          7m40s
pod/compose-etcd-client-4m5r9kkrwk                                    0/1     Init:0/1           0          11m
pod/etcd-operator-etcd-operator-etcd-backup-operator-59856df67qsbqb   1/1     Running            0          15m
pod/etcd-operator-etcd-operator-etcd-operator-796dccdf7f-46fvr        1/1     Running            0          15m
pod/etcd-operator-etcd-operator-etcd-restore-operator-55db95d6t64rv   1/1     Running            0          15m

NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/compose-api                  ClusterIP   10.152.183.62    <none>        443/TCP             7m40s
service/compose-etcd-client          ClusterIP   None             <none>        2379/TCP,2380/TCP   11m
service/compose-etcd-client-client   ClusterIP   10.152.183.105   <none>        2379/TCP            11m
service/etcd-restore-operator        ClusterIP   10.152.183.193   <none>        19999/TCP           15m

NAME                                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/compose                                             0/1     1            0           7m40s
deployment.apps/compose-api                                         0/1     1            0           7m40s
deployment.apps/etcd-operator-etcd-operator-etcd-backup-operator    1/1     1            1           15m
deployment.apps/etcd-operator-etcd-operator-etcd-operator           1/1     1            1           15m
deployment.apps/etcd-operator-etcd-operator-etcd-restore-operator   1/1     1            1           15m

NAME                                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/compose-68d845b598                                             1         1         0       7m40s
replicaset.apps/compose-api-5f4b9d785c                                         1         1         0       7m40s
replicaset.apps/etcd-operator-etcd-operator-etcd-backup-operator-59856df67f    1         1         1       15m
replicaset.apps/etcd-operator-etcd-operator-etcd-operator-796dccdf7f           1         1         1       15m
replicaset.apps/etcd-operator-etcd-operator-etcd-restore-operator-55db95d6f8   1         1         1       15m

However the API server crashes before providing the API.

$ kubectl logs --namespace compose pod/compose-api-5f4b9d785c-42kgf    
ERROR: logging before flag.Parse: I0112 21:16:45.598859       1 plugins.go:158] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook.
ERROR: logging before flag.Parse: I0112 21:16:45.598949       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
ERROR: logging before flag.Parse: W0112 21:16:45.599137       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
ERROR: logging before flag.Parse: F0112 21:17:05.601791       1 storage_decorator.go:57] Unable to create storage backend: config (&{ /registry/docker.com/stacks [http://compose-etcd-client:2379]    true false 0 0xc000386000 <nil> 5m0s 1m0s}), err (dial tcp: lookup compose-etcd-client on 127.0.0.53:53: read udp 127.0.0.1:38953->127.0.0.53:53: read: connection refused)

Not able to pull development images from docker hub registry

While installing compose-on-kubernetes on OpenShift, Docker is not able to pull the images.
I had a successful installation a few days ago, but upon trying to install again I got the following errors in the events log.

$ oc get events

25s         33s          3         compose-6f86bfd5df-kb9p2.15872989417a6a81       Pod                                     Normal    SandboxChanged      kubelet, shiftnode1     Pod sandbox changed, it will be killed and re-created.
22s         1m           2         compose-6f86bfd5df-kb9p2.1587297fa20fd69c       Pod          spec.containers{compose}   Normal    Pulling             kubelet, shiftnode1     pulling image "docker/kube-compose-dev-controller:v0.4.18"
12s         55s          2         compose-api-7d54c4c88b-5dsv4.1587298436dad512   Pod          spec.containers{compose}   Warning   Failed              kubelet, shiftnode1     Failed to pull image "docker/kube-compose-dev-api-server:v0.4.18": rpc error: code = Unknown desc = repository docker.io/docker/kube-compose-dev-api-server not found: does not exist or no pull access
12s         55s          2         compose-api-7d54c4c88b-5dsv4.1587298436db5795   Pod          spec.containers{compose}   Warning   Failed              kubelet, shiftnode1     Error: ErrImagePull
11s         51s          3         compose-api-7d54c4c88b-5dsv4.15872985211b345f   Pod          spec.containers{compose}   Normal    BackOff             kubelet, shiftnode1     Back-off pulling image "docker/kube-compose-dev-api-server:v0.4.18"
11s         51s          3         compose-api-7d54c4c88b-5dsv4.15872985211bae48   Pod          spec.containers{compose}   Warning   Failed              kubelet, shiftnode1     Error: ImagePullBackOff

I was able to pull the docker/kube-compose-controller and docker/kube-compose-api-server images but not the developer ones.

docker images | grep compose
docker/kube-compose-controller                                       latest              634012027887        4 weeks ago         30.8MB
docker/kube-compose-api-server                                       latest              3a4f4d2514c0        4 weeks ago         48.4MB
$ docker version
Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        6247962
 Built:             Sun Feb 10 04:13:56 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 03:47:25 2019
  OS/Arch:          linux/amd64
  Experimental:     false
$ docker info
Containers: 19
 Running: 0
 Paused: 0
 Stopped: 19
Images: 21
Server Version: 18.09.2
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.18.10-200.fc28.x86_64
Operating System: Fedora 28 (Workstation Edition)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 19.31GiB
Name: localhost.localdomain
ID: 6DYY:6P6W:LHQN:S7HN:MQHN:3W3T:SCWZ:WLWT:GWXS:4RQM:OLCD:DEBO
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

Issue with Compose on Kubernetes Installation on GKE

Infrastructure Setup:

  • Docker Desktop 2.0 - Docker for Mac
  • Docker Version: 18.09
  • Kubernetes Enabled

Verifying Docker Version

[Captains-Bay]🚩 >  docker version
Client: Docker Engine - Community
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:47:43 2018
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:55:00 2018
  OS/Arch:          linux/amd64
  Experimental:     true
 Kubernetes:
  Version:          v1.10.3
  StackAPI:         v1beta2

Authenticating GCP Account

[Captains-Bay]🚩 >  gcloud auth login
Your browser has been opened to visit:

    


WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see:
  gcloud auth application-default --help

You are now logged in as [[email protected]].
Your current project is [None].  You can change this setting by running:
  $ gcloud config set project PROJECT_ID


Updates are available for some Cloud SDK components.  To install them,
please run:
  $ gcloud components update

[Captains-Bay]🚩 >

Verifying GKE context on UI

[Captains-Bay]🚩 >
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project sturdy-pivot-225203
Fetching cluster endpoint and auth data.
kubeconfig entry generated for mycluster.
[Captains-Bay]🚩 >

Listing the GKE Cluster Nodes

[Captains-Bay]🚩 >  kubectl get nodes
NAME                                 STATUS    ROLES     AGE       VERSION
gke-mycluster-pool-1-c1fb7d56-kjbf   Ready     <none>    5m        v1.10.9-gke.5
[Captains-Bay]🚩 >

Creating Compose Namespace

[Captains-Bay]🚩 >  kubectl create namespace compose
namespace "compose" created
[Captains-Bay]🚩 >

Install Helm server side component

[Captains-Bay]🚩 >  kubectl -n kube-system create serviceaccount tiller
serviceaccount "tiller" created
kubectl -n kube-system create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding "tiller" created
[Captains-Bay]🚩 >  helm init --service-account tiller
$HELM_HOME has been configured at /Users/ajeetraina/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Deploy etcd operator and create an etcd cluster

🚩 >  helm install --name etcd-operator stable/etcd-operator --namespace compose
NAME:   etcd-operator
LAST DEPLOYED: Tue Dec 11 09:10:05 2018
NAMESPACE: compose
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/ClusterRole
NAME                                       AGE
etcd-operator-etcd-operator-etcd-operator  2s

==> v1beta1/ClusterRoleBinding
NAME                                               AGE
etcd-operator-etcd-operator-etcd-backup-operator   2s
etcd-operator-etcd-operator-etcd-operator          2s
etcd-operator-etcd-operator-etcd-restore-operator  2s

==> v1/Service
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)    AGE
etcd-restore-operator  ClusterIP  10.23.242.119  <none>       19999/TCP  2s

==> v1beta1/Deployment
NAME                                               DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        1        1           0          2s
etcd-operator-etcd-operator-etcd-operator          1        1        1           0          1s
etcd-operator-etcd-operator-etcd-restore-operator  1        1        1           0          1s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
etcd-operator-etcd-operator-etcd-backup-operator-687bb97bfz6gpr  0/1    ContainerCreating  0         1s
etcd-operator-etcd-operator-etcd-operator-cdd58665b-bhk4w        0/1    ContainerCreating  0         1s
etcd-operator-etcd-operator-etcd-restore-operator-65585cb5psjw8  0/1    ContainerCreating  0         1s

==> v1/ServiceAccount
NAME                                               SECRETS  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        2s
etcd-operator-etcd-operator-etcd-operator          1        2s
etcd-operator-etcd-operator-etcd-restore-operator  1        2s


NOTES:
1. etcd-operator deployed.
  If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
  Check the etcd-operator logs
    export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace compose --output name)
    kubectl logs $POD --namespace=compose
[Captains-Bay]🚩 >

Create an etcd cluster definition like this one in a file named compose-etcd.yaml:

[Captains-Bay]🚩 >  cat compose-etcd.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "compose-etcd"
  namespace: "compose"
spec:
  size: 3
  version: "3.2.13"
[Captains-Bay]🚩 >

Applying via Kubectl

[Captains-Bay]🚩 >  kubectl apply -f compose-etcd.yaml
etcdcluster "compose-etcd" created

Deploy Compose on Kubernetes

./installer-darwin -namespace=compose -etcd-servers=http://compose-etcd-client:2379 -tag=v0.4.16
INFO[0001] Checking installation state
INFO[0001] Install image with tag "v0.4.16" in namespace "compose"
panic: clusterroles.rbac.authorization.k8s.io "compose-service" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:[""], Resources:["users"], Verbs:["impersonate"]} PolicyRule{APIGroups:[""], Resources:["groups"], Verbs:["impersonate"]} PolicyRule{APIGroups:[""], Resources:["serviceaccounts"], Verbs:["impersonate"]} PolicyRule{APIGroups:["authentication.k8s.io"], Resources:["*"], Verbs:["impersonate"]} PolicyRule{APIGroups:[""], Resources:["services"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["deployments"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["statefulsets"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["daemonsets"], Verbs:["get"]} PolicyRule{APIGroups:["apps"], Resources:["services"], Verbs:["get"]} PolicyRule{APIGroups:["apps"], Resources:["deployments"], Verbs:["get"]} PolicyRule{APIGroups:["apps"], Resources:["statefulsets"], Verbs:["get"]} PolicyRule{APIGroups:["apps"], Resources:["daemonsets"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["pods/log"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["pods/log"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["pods/log"], Verbs:["list"]} PolicyRule{APIGroups:["compose.docker.com"], Resources:["stacks"], Verbs:["*"]} PolicyRule{APIGroups:["compose.docker.com"], Resources:["stacks/owner"], Verbs:["get"]} PolicyRule{APIGroups:["admissionregistration.k8s.io"], Resources:["validatingwebhookconfigurations"], Verbs:["get"]} PolicyRule{APIGroups:["admissionregistration.k8s.io"], Resources:["validatingwebhookconfigurations"], Verbs:["watch"]} PolicyRule{APIGroups:["admissionregistration.k8s.io"], Resources:["validatingwebhookconfigurations"], Verbs:["list"]} PolicyRule{APIGroups:["admissionregistration.k8s.io"], Resources:["mutatingwebhookconfigurations"], Verbs:["get"]} PolicyRule{APIGroups:["admissionregistration.k8s.io"], Resources:["mutatingwebhookconfigurations"], Verbs:["watch"]} PolicyRule{APIGroups:["admissionregistration.k8s.io"], Resources:["mutatingwebhookconfigurations"], Verbs:["list"]} PolicyRule{APIGroups:["apiregistration.k8s.io"], Resources:["apiservices"], ResourceNames:["v1beta1.compose.docker.com"], Verbs:["*"]} PolicyRule{APIGroups:["apiregistration.k8s.io"], Resources:["apiservices"], ResourceNames:["v1beta2.compose.docker.com"], Verbs:["*"]} PolicyRule{APIGroups:["apiregistration.k8s.io"], Resources:["apiservices"], Verbs:["create"]}] user=&{[email protected]  [system:authenticated] map[user-assertion.cloud.google.com:[AM6SrXjz6L54zf1yqYO3RrbOiwbxOHBLEr6A+7JhbGB0b46crtNgcevIEOLNBEV6BwPVdc22jnS80nU78tJkHqHBswocvOetoqpTdcbw2lBxD8jezfLsJqet7R74gGAMuVYuPAcIaA2OjZKBaAgAtXQ+TZF249TQ4WUwsgmAJH7jMBHj5X9NxFOkGrtRrU8yjeOCuS11uWJDkV2oxuzT5BB+ILHDXkYUz7Id6JpoDiU=]]} ownerrules=[PolicyRule{APIGroups:["authorization.k8s.io"], Resources:["selfsubjectaccessreviews" "selfsubjectrulesreviews"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/openapi" "/openapi/*" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version" "/version/"], Verbs:["get"]}] ruleResolutionErrors=[]

goroutine 1 [running]:
main.main()
	/root/src/github.com/docker/compose-on-kubernetes/cmd/installer/main.go:105 +0x1da

Verifying if RBAC is already enabled

[Captains-Bay]🚩 >  kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
cloud.google.com/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scalingpolicy.kope.io/v1alpha1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
[Captains-Bay]🚩 >

Any idea what could be the issue?
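The "attempt to grant extra privileges" panic looks like Kubernetes RBAC escalation prevention: the account running the installer does not itself hold all the permissions the compose-service ClusterRole asks for. On GKE the usual remedy is to bind the installing account to cluster-admin before re-running the installer. A minimal sketch, assuming that is the cause here (the subject name is a placeholder for the account shown in the panic):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: installer-cluster-admin       # any unique name works
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: <your-gcp-account-email>      # the user shown in the panic above
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin                 # grants the rights the installer needs to create compose-service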

Feature request: Ingress resources

I'm experimenting with deploying existing Swarm stacks to Kubernetes via the Compose on Kubernetes controller. On Docker Swarm, the services are using Traefik for http/https ingress, configured with labels on Swarm services. Is there some way to adapt this to Ingress resources on Kubernetes via the Compose controller?
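One possible workaround, sketched below, is to define the Ingress outside the compose file and point it at the Service that the controller creates for the compose service. All names, the host, the annotation, and the apiVersion are illustrative assumptions, not something the controller generates:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
  annotations:
    kubernetes.io/ingress.class: traefik    # assuming Traefik also runs as an ingress controller in the cluster
spec:
  rules:
  - host: web.example.com                   # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: web                  # the Service created for the "web" compose service
          servicePort: 80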

Build error: too many arguments in call to yaml.NewDecoder

With Golang 1.12.6:

BUILDSTDERR: # github.com/docker/compose-on-kubernetes/internal/parsing
BUILDSTDERR: _build/src/github.com/docker/compose-on-kubernetes/internal/parsing/loader.go:14:22: too many arguments in call to yaml.NewDecoder
BUILDSTDERR: _build/src/github.com/docker/compose-on-kubernetes/internal/parsing/loader.go:14:45: undefined: yaml.WithLimitDecodedValuesCount

NewDecoder seems to only take one argument:

func newDecoder(strict bool) *decoder {
	d := &decoder{mapType: defaultMapType, strict: strict}
	d.aliases = make(map[*node]bool)
	return d
}

I can't find yaml.WithLimitDecodedValuesCount anywhere; presumably the build expects the project's vendored copy of the yaml package (which adds that option) rather than the upstream gopkg.in/yaml.v2.

Error: the server is currently unable to handle the request (post stacks.compose.docker.com)

Pre-requisite

  • Install Docker Desktop on macOS
  • Enable Kubernetes

Verifying Minikube Status

🚩 >  minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

Verifying Current Context

kubectl config current-context
minikube

Listing out Services running in Minikube

🚩 >  minikube service list
|-------------|-----------------------|--------------|
|  NAMESPACE  |         NAME          |     URL      |
|-------------|-----------------------|--------------|
| compose     | compose-api           | No node port |
| compose     | compose-etcd          | No node port |
| compose     | compose-etcd-client   | No node port |
| compose     | etcd-restore-operator | No node port |
| default     | kubernetes            | No node port |
| kube-system | kube-dns              | No node port |
| kube-system | kubernetes-dashboard  | No node port |
| kube-system | tiller-deploy         | No node port |
|-------------|-----------------------|--------------|
kubectl api-versions | grep compose
compose.docker.com/v1beta1
compose.docker.com/v1beta2

Verifying Docker Version

🚩 >  docker version
Client: Docker Engine - Community
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        4c52b90
 Built:             Wed Jan  9 19:33:12 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:49 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 Kubernetes:
  Version:          v1.12.4
  StackAPI:         v1beta2

Create docker-compose file

version: '3.4'
services:
  web1:
    image: nginx:alpine
    ports:
      - "81:80"
    deploy:
      replicas: 3
  db1:
    image: redis:alpine
    deploy:
      replicas: 2

Listing out the stack

docker stack ls
the server is currently unable to handle the request (get stacks.compose.docker.com)
 docker stack deploy --orchestrator=kubernetes -c docker-compose.yml
the server is currently unable to handle the request (post stacks.compose.docker.com)

  kubectl get all --namespace compose
NAME                                                                 READY     STATUS             RESTARTS   AGE
po/compose-api-7f95fcd458-x8zrn                                      0/1       CrashLoopBackOff   7          19m
po/compose-d4696764f-fh6tq                                           1/1       Running            8          19m
po/etcd-operator-etcd-operator-etcd-backup-operator-7978f8bc4hbrk9   1/1       Running            0          15m
po/etcd-operator-etcd-operator-etcd-operator-6c57fff9d5-772w9        1/1       Running            0          15m
po/etcd-operator-etcd-operator-etcd-restore-operator-6d787599pz788   1/1       Running            0          15m

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
svc/compose-api             ClusterIP   10.100.245.80    <none>        443/TCP             19m
svc/compose-etcd            ClusterIP   None             <none>        2379/TCP,2380/TCP   14m
svc/compose-etcd-client     ClusterIP   10.107.111.166   <none>        2379/TCP            14m
svc/etcd-restore-operator   ClusterIP   10.108.77.217    <none>        19999/TCP           15m

NAME                                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/compose                                             1         1         1            1           19m
deploy/compose-api                                         1         1         1            0           19m
deploy/etcd-operator-etcd-operator-etcd-backup-operator    1         1         1            1           15m
deploy/etcd-operator-etcd-operator-etcd-operator           1         1         1            1           15m
deploy/etcd-operator-etcd-operator-etcd-restore-operator   1         1         1            1           15m

NAME                                                              DESIRED   CURRENT   READY     AGE
rs/compose-api-7f95fcd458                                         1         1         0         19m
rs/compose-d4696764f                                              1         1         1         19m
rs/etcd-operator-etcd-operator-etcd-backup-operator-7978f8bc47    1         1         1         15m
rs/etcd-operator-etcd-operator-etcd-operator-6c57fff9d5           1         1         1         15m
rs/etcd-operator-etcd-operator-etcd-restore-operator-6d787599f8   1         1         1         15m
[Captains-Bay]🚩 >
kubectl logs --namespace compose po/compose-api-7f95fcd458-x8zrn
ERROR: logging before flag.Parse: I0118 15:58:53.147046       1 plugins.go:158] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook.
ERROR: logging before flag.Parse: I0118 15:58:53.148052       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
ERROR: logging before flag.Parse: W0118 15:58:53.148642       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
ERROR: logging before flag.Parse: F0118 15:59:13.173387       1 storage_decorator.go:57] Unable to create storage backend: config (&{ /registry/docker.com/stacks [http://compose-etcd-client:2379]    true false 0 0xc000461900 <nil> 5m0s 1m0s}), err (dial tcp 10.107.111.166:2379: i/o timeout)

Any idea? It was working for me a few minutes back.

etcd dependency

Why store all the state in a separate etcd? You could store the state back into Kubernetes via a CRD specific to compose-on-kubernetes. This would make compose-on-kubernetes totally stateless and thus easier to manage.
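For illustration only, a rough sketch of the CRD this proposal implies (today the project instead serves stacks through an aggregated API server backed by its own etcd):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: stacks.compose.docker.com
spec:
  group: compose.docker.com
  version: v1beta2        # illustrative; the served versions would follow the existing API
  scope: Namespaced
  names:
    plural: stacks
    singular: stack
    kind: Stack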

Document installation on other managed clusters

We currently have documentation for installing on Minikube and Azure AKS.
We should also document the installation on:

  • Google GKE
  • Amazon EKS
  • Kubernetes on Amazon AWS
  • Play with Kubernetes
  • Kind (Kubernetes in Docker)
  • Digital Ocean
  • Alibaba Cloud (Help wanted)
  • [ insert your favorite k8s environment here ]

Based on the existing documentation, this should be straightforward.

Replace testkit with Kind, and test on Kubernetes v1.13

Originally we ran e2e tests with Testkit, an internal tool at Docker for creating clusters.
We have recently moved to Kind for testing PRs, and as the current in-dev branch will only be deployed in Docker products based on Kubernetes 1.11+, we can now remove everything related to testkit from the master branch and use Kind everywhere. (We'll keep the testkit config files in our CI environment so we can still build older 1.10+ branches if required.)

Additionally, we need to run our e2e suite on Kubernetes 1.11, 1.12 and 1.13, and we need to re-establish a baseline for our benchmark to avoid false positives.
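For reference, a minimal Kind cluster config per Kubernetes version under test could look like the sketch below; the exact apiVersion and node image tags depend on the Kind release, so treat them as illustrative:

kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
  image: kindest/node:v1.13.4    # swap the tag for v1.11.x / v1.12.x runs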

Prepare v0.4.18

We are about to tag a release before merging v1alpha3.
We first need to update the docs, as the --skip-livenessprobes flag won't be needed anymore.

[Networking] Use ClusterIP when possible

Context

Today, Compose on Kubernetes creates headless Services to enable service-to-service communication. This brings the same developer experience as working with docker-compose or Swarm, but it is not always the best option. We should prefer creating a ClusterIP Service when we have enough information to configure it properly.

Headless service pros

  • No kube-proxy involved: direct pod-to-pod communication means less latency and less load on the kube-proxy component
  • No list of exposed ports required: as there is no proxy involved, a headless service allows reaching any port opened by a pod

Headless service cons

  • No real load balancing: the DNS server answers with as many records as there are pods matching the selector. It is the responsibility of the client's network stack to decide whether to round-robin over those DNS entries. Many development runtimes don't do that, and worse, some only take the first entry into account
  • Scalability issues: depending on the client's DNS caching policy, its knowledge of the valid pod IPs can be stale. Worse, some client network stacks have a very aggressive DNS caching policy and never re-query the DNS server: they can become stuck trying to reach pods that have been unscheduled due to a change in service scaling.

Potential solution

Introduce the option to use ClusterIP services instead of headless ones, using this strategy:

  • introduce a field on ServiceConfig named InternalServiceType that can be Headless or ClusterIP
  • introduce a default behavior that looks at the Expose field in ServiceConfig (see https://docs.docker.com/compose/compose-file/#expose): if it is empty, default to Headless; otherwise use ClusterIP

The ClusterIP port configuration will use the Expose field of the ServiceConfig to decide which ports to route.
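To make the two shapes concrete, a rough sketch of the Services involved; the selector labels are illustrative placeholders, not the labels the controller actually sets:

# Current behavior: headless Service, DNS resolves directly to pod IPs
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None
  selector:
    app: web          # illustrative selector
---
# Proposed when `expose` lists ports: ClusterIP Service, kube-proxy load-balances the exposed ports
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080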

Mapping to job?

deploy:
  replicas: 1
  restart_policy:
    condition: on-failure

has two very different behaviors:
in the Docker world, the service is restarted until it completes successfully, then it is left alone.
in the Kubernetes world, the service is constantly restarted, even after it has completed successfully.

It seems that the (expected) Docker behavior would best be achieved by mapping the service to a Job.
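For illustration, a sketch of the Job such a service might map to; this is not current controller behavior, and the name, image and command are placeholders:

apiVersion: batch/v1
kind: Job
metadata:
  name: my-task
spec:
  completions: 1              # corresponds to replicas: 1
  backoffLimit: 6             # retry budget for failed runs; compose has no direct equivalent
  template:
    spec:
      restartPolicy: OnFailure    # retried until the container exits 0, then left alone
      containers:
      - name: my-task
        image: alpine:3.9         # placeholder image
        command: ["sh", "-c", "exit 0"]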

Be able to choose the healthcheck api-server port

Currently in Docker Desktop, we deploy Compose on Kubernetes with the option --network-host=true so that we can reuse the same etcd as the Kubernetes core API.

The api-server's default healthcheck port is 8080, which collides with containers that users deploy by hand.

Steps:

  1. Activate Kubernetes in Docker Desktop
  2. docker run -p 8080:80 -it nginx

Output: Error starting userland proxy: listen tcp 0.0.0.0:8080: bind: address already in use.

When doing netstat inside the VM: tcp 0 0 :::8080 :::* LISTEN 8655/api-server

Is it possible to add a flag for this port, so that we can use a less common port and leave 8080 free for our users?

Introduce a v1alpha3 api version so that we can bring new features

We currently support v1beta1 and v1beta2 api versions.

In the future, we want to introduce new features (candidate areas being pull secrets, pull policy, ingress controller interaction, …). For this we will need to make changes to the stack spec structure, and thus we need a new alpha API version that we can modify over time.

panic: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 10.96.0.1:443: connect: connection refused

When I restart my Docker Desktop on Windows there's a (presumably transient) "connection refused" which causes compose-on-kubernetes to print its command-line arguments and panic.

I see the following:

PS C:\Users\djs> kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS             RESTARTS   AGE
docker        compose-6c67d745f6-n78x4                 1/1     Running            1          11m
docker        compose-api-57ff65b8c7-5n9zc             0/1     CrashLoopBackOff   1          11m
kube-system   coredns-fb8b8dccf-gvss4                  1/1     Running            3          13m
kube-system   coredns-fb8b8dccf-wwq9z                  1/1     Running            3          13m
kube-system   etcd-docker-desktop                      1/1     Running            1          12m
kube-system   kube-apiserver-docker-desktop            1/1     Running            1          12m
kube-system   kube-controller-manager-docker-desktop   1/1     Running            1          12m
kube-system   kube-proxy-qcld8                         1/1     Running            1          13m
kube-system   kube-scheduler-docker-desktop            1/1     Running            1          12m
PS C:\Users\djs> kubectl logs -n docker compose-api-57ff65b8c7-5n9zc
I0724 14:31:56.884772       1 client.go:352] parsed scheme: ""
I0724 14:31:56.884881       1 client.go:352] scheme "" not registered, fallback to default scheme
I0724 14:31:56.885429       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0724 14:31:56.885994       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0724 14:31:56.906510       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 10.96.0.1:443: connect: connection refused
Usage:
   [flags]

Flags:
      --admission-control-config-file string                    File with admission control configuration.
      --audit-dynamic-configuration                             Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
      --audit-log-batch-buffer-size int                         The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
      --audit-log-batch-max-size int                            The maximum size of a batch. Only used in batch mode. (default 1)
      --audit-log-batch-max-wait duration                       The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
      --audit-log-batch-throttle-burst int                      Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
      --audit-log-batch-throttle-enable                         Whether batching throttling is enabled. Only used in batch mode.
      --audit-log-batch-throttle-qps float32                    Maximum average number of batches per second. Only used in batch mode.
      --audit-log-format string                                 Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
      --audit-log-maxage int                                    The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
      --audit-log-maxbackup int                                 The maximum number of old audit log files to retain.
      --audit-log-maxsize int                                   The maximum size in megabytes of the audit log file before it gets rotated.
      --audit-log-mode string                                   Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
      --audit-log-path string                                   If set, all requests coming to the apiserver will be logged to this file.  '-' means standard out.
      --audit-log-truncate-enabled                              Whether event and batch truncating is enabled.
      --audit-log-truncate-max-batch-size int                   Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
      --audit-log-truncate-max-event-size int                   Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
      --audit-log-version string                                API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
      --audit-policy-file string                                Path to the file that defines the audit policy configuration.
      --audit-webhook-batch-buffer-size int                     The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
      --audit-webhook-batch-max-size int                        The maximum size of a batch. Only used in batch mode. (default 400)
      --audit-webhook-batch-max-wait duration                   The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
      --audit-webhook-batch-throttle-burst int                  Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
      --audit-webhook-batch-throttle-enable                     Whether batching throttling is enabled. Only used in batch mode. (default true)
      --audit-webhook-batch-throttle-qps float32                Maximum average number of batches per second. Only used in batch mode. (default 10)
      --audit-webhook-config-file string                        Path to a kubeconfig formatted file that defines the audit webhook configuration.
      --audit-webhook-initial-backoff duration                  The amount of time to wait before retrying the first failed request. (default 10s)
      --audit-webhook-mode string                               Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
      --audit-webhook-truncate-enabled                          Whether event and batch truncating is enabled.
      --audit-webhook-truncate-max-batch-size int               Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
      --audit-webhook-truncate-max-event-size int               Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
      --audit-webhook-version string                            API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
      --authentication-kubeconfig string                        kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenaccessreviews.authentication.k8s.io.
      --authentication-skip-lookup                              If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
      --authentication-token-webhook-cache-ttl duration         The duration to cache responses from the webhook token authenticator. (default 10s)
      --authentication-tolerate-lookup-failure                  If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
      --authorization-always-allow-paths strings                A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
      --authorization-kubeconfig string                         kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
      --authorization-webhook-cache-authorized-ttl duration     The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
      --authorization-webhook-cache-unauthorized-ttl duration   The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
      --bind-address ip                                         The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
      --ca-bundle-file string                                   defines the path to the CA bundle file
      --cert-dir string                                         The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
      --client-ca-file string                                   If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
      --contention-profiling                                    Enable lock contention profiling, if profiling is enabled
      --default-watch-cache-size int                            Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
      --delete-collection-workers int                           Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
      --disable-admission-plugins strings                       admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
      --enable-admission-plugins strings                        admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
      --enable-garbage-collector                                Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
      --encryption-provider-config string                       The file containing configuration for encryption providers to be used for storing secrets in etcd
      --etcd-cafile string                                      SSL Certificate Authority file used to secure etcd communication.
      --etcd-certfile string                                    SSL certification file used to secure etcd communication.
      --etcd-compaction-interval duration                       The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
      --etcd-count-metric-poll-period duration                  Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
      --etcd-keyfile string                                     SSL key file used to secure etcd communication.
      --etcd-prefix string                                      The prefix to prepend to all resource paths in etcd. (default "/registry/docker.com/stacks")
      --etcd-servers strings                                    List of etcd servers to connect with (scheme://ip:port), comma separated.
      --etcd-servers-overrides strings                          Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
      --healthz-check-port int                                  defines the port used by healthz check server (0 to disable it) (default 8080)
  -h, --help                                                    help for this command
      --http2-max-streams-per-connection int                    The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default. (default 1000)
      --kubeconfig string                                       kubeconfig file pointing at the 'core' kubernetes server.
      --log-flush-frequency duration                            Maximum number of seconds between log flushes (default 5s)
      --profiling                                               Enable profiling via web interface host:port/debug/pprof/ (default true)
      --requestheader-allowed-names strings                     List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
      --requestheader-client-ca-file string                     Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
      --requestheader-extra-headers-prefix strings              List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
      --requestheader-group-headers strings                     List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
      --requestheader-username-headers strings                  List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
      --secure-port int                                         The port on which to serve HTTPS with authentication and authorization.If 0, don't serve HTTPS at all. (default 443)
      --service-name string                                     defines the name of the service exposing the aggregated API
      --service-namespace string                                defines the namespace of the service exposing the aggregated API
      --storage-backend string                                  The storage backend for persistence. Options: 'etcd3' (default).
      --storage-media-type string                               The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. (default "application/json")
      --tls-cert-file string                                    File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
      --tls-cipher-suites strings                               Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be use.  Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12
      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.
      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
      --watch-cache                                             Enable watch caching in the apiserver (default true)
      --watch-cache-sizes strings                               Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size

panic: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 10.96.0.1:443: connect: connection refused

goroutine 1 [running]:
main.main()
        /go/src/github.com/docker/compose-on-kubernetes/cmd/api-server/main.go:17 +0x5a
PS C:\Users\djs>

A little while later the other pod failed (presumably a cascading failure):

PS C:\Users\djs> kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS                 RESTARTS   AGE
docker        compose-6c67d745f6-n78x4                 0/1     CrashLoopBackOff       1          20m
docker        compose-api-57ff65b8c7-5n9zc             0/1     CreateContainerError   1          20m
kube-system   coredns-fb8b8dccf-gvss4                  1/1     Running                3          22m
kube-system   coredns-fb8b8dccf-wwq9z                  1/1     Running                3          22m
kube-system   etcd-docker-desktop                      1/1     Running                1          21m
kube-system   kube-apiserver-docker-desktop            1/1     Running                1          21m
kube-system   kube-controller-manager-docker-desktop   1/1     Running                1          21m
kube-system   kube-proxy-qcld8                         1/1     Running                1          22m
kube-system   kube-scheduler-docker-desktop            1/1     Running                1          21m
PS C:\Users\djs> kubectl logs -n docker compose-6c67d745f6-n78x4
Version:    v0.4.23
Git commit: cc4914d
OS/Arch:    linux/amd64
Built:      Wed Jun  5 12:33:17 2019
W0724 15:31:53.121828       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2019-07-24T14:42:17Z" level=fatal msg="cannot watch stacks"

Compose on Kubernetes crashes when an Azure Kubernetes Service (AKS) node restarts

We followed the blog here to install Compose on Kubernetes on a 1-node Azure AKS cluster: https://github.com/docker/compose-on-kubernetes/blob/master/docs/install-on-aks.md

We've followed these instructions to the letter, including making sure that we install an etcd cluster separate from the default etcd instance that comes with Kubernetes.

Everything works great on first install.

As advertised, we are able to run docker stack deploy successfully, and deploy our containers and services using our compose YAML files.

Problem

However, when we restart the AKS Node, the Compose and Compose API deployments fail to start with the following errors:

Compose

Liveness probe failed: Get http://10.240.0.27:8080/healthz: dial tcp 10.240.0.27:8080: connect: connection refused

Compose API

Liveness probe failed: Get http://10.240.0.11:8080/healthz: dial tcp 10.240.0.11:8080: connect: connection refused
Back-off restarting failed container

The pods fail to start, with the following error:

Waiting: CrashLoopBackOff

Deleting the pods does not help. New Pods throw the same error.

Also, trying to run any docker stack command when the compose containers are in this state throws the following error:

$ docker stack ls --orchestrator=kubernetes

the server is currently unable to handle the request (get stacks.compose.docker.com)

Deleting and re-installing the Compose API using installer-windows.exe -namespace=compose -etcd-servers=http://compose-etcd-client:2379 -tag=v0.4.18 gets the pods to start again, but the service remains broken, throwing the previous error the server is currently unable to handle the request (get stacks.compose.docker.com) whenever we run any docker stack command. This is despite all pods and deployments now being in a green state.

In short, restarting the AKS node completely breaks the Compose API.

The only way we've found so far to restore the API is to completely delete the AKS cluster and create a new one. Not a tenable production solution.

Expected behavior:

Restarting AKS nodes should bring all components of Compose on Kubernetes back online automatically, and developers should be able to run docker stack as soon as the node is back online, without further intervention.

How do I check whether my Docker CLI build includes a given PR?

I am trying to use Compose on Kubernetes on GKE, following this note from the docs:

Important: You will need a custom build of Docker CLI to deploy stacks onto GKE. The build must include this PR which has been merged onto the master branch.

But I'm not sure whether the Docker CLI on my local machine is the right build.
How do I check it?

Thank you to the developer team.

Bind mount e2e test has been reported as flaky

This requires more investigation. The logs I have are as follows:

• Failure in Spec Teardown (AfterEach) [37.347 seconds]
Compose fry
/go/src/github.com/docker/compose-on-kubernetes/e2e/compose_test.go:97
  Should support bind volumes [AfterEach]
  /go/src/github.com/docker/compose-on-kubernetes/e2e/compose_test.go:877

  Expected
      <bool>: false
  to be true

  /go/src/github.com/docker/compose-on-kubernetes/e2e/compose_test.go:898

Add remove stack command to README

I tried to use Compose on Kubernetes according to the README, and I could access the deployed stack at localhost:33000.
But I could not stop the stack because I am a beginner.
So I think adding instructions for removing a stack to the README would be better for beginners.

thanks.

Installer should create default cluster roles

Kubernetes has a feature called aggregated cluster roles (see https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles). The idea is that these roles are a union of all roles matching their label selector. Kubernetes comes with three such built-in cluster roles (view, edit, admin). So...

As a Kubernetes administrator

  • I want users with global view, edit, or admin roles
  • to be able to view/edit/admin stacks
  • without additional role bindings

Acceptance criteria:

  • The Compose on Kubernetes installer creates compose-stack-view, compose-stack-edit and compose-stack-admin ClusterRoles
  • Those roles carry the correct labels so that they match the built-in view, edit and admin aggregated cluster role selectors
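A sketch of what the view-level role could look like; the edit and admin variants would carry the corresponding aggregation labels and broader verbs (the role name follows the acceptance criteria above, the rest is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: compose-stack-view
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"   # picked up by the built-in "view" aggregated role
rules:
- apiGroups: ["compose.docker.com"]
  resources: ["stacks"]
  verbs: ["get", "list", "watch"]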

Liveness Probe fails on Minikube

When installing without using --skip-liveness-probes, the installation process also fails on Minikube.

Environments:

  • Minikube 0.32
  • Kubernetes v1.11.5, v1.11.6

e2e test "delete stacks with propagation=Foreground" is flaky

Executing this test may lead to a crash of the compose controller pod

time="2019-02-21T16:48:19Z" level=debug msg="Sending stack deletion request: e2e-tests-compose-481797610012260660/app"
time="2019-02-21T16:48:19Z" level=error msg="Unable to get stack \"e2e-tests-compose-481797610012260660/app\" owner: stacks.compose.docker.com \"app\" not found"
panic: fatal error: controller cannot retrive ownership information from a stack

goroutine 85 [running]:
github.com/docker/compose-on-kubernetes/internal/controller.(*stackOwnerCache).get(0xc0000980e0, 0xc0001fcb40, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/docker/compose-on-kubernetes/internal/controller/stackownercache.go:131 +0x175
github.com/docker/compose-on-kubernetes/internal/controller.(*impersonatingResourceUpdaterProvider).getUpdater(0xc0001bc3c0, 0xc0001fcb40, 0x0, 0xc00052c780, 0x0, 0x0, 0x28)
	/go/src/github.com/docker/compose-on-kubernetes/internal/controller/resourceupdater.go:24 +0x7f
github.com/docker/compose-on-kubernetes/internal/controller.(*StackReconciler).deleteStackChildren(0xc000682240, 0xc0001fcb40)
	/go/src/github.com/docker/compose-on-kubernetes/internal/controller/stackreconciler.go:114 +0x219
github.com/docker/compose-on-kubernetes/internal/controller.(*StackReconciler).Start.func1(0xc00048e240, 0xc00048e420, 0xc000682240, 0xc0000803c0)
	/go/src/github.com/docker/compose-on-kubernetes/internal/controller/stackreconciler.go:80 +0x19a
created by github.com/docker/compose-on-kubernetes/internal/controller.(*StackReconciler).Start
	/go/src/github.com/docker/compose-on-kubernetes/internal/controller/stackreconciler.go:70 +0x6d
