docker_certified_associate_certification's Introduction

1- Installation and Configuration (15% of Exam)

  • Docker Installation on Multiple Platforms (CentOS/Red Hat) / (Debian/Ubuntu)
  • Selecting a Storage Driver
  • Configuring Logging Drivers (Splunk, Journald, etc.)
  • Setting Up Swarm (Configure Managers)
  • Setting Up Swarm (Add Nodes)
  • Setting Up Swarm (Backup and Restore)
  • Outline the Sizing Requirements Prior to Installation
  • Install/uninstall Universal Control Plane (UCP)
  • Install Docker Trusted Registry (DTR) for Secure Cluster Management
  • Backups for UCP and DTR
  • Create and Manage UCP Users and Teams
  • Namespaces, CGroups, and Certificates
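The Swarm bullets above boil down to a handful of commands. A minimal sketch (the IP address is a placeholder; the backup path assumes a default Linux install):

```shell
# Configure the first manager; its address becomes the advertise address.
docker swarm init --advertise-addr 192.168.0.10

# Print join commands (with tokens) for additional managers or workers.
docker swarm join-token manager
docker swarm join-token worker

# Back up Swarm state: stop the daemon, archive the raft store, restart.
systemctl stop docker
tar -czf swarm-backup.tar.gz /var/lib/docker/swarm
systemctl start docker
```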

2- Image Creation, Management, and Registry (20% of Exam)

  • Pull an Image from a Registry (Using Docker Pull and Docker Images)
  • Searching an Image Repository
  • Tag an Image
  • Use CLI Commands to Manage Images (List, Delete, Prune, RMI, etc.)
  • Inspect Images and Report Specific Attributes Using Filter and Format
  • Container Basics - Running, Attaching to, and Executing Commands in Containers
  • Create an Image with Dockerfile
  • Dockerfile Options, Structure, and Efficiencies (Part I)
  • Dockerfile Options, Structure, and Efficiencies (Part II)
  • Describe and Display How Image Layers Work
  • Modify an Image to a Single Layer
  • Selecting a Docker Storage Driver
  • Prepare for a Docker Secure Registry
  • Deploy, Configure, Log Into, Push, and Pull an Image in a Registry
  • Managing Images in Your Private Repository
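As a quick reference for the image-management bullets, a hedged sketch (the registry host is a placeholder):

```shell
# Pull, tag, and push an image to a private registry.
docker pull alpine:3.7
docker tag alpine:3.7 myregistry.example.com:5000/alpine:3.7
docker push myregistry.example.com:5000/alpine:3.7

# Inspect specific attributes with --format; list and clean up.
docker image inspect alpine:3.7 --format '{{.Os}}/{{.Architecture}}'
docker images --filter dangling=true
docker image prune
```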

3- Orchestration (25% of Exam)

  • State the Difference Between Running a Container and Running a Service
  • Demonstrate Steps to Lock (and Unlock) a Cluster
  • Extend the Instructions to Run Individual Containers into Running Services Under Swarm and Manipulate a Running Stack of Services
  • Increase and Decrease the Number of Replicas in a Service
  • Running Replicated vs. Global Services
  • Demonstrate the Usage of Templates with 'docker service create'
  • Apply Node Labels for Task Placement
  • Convert an Application Deployment into a Stack File Using a YAML Compose File with 'docker stack deploy'
  • Understanding the 'docker inspect' Output
  • Identify the Steps Needed to Troubleshoot a Service Not Deploying
  • How Dockerized Apps Communicate with Legacy Systems
  • Paraphrase the Importance of Quorum in a Swarm Cluster
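A few of the orchestration objectives above, sketched as commands (service names are placeholders):

```shell
# A service is a declarative object scheduled by Swarm, unlike a one-off "docker run".
docker service create --name web --replicas 2 -p 80:80 nginx

# Increase / decrease the number of replicas.
docker service scale web=5

# Global mode runs exactly one task per node instead of a replica count.
docker service create --name mon --mode global alpine sleep 1d

# First step when a service is not deploying: look at the task errors.
docker service ps web --no-trunc
```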

4- Storage and Volumes (10% of Exam)

  • State Which Graph Driver Should Be Used on Which OS
  • Summarize How an Image Is Composed of Multiple Layers on the Filesystem
  • Describe How Storage and Volumes Can Be Used Across Cluster Nodes for Persistent Storage
  • Identify the Steps You Would Take to Clean Up Unused Images (and Other Resources) On a File System (CLI)
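The cleanup bullet maps onto the prune family of commands, for example:

```shell
docker system df                # show disk usage per image/container/volume
docker image prune              # remove dangling images
docker system prune             # also remove stopped containers, unused networks, build cache
docker system prune --volumes   # additionally remove unused volumes (destructive)
```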

5- Networking (15% of Exam)

  • Create a Docker Bridge Network for a Developer to Use for Their Containers
  • Configure Docker for External DNS
  • Publish a Port So That an Application Is Accessible Externally and Identify the Port and IP It Is On
  • Deploy a Service on a Docker Overlay Network
  • Describe the Built-In Network Drivers and Use Cases for Each, and Detail the Difference Between Host and Ingress Network Port Publishing Modes
  • Troubleshoot Container and Engine Logs to Understand Connectivity Issues Between Containers
  • Understanding the Container Network Model
  • Understand and Describe the Traffic Types that Flow Between the Docker Engine, Registry and UCP Components
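A hedged sketch of the networking bullets (network and service names are placeholders):

```shell
# User-defined bridge network for a developer's containers.
docker network create --driver bridge devnet
docker run -d --name app --network devnet -p 8080:80 nginx
docker port app                 # identify the published port and IP

# Overlay network, and a service published in host mode instead of ingress.
docker network create --driver overlay --attachable appnet
docker service create --name web --network appnet \
  --publish mode=host,published=80,target=80 nginx
```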

6- Security (15% of Exam)

  • Describe the Process of Signing an Image and Enable Docker Content Trust
  • Demonstrate That an Image Passes a Security Scan
  • Identity Roles
  • Configure RBAC and Enable LDAP in UCP
  • Demonstrate Creation and Use of UCP Client Bundles and Protect the Docker Daemon With Certificates
  • Describe the Process to Use External Certificates with UCP and DTR
  • Describe Default Docker Swarm and Engine Security
  • Describe MTLS
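For the content-trust bullet, signing is switched on per shell with an environment variable; pushes are then signed and pulls are verified:

```shell
# Enable Docker Content Trust for this shell session.
export DOCKER_CONTENT_TRUST=1

# Subsequent pushes prompt for signing keys; unsigned pulls are refused, e.g.:
# docker push myregistry.example.com:5000/alpine:3.7
```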

docker_certified_associate_certification's People

Contributors

ajnouri

Forkers

phahok, phulei

docker_certified_associate_certification's Issues

docker-compose : cannot remove a container stopped by docker client: removing: device or resource busy

I accidentally stopped a container started by docker-compose, and now I can't remove it with either docker or docker-compose:

$ docker-compose down
Removing testdb_myfakesql_1 ... error

ERROR: for testdb_myfakesql_1  Driver aufs failed to remove root filesystem 7ace9f9d55471a4e338cc61971979a2dbffb94f5e9f90760b805a0b576497a47: rename /var/lib/docker/aufs/mnt/c2fca7ae013e977ddb81c5b8113432938587ae4a77dc95274e2ff56fd4f6d054 /var/lib/docker/aufs/mnt/c2fca7ae013e977ddb81c5b8113432938587ae4a77dc95274e2ff56fd4f6d054-removing: device or resource busy
Removing network testdb_default
ajn@~/github/fake-mysql-db/test_db$ docker-compose ps
       Name                    Command             State    Ports
-----------------------------------------------------------------
testdb_myfakesql_1   docker-entrypoint.sh mysqld   Exit 0       
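A common way out of this aufs "device or resource busy" state (a hedged workaround, not a guaranteed fix): find which host process still holds the container's mount, or restart the daemon and force-remove the container.

```shell
# Which host processes still hold the mount? (filesystem ID from the error above)
grep -l c2fca7ae013e977ddb81c5b8113432938587ae4a77dc95274e2ff56fd4f6d054 /proc/*/mountinfo

# If nothing obvious holds it, restarting the daemon usually releases the mount.
sudo systemctl restart docker
docker rm -f testdb_myfakesql_1
```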

Rancher: hosts join rancher only with IP

Using the host name (/etc/hosts on both the Rancher server and the RancherOS node is properly configured to resolve rancheros1):

[docker@rancheros1 ~]$ sudo docker run -e CATTLE_AGENT_IP="rancheros1"  -e CATTLE_HOST_LABELS='typelabel=node&rolelabel=test'  --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://ajnouri.info:8080/v1/scripts/DE0B5699DFFA63EE9734:1514678400000:5SVXq7ImsOJkT9NugiquuC7kimU
ERROR: Invalid CATTLE_AGENT_IP (rancheros1)

Using host IP:

[docker@rancheros1 ~]$ sudo docker run -e CATTLE_AGENT_IP="192.168.99.100"  -e CATTLE_HOST_LABELS='typelabel=node&rolelabel=test'  --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://ajnouri.info:8080/v1/scripts/DE0B5699DFFA63EE9734:1514678400000:5SVXq7ImsOJkT9NugiquuC7kimU

INFO: Running Agent Registration Process, CATTLE_URL=http://ajnouri.info:8080/v1
INFO: Attempting to connect to: http://ajnouri.info:8080/v1
INFO: http://ajnouri.info:8080/v1 is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: false
INFO: Host writable: true
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_ACCESS_KEY=C4900994E8A2EBB705BA
INFO: ENV: CATTLE_AGENT_IP=192.168.99.100
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_HOST_LABELS=typelabel=node&rolelabel=test
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_URL=http://ajnouri.info:8080/v1
INFO: ENV: DETECTED_CATTLE_AGENT_IP=192.168.0.254
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
INFO: Deleting container rancher-agent
INFO: Launched Rancher Agent: c1723a28c8ea593815f5e87eec83ad24113c09292699f39532a5295bf324e67e
[docker@rancheros1 ~]$ 
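The agent validates CATTLE_AGENT_IP as an IP literal, so a hostname is rejected even when it resolves. A small sketch (resolve_ip is a hypothetical helper built on standard tools; the registration URL is elided) that resolves the name first and passes the result:

```shell
# Resolve a host name to its first IPv4 address.
resolve_ip() { getent ahostsv4 "$1" | awk 'NR==1 {print $1}'; }

AGENT_IP="$(resolve_ip rancheros1 || true)"
# sudo docker run -e CATTLE_AGENT_IP="$AGENT_IP" ... rancher/agent:v1.2.9 <registration URL>
```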

Rancher: Error using Rancher-NFS volume with a service

I have a three node Rancher cluster and an NFS server:
selection_001_02_05

Tested NFS from another client and it works:

$ sudo mount X.X.X.83:/var/nfs/general /mnt/nfs
$ ls /mnt/nfs
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
X.X.X.83:/var/nfs/general 46G 2.6G 41G 6% /mnt/nfs
$ ls /mnt/nfs
fromserver

Configured the Rancher-NFS driver:
selection_004_02_05

Configured a service:

selection_006_02_05

But got error related to NFS:
selection_003_02_05

FATA[0000] The Docker port is externally accessible on this node, accepting connections on port 2375. This node is insecure. Learn more at https://docker.com/ddc-18

When trying to install UCP on dind (docker-in-docker cluster):

docker container run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:2.2.4 install --host-address 172.17.0.10 --interactive

INFO[0000] Verifying your system is compatible with UCP 2.2.4 (168ec746e)
INFO[0000] Your engine version 17.12.1-ce, build 7390fc6 (3.16.0-4-amd64) is compatible
Admin Username: admin
Admin Password:
Confirm Admin Password:
INFO[0013] All required images are present
WARN[0013] None of the hostnames we'll be using in the UCP certificates [124915a288ea 127.0.0.1 172.18.0.1 172.17.0.10] contain a domain component. Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect. You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
FATA[0000] The Docker port is externally accessible on this node, accepting connections on port 2375. This node is insecure. Learn more at https://docker.com/ddc-18
/ #

Cannot log in to the private registry

Configuring a private Docker registry with a self-signed certificate.

Connection to the new registry is refused:

Here is the prior configuration:

mkdir certs
mkdir auths

openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/dockerrepo.key -x509 -days 365 -out certs/dockerrepo.crt -subj /CN=myregistry.ajnouri.com                                                                       

Generating a 4096 bit RSA private key
............................................................................................................................++
.....................................................................................................................++
writing new private key to 'certs/dockerrepo.key'


mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000

cd /etc/docker/certs.d/myregistrydomain.com\:5000 

cp /home/certs/dockerrepo.crt   ca.crt
make sure root owns ca.crt

docker pull registry:2

docker run --entrypoint htpasswd registry:2 -Bbn test password > auth/htpasswd

docker run -d -p 5000:5000 -v /etc/docker/certs.d/myregistry.ajnouri.com\:500/certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/home/ajn/certs/dockerrepo.crt -e REGISTRY_HTTP_TLS_KEY=/home/ajn/certs/dockerrepo.key -v /etc/docker/certs.d/myregistry.ajnouri.com\:500/auth:/auth -e REGISTRY_AUTH=htpasswd -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/home/ajn/auth/htpasswd registry:2

And when trying to login:

[root@dockerstd1 myregistry.ajnouri.com:500]# docker login myregistry.ajnouri.com:5000/mybusybox

Username: test
Password:
Error response from daemon: Get https://myregistry.ajnouri.com:5000/v2/: dial tcp 192.168.0.149:5000: getsockopt: connection refused
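Note that the run command above mounts host paths under `myregistry.ajnouri.com:500` (port 500, not 5000) and points the TLS/auth environment variables at `/home/ajn/...`, which are host paths rather than paths inside the container; `docker login` is also given an image path instead of a registry host. A corrected sketch, assuming the certs and htpasswd live in `/home/ajn/certs` and `/home/ajn/auth` on the host:

```shell
docker run -d -p 5000:5000 --name registry \
  -v /home/ajn/certs:/certs \
  -v /home/ajn/auth:/auth \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/dockerrepo.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/dockerrepo.key \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2

# Log in against the registry host, not an image path.
docker login myregistry.ajnouri.com:5000
```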

Mounting NFS Volume: permission denied

Created volume with docker NFS driver
[ajn@manager1 ~]$ docker volume create --driver local --opt type=nfs --opt o=addr=192.168.0.146,rw --opt device=:/var/nfs/general native-nfs

Tried to run a container that mounts the NFS volume:
[ajn@manager1 ~]$ docker run --rm -it -v native-nfs:/mnt alpine

docker: Error response from daemon: error while mounting volume with options: type='nfs' device=':/nfs' o='addr=192.168.0.146,rw': permission denied.
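This "permission denied" usually comes from the NFS export rather than Docker: the share must be exported read-write to the client network, and root squashing can block the mount the daemon performs as root. A hedged /etc/exports line on the NFS server (the network range is a placeholder):

```shell
# /etc/exports on the NFS server:
#   /var/nfs/general 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)
# then re-export the shares:
sudo exportfs -ra
```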

DTR installation : FATA[0001] failed to choose ucp node: The UCP node 'swarm-manager1' has port conflicts, please pick another node or choose a different port.

Docker in Docker swarm cluster
Trying to install DTR on a worker, using the UCP command:

selection_002_28_03

/ # docker run -it --rm docker/dtr install \
>   --ucp-node swarm-manager1 \
>   --ucp-username admin \
>   --ucp-url https://192.168.123.2 \
>   --ucp-insecure-tls \
>   --ucp-password KlRMLDmZThD80v0uF1Zmc42rA2AJ2u/S

Unable to find image 'docker/dtr:latest' locally
latest: Pulling from docker/dtr
605ce1bd3f31: Already exists
3229f5297e59: Pull complete
311610a93755: Pull complete
33fb3c0b5eca: Pull complete
Digest: sha256:713cd5692136d203d10a94084dca13c1918f3ef25543e3908d9358dad83e2aac
Status: Downloaded newer image for docker/dtr:latest
INFO[0000] Beginning Docker Trusted Registry installation
INFO[0000] Validating UCP cert
INFO[0000] Connecting to UCP
FATA[0001] failed to choose ucp node: The UCP node 'swarm-manager1' has port conflicts, please pick another node or choose a different port.
/ #
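By default DTR publishes its web UI on ports 80/443, which UCP already occupies on a manager, hence the port conflict on 'swarm-manager1'. A hedged fix is to target a worker node and/or move the replica ports (node name is a placeholder):

```shell
docker run -it --rm docker/dtr install \
  --ucp-node worker1 \
  --replica-http-port 8080 --replica-https-port 8443 \
  --ucp-username admin --ucp-url https://192.168.123.2 --ucp-insecure-tls
```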

UCP install : FATA[0002] The following required ports are already in use on your host

When installing UCP:

docker container run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:2.2.4 install --host-address 192.168.218.104 --interactive

WARNING: IPv4 forwarding is disabled. Networking will not work.
INFO[0000] Verifying your system is compatible with UCP 2.2.4 (168ec746e) 
INFO[0000] Your engine version 17.12.0-ce, build c97c6d6 (3.10.0-693.el7.x86_64) is compatible 
WARN[0000] Your system does not have enough memory.  UCP suggests a minimum of 2.00 GB, but you only have 1.87 GB.  You may have unexpected errors. 
Admin Username: admin
Admin Password: 
invalid: Admin Password - must be at least 8 characters
Admin Password: 
Confirm Admin Password: 
INFO[0026] Pulling required images... (this may take a while) 
INFO[0026] Pulling docker/ucp-swarm:2.2.4               
INFO[0031] Pulling docker/ucp-auth:2.2.4                
INFO[0035] Pulling docker/ucp-controller:2.2.4          
INFO[0040] Pulling docker/ucp-etcd:2.2.4                
INFO[0045] Pulling docker/ucp-cfssl:2.2.4               
INFO[0049] Pulling docker/ucp-auth-store:2.2.4          
INFO[0055] Pulling docker/ucp-agent:2.2.4               
INFO[0060] Pulling docker/ucp-compose:2.2.4             
INFO[0066] Pulling docker/ucp-hrm:2.2.4                 
INFO[0069] Pulling docker/ucp-dsinfo:2.2.4              
INFO[0106] Pulling docker/ucp-metrics:2.2.4             
WARN[0113] None of the hostnames we'll be using in the UCP certificates [manager1 127.0.0.1 172.17.0.1 192.168.218.104] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases 

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: 
FATA[0002] The following required ports are already in use on your host - 12386, 12376, 12381, 2376, 12380, 12385, 12384, 12379, 12382, 443, 12383, 12387.  You may specify an alternative port number to 2376 with the --swarm-port argument.

Here is the content of /etc/hosts on all nodes:

[root@manager1 ajn]# sudo cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.218.104 manage1
192.168.218.104 ucp.example.com
192.168.218.155 worker1
192.168.218.155 dtr.example.com
192.168.218.134 worker2

And none of the mentioned ports is open:

[root@manager1 ajn]#netstat -ntulpn

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 974/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1185/master
tcp6 0 0 :::80 :::* LISTEN 977/dockerd
tcp6 0 0 :::22 :::* LISTEN 974/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1185/master
tcp6 0 0 :::2377 :::* LISTEN 977/dockerd
tcp6 0 0 :::7946 :::* LISTEN 977/dockerd
udp 0 0 0.0.0.0:4789 0.0.0.0:* -
udp6 0 0 :::7946 :::* 977/dockerd
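When netstat shows the ports free, the conflict is often leftover state from an earlier, partially failed UCP install that still reserves them. A hedged cleanup before retrying (the image tag matches the one used above):

```shell
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:2.2.4 uninstall-ucp --interactive
```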

NFS storage: Inconsistent storage between replicas in different swarm nodes

1- Persistent NFS volume with containers; this is used as a control test of the NFS volume.

Created volume with docker NFS driver
[ajn@manager1 ~]$ docker volume create --driver local --opt type=nfs --opt o=addr=192.168.0.146,rw --opt device=:/var/nfs/general native-nfs

Run a container that mounts the NFS volume:
[ajn@manager1 ~]$ docker run --rm -it -v native-nfs:/data alpine

/ # ls /data
fromserver
/ #

It works fine: the file "fromserver" was created on the NFS server and is visible from container1 (OK).
Let's create a file "fromcontainer1":

/ # touch /data/fromcontainer1
/ # exit

Now let's create a second container and check the data written by container1:
[ajn@manager1 ~]$ docker run --rm -it -v native-nfs:/data alpine

/ # ls /data
fromcontainer1 fromserver

Data created from the first container is visible from the second container (OK)


2- Persistent NFS volume with swarm service

Now, let's test NFS storage with services.
We keep the same volume created in part1:

[ajn@manager1 ~]$ docker volume ls

DRIVER VOLUME NAME
local native-nfs

Launched the service
[ajn@manager1 ~]$ docker service create --name serviceweb1 -p 80:80 --mount source=native-nfs,target=/data --detach=false --replicas 3 nginx

x84mzibymv9ju4nx4eicq83hh
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service converged

[ajn@manager1 ~]$ docker service ps serviceweb1

ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
jjnlx1mceoem serviceweb1.1 nginx:latest worker3.ajnouri.com Running Running less than a second ago
lguwb24jrnuw serviceweb1.2 nginx:latest manager1.ajnouri.com Running Running 40 seconds ago
ku833ahrgw9l serviceweb1.3 nginx:latest worker1.ajnouri.com Running Running 39 seconds ago

selection_001_26_02

Let's access "serviceweb1.1" container from the manager1 node:
[ajn@manager1 ~]$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b89267982ae2 nginx:latest "nginx -g 'daemon of…" 2 minutes ago Up 2 minutes 80/tcp serviceweb1.2.lguwb24jrnuw3h6ahtkw7wta9

Let's check persistent storage from serviceweb1.2:

[ajn@manager1 ~]$ docker exec b89267982ae2 ls /data

fromcontainer1
fromcontainer2
fromserver

So far so good, the replica on the manager connects to NFS storage (OK)

Now, let's create data on the persistent storage from serviceweb1.2:
[ajn@manager1 ~]$ docker exec b89267982ae2 touch /data/serviceweb1.2

Now let's connect to another replica on another node "worker1"

[root@worker1 ~]# docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a7542ddb01c3 nginx:latest "nginx -g 'daemon of…" 8 minutes ago Up 8 minutes 80/tcp serviceweb1.3.ku833ahrgw9lt1ssvy3ufdlmo

[root@worker1 ~]# docker exec a7542ddb01c3 ls /data

serviceweb1.3

Strange! "serviceweb1.3" is a file I created within a replica of a prior service.

From the third replica on node "worker3"
[root@worker3 ~]$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6bd96c2ddcca nginx:latest "nginx -g 'daemon of…" 12 minutes ago Up 12 minutes 80/tcp serviceweb1.1.jjnlx1mceoempu2ayapq6bhpl

[ajn@worker3 ~]$ sudo docker exec 6bd96c2ddcca ls /data

service-ctn1-worker1
service-ctn2-worker1

These are files created by different replicas of a prior service, since removed. So this replica did not connect to the same storage as the replica on "manager1" (NOK)

Let's create a file from here:
[ajn@worker3 ~]$ sudo docker exec 6bd96c2ddcca touch /data/serviceweb1.1
[ajn@worker3 ~]$ sudo docker exec 6bd96c2ddcca ls /data

service-ctn1-worker1
service-ctn2-worker1
serviceweb1.1

Now, back on the "manager1" node, let's inspect the volume from the replica there:

[ajn@manager1 ~]$ docker exec b89267982ae2 ls /data

fromcontainer1
fromcontainer2
fromserver
serviceweb1.2

Only the container replica on "manager1" seems to connect successfully to the NFS share.

How come some replicas (on the manager) connect correctly to the NFS volume, while the replicas on other nodes each connect to a different volume with different data?
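A likely explanation: `docker volume create` only creates the volume on the node where it runs. On the other nodes there is no volume named `native-nfs`, so the service mount silently creates a plain local volume with that name there, which is why those replicas see stale files from earlier experiments. One hedged fix is to declare the NFS options in the service itself, so every node creates an identical NFS-backed volume:

```shell
docker service create --name serviceweb1 -p 80:80 --replicas 3 --detach=false \
  --mount 'type=volume,source=native-nfs,target=/data,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/nfs/general,volume-opt=o=addr=192.168.0.146' \
  nginx
```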

Worker: Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader.

When trying to re-join a manager after a reboot:

Important condition: ==> #2
For training purposes, I am using a limited hosted-node service with dynamic IP addresses that are renewed each time the servers are rebooted, and they are rebooted after 2 hours.

docker swarm join --token SWMTKN-1-06xygghtoyyvchg736y1wet3li909s8nrzol8oubov9q85icu2-8vmn0d4rb3g34bw7uqycpsisw 172.31.27.4:2377

Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.

The manager is reachable:
ping 172.31.27.4

PING 172.31.27.4 (172.31.27.4) 56(84) bytes of data.
64 bytes from 172.31.27.4: icmp_seq=1 ttl=64 time=1.04 ms
64 bytes from 172.31.27.4: icmp_seq=2 ttl=64 time=0.944 ms
64 bytes from 172.31.27.4: icmp_seq=3 ttl=64 time=0.934 ms
64 bytes from 172.31.27.4: icmp_seq=4 ttl=64 time=0.916 ms
^C
--- 172.31.27.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.916/0.958/1.041/0.061 ms
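Pinging proves layer-3 reachability, but raft membership is tied to the manager addresses recorded at join time; after the hosted nodes reboot with new IPs, the remaining managers cannot form a quorum. A hedged recovery, run on the last healthy manager (this rebuilds a single-manager cluster from its local state; the other nodes then re-join with fresh tokens):

```shell
docker swarm init --force-new-cluster --advertise-addr 172.31.27.4
docker swarm join-token manager
docker swarm join-token worker
```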

Cannot see pulled images on swarm cluster (UCP+DTR)

[ajn@manager1 ~]$ docker pull hello-world

Using default tag: latest
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:66ef312bbac49c39a89aa9bcc3cb4f3c9e7de3788c944158df3ee0176d32b751
Status: Downloaded newer image for hello-world:latest

[ajn@manager1 ~]$ docker images

docker/ucp-controller 2.2.4 0dd8233eed53 2 months ago 38.7MB
docker/ucp-metrics 2.2.4 57797febc06b 2 months ago 79.6MB
docker/ucp-hrm 2.2.4 b7b61db92308 2 months ago 7.14MB
docker/ucp-etcd 2.2.4 d51f71163016 2 months ago 29.6MB
docker/ucp-dsinfo 2.2.4 3172cba25055 2 months ago 548MB
docker/ucp-compose 2.2.4 ed9aee006cf0 2 months ago 39.6MB
docker/ucp-auth-store 2.2.4 ea0bf03a897d 2 months ago 60.5MB
docker/ucp-cfssl 2.2.4 ebdf8c052738 2 months ago 12.2MB
docker/ucp-agent 2.2.4 f149286db4e6 2 months ago 20.2MB
docker/ucp 2.2.4 133ae4e2494d 2 months ago 20MB
docker/ucp-swarm 2.2.4 8f4ee7209ae0 2 months ago 21.6MB
[ajn@manager1 ~]$

Error response from daemon: rpc error: code = Unavailable desc = grpc: the connection is unavailable

Newly formed cluster on VMs.
Workers do not join the cluster:

docker swarm join --token SWMTKN-1-1vusjeqebj058b4i5pmxkreilrdb4vqg44orr83k39l9hbdf3h-dtmhw5pjcjiya992soltgsf92 192.168.0.104:2377

Error response from daemon: rpc error: code = Unavailable desc = grpc: the connection is unavailable

On the manager, the service is listening on the port:
[ajn@manager1 ~]$ ss -n | grep 2377

tcp    ESTAB      0      0      192.168.0.104:52300              192.168.0.104:2377
tcp    ESTAB      0      0      ::ffff:192.168.0.104:2377               ::ffff:192.168.0.104:52300
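`grpc: the connection is unavailable` on join usually means the Swarm ports are blocked between the VMs rather than the daemon being down (the ESTAB lines above are only the manager talking to itself). A sketch for firewalld-based hosts, opening the documented Swarm ports:

```shell
sudo firewall-cmd --permanent --add-port=2377/tcp                      # cluster management
sudo firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp  # node gossip
sudo firewall-cmd --permanent --add-port=4789/udp                      # overlay (VXLAN) traffic
sudo firewall-cmd --reload
```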

DTR installation : FATA[0005] failed to get new conv client: failed to create http client: Failed to get UCP CA: Bad status code fetching CA: 500

When installing DTR

[ajn@worker1 ~]$ docker run -it --rm docker/dtr install --ucp-node worker1.ajnouri.com,cin --ucp-username admin --ucp-url https://manager1.ajnouri.com --ucp-insecure-tls

INFO[0000] Beginning Docker Trusted Registry installation
ucp-password:
INFO[0004] Validating UCP cert
INFO[0004] Connecting to UCP
FATA[0005] failed to get new conv client: failed to create http client: Failed to get UCP CA: Bad status code fetching CA: 500

cannot list logs from a container

After changing the default logging driver from json-file to syslog, I cannot list logs from a container

docker logs httpd2

Error response from daemon: configured logging driver does not support reading
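This is expected behaviour: `docker logs` only works with drivers that keep a local copy of the log stream (json-file and journald); syslog ships the lines away from the daemon. Two hedged workarounds:

```shell
# Keep syslog as the daemon default, but override it for containers
# whose logs you want to read with "docker logs".
docker run -d --name httpd2 --log-driver json-file httpd

# Or read the syslog output on the host (path varies by distro).
sudo tail -f /var/log/syslog    # Debian/Ubuntu; /var/log/messages on RHEL
```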

All Swarm workers have STATUS Down

After a backup & restore of the cluster, all workers have STATUS Down

docker node ls

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
owrqevo5ubcy0aqu8jm324g11 * ajnouri4.mylabserver.com Ready Active Leader
og3e9ccf5pcjb1uinzsln9vl9 ajnouri5.mylabserver.com Down Active
qsc44dvdqic6stir5zqnirp4b ajnouri5.mylabserver.com Down Active
m6ymjtebnlt8dmjduh1s7l3kr ajnouri6.mylabserver.com Down Active
pk4ch7o20gv3allwhpln8vzj8 ajnouri6.mylabserver.com Down Active

Even though the service is restored correctly, it looks like it is hosted on the manager only:

docker service ls

ID NAME MODE REPLICAS IMAGE PORTS
cln2fxtjdnf2 backupweb replicated 2/2 httpd:latest *:80->80/tcp

Mounting NFS Volume: operation not permitted.

Created a volume using the Docker native NFS driver

[ajn@manager1 ~]$ docker volume create --driver local --opt type=nfs --opt o=addr=192.168.0.146,rw --opt device=:/var/nfs/general native-nfs

native-nfs

But when creating a container using the NFS volume I got this error:
[ajn@manager1 ~]$ docker run --rm -it -v native-nfs:/mnt alpine

docker: Error response from daemon: chown /var/lib/docker/volumes/native-nfs/_data: operation not permitted.
See 'docker run --help'.

docker-compose down: containers processes are not stopped properly

$ docker-compose down
Removing elkmysql_kibana_1        ... error
Removing elkmysql_mylogstash_1    ... error
Removing elkmysql_elasticsearch_1 ... error

ERROR: for elkmysql_elasticsearch_1  Driver aufs failed to remove root filesystem 714c154a3ae43c69f247df92bba15e2dffd99e3298251738ac9ba490101d5172: rename /var/lib/docker/aufs/mnt/e4d19daff2ffc2e6cf76c5cea3cd40710b8469ee6926fcbf7e096a886dd26899 /var/lib/docker/aufs/mnt/e4d19daff2ffc2e6cf76c5cea3cd40710b8469ee6926fcbf7e096a886dd26899-removing: device or resource busy

ERROR: for elkmysql_kibana_1  Driver aufs failed to remove root filesystem 31e61d3317fbd60e677580191be4ff7fcc9085eea1edd70b71711069141d6e01: rename /var/lib/docker/aufs/mnt/415678d820ecde3db4a2deb62855283ae68ead700703f0ee2fbc24b0718f9119 /var/lib/docker/aufs/mnt/415678d820ecde3db4a2deb62855283ae68ead700703f0ee2fbc24b0718f9119-removing: device or resource busy

ERROR: for elkmysql_mylogstash_1  Driver aufs failed to remove root filesystem 4e8e70061209e25764c9146df1c71642605331381f48f045dd2883ce31fd3f10: rename /var/lib/docker/aufs/mnt/12e28ce36eaebcd283b997badcac2e5bbe5c6d3a408b09f14da359327905b261 /var/lib/docker/aufs/mnt/12e28ce36eaebcd283b997badcac2e5bbe5c6d3a408b09f14da359327905b261-removing: device or resource busy
Removing network elkmysql_default
docker-compose ps
          Name                        Command                State     Ports
----------------------------------------------------------------------------
elkmysql_elasticsearch_1   /docker-entrypoint.sh elas ...   Exit 143        
elkmysql_kibana_1          /docker-entrypoint.sh kibana     Exit 143        
elkmysql_mylogstash_1      /docker-entrypoint.sh bash       Exit 0  
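Exit 143 actually indicates a signal-driven shutdown: it is 128 + 15, i.e. the process ended on SIGTERM, the signal docker-compose sends on `down`. A tiny demonstration of where 143 comes from, plus the knob for services that need longer to stop (a hedged sketch; `stop_grace_period` is compose file format 2+):

```shell
# A process killed by SIGTERM (signal 15) exits with status 128 + 15 = 143.
sh -c 'kill -TERM $$' || status=$?
echo "$status"    # 143

# If a service needs more than the default 10s to stop before SIGKILL:
# docker-compose stop -t 30
# or in the compose file:   stop_grace_period: 30s
```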
