
ckad-exercises's Introduction


CKAD Exercises

A set of exercises that helped me prepare for the Certified Kubernetes Application Developer (CKAD) exam, offered by the Cloud Native Computing Foundation, organized by curriculum domain. They can also serve as a general way to learn and practice Kubernetes.

Make a mental note of the breadcrumb at the start of each exercise section, so you can quickly locate the relevant document on kubernetes.io. It is recommended that you read the official documents before attempting the exercises below them. During the exam, you are only allowed to refer to official documentation, from a browser window within the exam VM. A Quick Reference box will also contain helpful links for each exam exercise.


If your work is related to multiplayer game servers, check out thundernetes, a brand new project to host game servers on Kubernetes!

Can I PR? There is an error, an alternative way, an extra question, or a solution I can offer

Absolutely! Feel free to PR and edit/add questions and solutions, but please stick to the existing format.

If this repo has helped you in any way, feel free to post on discussions or buy me a coffee!


ckad-exercises's People

Contributors

aamirpinger, abhidp, arpanbalpande, bargom, bmuschko, camba1, danyc97, daviddykeuk, derdanu, dgkanatsios, diegoparra, dirc, erkanerol, iogbole, ismailyenigul, itsmitul9, kakakakakku, katarzyna-zarnowiec, krishna1m, lucassha, nargit, ojongerius, ozkanpakdil, pacozaa, parth-pandit, renanlj, riyazwalikar, seancrasto, vcillusion, vray27


ckad-exercises's Issues

Helm command has mistakes

The Helm install command has a mistake.

Current:

helm install -f myvalues.yaml my redis ./redis

Correct:

helm install -f myvalues.yaml myredis ./redis

Incorrect answer to solution?

https://github.com/dgkanatsios/CKAD-exercises/blob/master/a.core_concepts.md

In the question "Create a busybox pod (using YAML) that runs the command "env". Run it and see the output", one would expect to see something like:

kubectl run busybox --image=busybox --restart=Never --dry-run -o yaml

Then add the command to get a final YAML that looks like this:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    command: ["env"]
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

Apply and use kubectl logs to get the output.
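For what it's worth, newer kubectl can generate equivalent YAML in one step via the --command flag (a sketch, assuming a kubectl version where --dry-run=client is available):

kubectl run busybox --image=busybox --restart=Never --dry-run=client -o yaml --command -- env > pod.yaml
kubectl apply -f pod.yaml
kubectl logs busybox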

Have I completely misunderstood here?

Multi_Container_Pods: Error from server (AlreadyExists): pods "box" already exists

In https://github.com/dgkanatsios/CKAD-exercises/blob/master/b.multi_container_pods.md there is a small error in the second question:

...
# Execute wget
kubectl run box --image=busybox --restart=Never -it --rm -- /bin/sh -c "wget -O- IP"
...

The new pod must have a different name than "box", because a pod named "box" was already created in the first step; otherwise you will get
Error from server (AlreadyExists): pods "box" already exists

So instead of
kubectl run box --image=busybox --restart=Never -it --rm -- /bin/sh -c "wget -O- IP"
use something like
kubectl run box2 --image=busybox --restart=Never -it --rm -- /bin/sh -c "wget -O- IP"

[Help] OCI-compliant container

I'm a Docker newbie and ran into this Docker question:

  1. Write a Dockerfile that creates an image with the following spec:
  • use nginx as the base image
  • adds the following files to /user/share/nginx/:
    • ~/student/text1.html
    • ~/student/text2.html
  • overwrites /user/share/nginx/index.html with ~/student/text2.html
  2. Using the created Dockerfile, build an OCI-compliant container image
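A minimal sketch of an answer, assuming the Dockerfile is created in ~/student so both HTML files are in the build context (the /user/share/nginx/ paths are copied verbatim from the question; the conventional nginx docroot is /usr/share/nginx/html):

cat > ~/student/Dockerfile <<'EOF'
# use nginx as the base image
FROM nginx
# add both files to /user/share/nginx/
COPY text1.html text2.html /user/share/nginx/
# overwrite index.html with text2.html
COPY text2.html /user/share/nginx/index.html
EOF
# build the image; podman build or buildah would work here as well
docker build -t mynginx:1.0 ~/student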

Wrong order of tasks in Pod-Design > Jobs

After the task Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute, the next task currently is:

Create the same job, make it run 5 times, one after the other. Verify its status and delete it

While you could certainly reuse the auto-terminating job and run it 5 times in sequence, the shown answer suggests that the referenced "same job" is actually the one from the earlier task Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'.

I'd put the task "Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute" last in the section Pod-Design > Jobs to avoid confusion.

Misleading solution in 'hitting IP with wget'?

Hi,

I was looking today at https://github.com/dgkanatsios/CKAD-exercises/blob/master/f.services.md and I think I noticed an issue there:

in question:
Get service's ClusterIP, create a temp busybox pod and 'hit' that IP with wget

Second solution is given as:

IP=$(kubectl get svc nginx --template={{.spec.clusterIP}}) # get the IP (something like 10.108.93.130)
kubectl run busybox --rm --image=busybox -it --restart=Never --env="IP=$IP" -- wget -O- $IP:80 --timeout 2

It works only by coincidence, because we use the same environment variable name 'IP' on the host system and inside the pod: the $IP in the wget argument is expanded by the local shell before kubectl even runs. If we change the name of the variable holding the IP address, it fails, since $HA is empty in the local shell:

IP=$(kubectl get svc nginx --template={{.spec.clusterIP}})
kubectl run busybox --rm --image=busybox -it --restart=Never --env="HA=$IP" -- wget -O- $HA:80 --timeout 2
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: container busybox not found in pod busybox_default
wget: bad address ':80'
pod "busybox" deleted
pod default/busybox terminated (Error)

So either we skip the --env part:

IP=$(kubectl get svc nginx --template={{.spec.clusterIP}}) # get the IP (something like 10.108.93.130)
kubectl run busybox --rm --image=busybox -it --restart=Never -- wget -O- $IP:80 --timeout 2

or we use the first solution again, but with the --env variable:

IP=$(kubectl get svc nginx --template={{.spec.clusterIP}})
kubectl run busybox --rm --image=busybox -it --restart=Never --env=HA=$IP -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -O- $HA:80
Connecting to 10.152.183.190:80 (10.152.183.190:80)
writing to stdout
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-                    100% |***********************************************|   615  0:00:00 ETA
written to stdout
/ # exit
pod "busybox" deleted

Thank you for verification.

Regards,
Lukasz

Multi-container Pods issue

In the Multi-container Pods section, the second question is throwing an error:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: box
  name: box
spec:
  initContainers: #
  - args: #
    - /bin/sh #
    - -c #
    - wget -O /work-dir/index.html http://kubernetes.io #
    image: busybox #
    name: box #
    volumeMounts: #
    - name: vol #
      mountPath: /work-dir #
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts: #
    - name: vol #
      mountPath: /usr/share/nginx/html #
  volumes: #
  - name: vol #
    emptyDir: {} #


Wrong solution in 4th exercise of e.observability.md

"Lots of pods are running in qa,alan,test,production namespaces. All of these pods are configured with liveness probe. Please list all pods whose liveness probe are failed in the format of / per line."

The exercise asks for a list of failed prob in the format of :"namespace/pod name" but the given solution makes an awk on the 4th word. The output is:"pod/nameOfThePod>, not Namespace/nameOfThePod

a more clear solution could be: kubectl get events -A | grep -i "Liveness probe failed" | awk '{print $1,$5}'
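A variant that prints exactly <namespace>/<pod name> (an untested sketch, assuming the namespaced object name is the 5th field and carries a pod/ prefix):

kubectl get events -A | grep -i "liveness probe failed" | awk '{sub("pod/","",$5); print $1"/"$5}'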

CronJob param activeDeadlineSeconds vs startingDeadlineSeconds

I think the solution of the exercise https://github.com/dgkanatsios/CKAD-exercises/blob/master/c.pod_design.md#create-a-cron-job-with-image-busybox-that-runs-every-minute-and-writes-date-echo-hello-from-the-kubernetes-cluster-to-standard-output-the-cron-job-should-be-terminated-if-it-takes-more-than-17-seconds-to-start-execution-after-its-schedule should use CronJob.spec.startingDeadlineSeconds instead of CronJob.spec.jobTemplate.spec.activeDeadlineSeconds. We are asked to put a deadline on the time between scheduling and job start, not a deadline on job execution time.
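A sketch showing where each field lives (assuming a cluster where the batch/v1 CronJob API is available):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: busybox-cron
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 17   # deadline between schedule time and job start
  jobTemplate:
    spec:
      # activeDeadlineSeconds here would instead cap the job's running time
      template:
        spec:
          containers:
          - name: busybox
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure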

CKD new question

Create 2 similar deployments and mark them as Blue and Green. Create a Service so that the load is balanced between Blue and Green at a 75%-25% ratio.
Does anyone know how we can do this kind of question?
I tried creating two deployments of different pods, with 3 replicas and 1 replica respectively, and expected the traffic to flow that way, but I was not successful. If you want to try, I have also created a sample for you; look below.
If you know the answer, please let me know below.
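For reference, a sketch of the usual replica-ratio approach: both deployments share a common pod label, a single Service selects that label, and kube-proxy spreads connections across all matching pods, so a 3:1 replica split gives roughly 75%-25% (all names and labels below are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 3                # ~75% of the pods
  selector:
    matchLabels:
      app: myapp
      color: blue
  template:
    metadata:
      labels:
        app: myapp           # shared label, selected by the Service
        color: blue
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  replicas: 1                # ~25% of the pods
  selector:
    matchLabels:
      app: myapp
      color: green
  template:
    metadata:
      labels:
        app: myapp
        color: green
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp               # matches both blue and green pods
  ports:
  - port: 80
    targetPort: 80

Note that kube-proxy balances per connection, roughly at random, so the 75%-25% split only holds on average.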

small contribution in d.configuration.md file

A small correction is required in answer #4.

Q4) Create and display a configmap from a .env file >> the question itself requires no change
Ans4)
Original answer:
kubectl create cm configmap3 --from-file=config.txt
kubectl get cm configmap2 -o yaml --export

Updated answer:
kubectl create cm configmap3 --from-env-file=config.env
kubectl get cm configmap3 -o yaml --export

Question

Would you say that if you're able to solve these problems within a reasonable time, you are well prepared for the CKAD?

This solution isn't working (or is it just me?)

I followed all the steps in this exercise https://github.com/dgkanatsios/CKAD-exercises/blob/master/f.services.md#create-an-nginx-deployment-of-2-replicas-expose-it-via-a-clusterip-service-on-port-80-create-a-networkpolicy-so-that-only-pods-with-labels-access-granted-can-access-the-deployment-and-apply-it

But for some reason a busybox pod without the label is still able to access the nginx service, and I am not sure what the issue is.
kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://nginx:80 --timeout 2
This should time out, but I get index.html back fine.

Thanks

Jobs

Hi,
in the Jobs part, the first exercise misses the version of the perl image.
If you use the latest version, the command will break.
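Pinning a known-good tag should avoid this; a sketch, using the perl:5.34 tag that the kubernetes.io Job example pins at the time of writing:

kubectl create job pi --image=perl:5.34 -- perl -Mbignum=bpi -wle 'print bpi(2000)'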

Not clear suggestion in deployment canary exercise

It isn't clear that the user cannot run this command directly from their own shell.

Test if the deployment was successful:

curl $(kubectl get svc my-app-svc -o jsonpath="{.spec.clusterIP}")
version-1

In fact, they should run a pod instead and then run this command in interactive mode.
Another way to test the canary exercise is through a NodePort service.
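A sketch of testing from inside the cluster with a throwaway pod (busybox ships wget rather than curl; the service name is taken from the exercise):

IP=$(kubectl get svc my-app-svc -o jsonpath="{.spec.clusterIP}")
kubectl run tmp --image=busybox --rm -it --restart=Never -- wget -O- $IP
# expect output like version-1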

Certificates / setting up TLS.

I've heard from multiple people that you've got to set up TLS / certificates for the exam. Just thought I'd give you a heads up 👍

Question about ports is incorrect

I am referring to Create a pod with image nginx called nginx and allow traffic on port 80

The solution is kubectl run nginx --image=nginx --restart=Never --port=80, but this isn't correct.

This command results in the following YAML:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

containerPort does not permit or deny traffic, it simply exposes the port. Exposing ports is a good practice because it serves as documentation but if port 80 was not exposed traffic to port 80 would still work.

What I did to make this question correct and challenging was to reword the question to be

Create a pod with image nginx called nginx and deny all traffic except port 80

From here I created the pod as per the example but then had to create a Network Policy to block all other ports
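A sketch of such a policy (assuming the cluster's CNI plugin actually enforces NetworkPolicy):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-allow-80-only
spec:
  podSelector:
    matchLabels:
      run: nginx          # the label kubectl run puts on the pod
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80            # allow ingress only on port 80, from anywhere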

activeDeadlineSeconds vs. startingDeadlineSeconds

Issue: CronJob - terminate if the Job takes more than 17 seconds

For the question on the scheduled CronJob which must be terminated within 17 seconds, my opinion is that we must use activeDeadlineSeconds = 17 and not startingDeadlineSeconds = 17.

activeDeadlineSeconds
Source: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations

another way to terminate a Job is by setting an active deadline. Do this by setting the .spec.activeDeadlineSeconds field of the Job to a number of seconds. The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded.

startingDeadlineSeconds
Source: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations

For example, suppose a CronJob is set to schedule a new Job every one minute beginning at 08:30:00, and its startingDeadlineSeconds field is not set. If the CronJob controller happens to be down from 08:29:00 to 10:21:00, the job will not start as the number of missed jobs which missed their schedule is greater than 100.

To illustrate this concept further, suppose a CronJob is set to schedule a new Job every one minute beginning at 08:30:00, and its startingDeadlineSeconds is set to 200 seconds. If the CronJob controller happens to be down for the same period as the previous example (08:29:00 to 10:21:00,) the Job will still start at 10:22:00. This happens as the controller now checks how many missed schedules happened in the last 200 seconds (ie, 3 missed schedules), rather than from the last scheduled time until now.

`--export` is deprecated

My Environment

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-gke.12", GitCommit:"188432a69210ca32cafded81b4dd1c063720cac0", GitTreeState:"clean", BuildDate:"2019-11-07T19:27:01Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}

Overview

https://github.com/dgkanatsios/CKAD-exercises/blob/master/a.core_concepts.md#get-this-pods-yaml-without-cluster-specific-information

$ kubectl get pods nginx-xxxxxx-xxxxxx -o yaml --export
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
...

As far as I can tell, there is no direct replacement.
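One community workaround (a krew plugin, so not something you can rely on in the exam) is kubectl-neat, which strips cluster-specific fields from the output:

kubectl krew install neat
kubectl get pods nginx-xxxxxx-xxxxxx -o yaml | kubectl neat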

sharing hostpath

IINM, sharing the PV as done in the shared-volume exercise will only work if the pods run on the same node. If the pods are on different nodes, you will find the passwd file missing when you go to the second pod.

One way to ensure this would be to set the nodeName property in the second pod to the same value as the first pod's node (determined by the scheduler). This overrides the scheduler and will always work, even if the reader is running in a cluster with multiple worker nodes.
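A sketch, assuming the first pod is called busybox as in the exercise:

# find the node the first pod landed on
kubectl get pod busybox -o jsonpath='{.spec.nodeName}'
# then pin the second pod to that node in its spec:
#   spec:
#     nodeName: <node name from above>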

f.services.md network policy example does not block the service

In the exercise for applying a network policy, a busybox pod without the label also returns the nginx home page:

kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://nginx:80                       # This should not work but it works fine

kubectl run busybox --image=busybox --rm -it --restart=Never --labels=access=true -- wget -O- http://nginx:80  # This should be fine

I am using the latest minikube in Ubuntu 16.04

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

missing tag

It doesn't work like this: "--image=dgkanatsios/simpleapp"

The tag is missing: "--image=dgkanatsios/simpleapp:2.0"

shared PVC between pods not working

I'm doing the State Persistence exercises, and when it comes to having two pods both accessing the same PersistentVolumeClaim, I can't see /etc/foo/passwd from the second pod. All the steps up to that point seem to work fine, and my YAML files look the same.
pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myvolume
spec:
  accessModes: [ReadWriteOnce,ReadWriteMany]
  capacity:
    storage: 10Gi
  hostPath:
    path: /etc/foo
  storageClassName: normal

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  storageClassName: normal
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 4Gi
$ kubectl get pvc
NAME    STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    myvolume   10Gi       RWO,RWX        normal         30m

$ kubectl get pv myvolume 
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
myvolume   10Gi       RWO,RWX        Retain           Bound    mynamespace/mypvc   normal                  34m

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busysleep
  name: busysleep
spec:
  containers:
  - args:
    - sleep
    - "3600"
    image: busybox
    name: busysleep
    resources: {}
    volumeMounts:
    - mountPath: /etc/foo
      name: myvol
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  volumes:
  - name: myvol
    persistentVolumeClaim:
      claimName: mypvc
status: {}
$ kubectl exec busysleep -it -- cp /etc/passwd /etc/foo/passwd
$ kubectl exec busysleep -it -- ls /etc/foo
passwd

Then change pod.yaml to have metadata.name = busysleep2 with nothing else changed and create it.

$ kubectl exec busysleep2 -it -- ls /etc/foo

No output from the above command.

Am I missing something?
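A likely cause: hostPath volumes are node-local, so both pods must land on the same node to see the same files. This can be checked with:

kubectl get pods busysleep busysleep2 -o wide   # compare the NODE column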

Multi-container Pods > 2nd exercise > Getting error following the solution provided

I am following the steps in the 2nd exercise's answer, and the pod ends up with "Init:Error" in the STATUS column, so I can't finish the exercise.

When I check the logs:
kubectl logs box
The output is:
Error from server (BadRequest): container "nginx" in pod "box" is waiting to start: PodInitializing

When I describe the pod box:
kubectl describe pod box
The output is:
Name:         box
Namespace:    default
Priority:     0
Node:         kubenode-2/10.132.0.4
Start Time:   Wed, 24 Feb 2021 15:07:58 +0000
Labels:       run=box
Annotations:  cni.projectcalico.org/podIP: 192.168.77.91/32
              cni.projectcalico.org/podIPs: 192.168.77.91/32
Status:       Pending
IP:           192.168.77.91
IPs:
  IP:  192.168.77.91
Init Containers:
  box:
    Container ID:  docker://6cdd084115dddc1f66aa397ef83e0770a651516a0ae9f6bca955c6c095c526ed
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:c6b45a95f932202dbb27c31333c4789f45184a744060f6e569cc9d2bf1b9ad6f
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      wget -O /work-dir/index.html http://neverssl.com/online
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 24 Feb 2021 15:11:42 +0000
      Finished:     Wed, 24 Feb 2021 15:12:12 +0000
    Ready:          False
    Restart Count:  4
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bwqjq (ro)
      /work-dir from vol (rw)
Containers:
  nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bwqjq (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  vol:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  default-token-bwqjq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bwqjq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m36s                  default-scheduler  Successfully assigned default/box to kubenode-2
  Normal   Pulled     4m34s                  kubelet            Successfully pulled image "busybox" in 1.622840177s
  Normal   Pulled     4m2s                   kubelet            Successfully pulled image "busybox" in 1.445794935s
  Normal   Pulled     3m15s                  kubelet            Successfully pulled image "busybox" in 1.497044653s
  Normal   Created    2m14s (x4 over 4m33s)  kubelet            Created container box
  Normal   Pulled     2m14s                  kubelet            Successfully pulled image "busybox" in 1.458443236s
  Normal   Started    2m13s (x4 over 4m33s)  kubelet            Started container box
  Warning  BackOff    65s (x7 over 3m31s)    kubelet            Back-off restarting failed container
  Normal   Pulling    54s (x5 over 4m35s)    kubelet            Pulling image "busybox"
  Normal   Pulled     53s                    kubelet            Successfully pulled image "busybox" in 1.461627372s

kubectl run is deprecated with deployment generator

Here, you put this answer:

kubectl run nginx --image=nginx:1.7.8 --replicas=2 --port=80

Since kubectl run is now deprecated for creating deployments, this is the alternative:

kubectl create deployment nginx  --image=nginx:1.7.8  --dry-run -o yaml | sed 's/replicas: 1/replicas: 2/g'  | sed 's/image: nginx:1.7.8/image: nginx:1.7.8\n        ports:\n        - containerPort: 80/g' | kubectl apply -f -

Unfortunately, kubectl create deployment does not support --replicas or --port. That's why the alternative got so long 😁

I will open PR or anyone can open it.
Thanks!
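For what it's worth, newer kubectl releases added --replicas (and, later, --port) to kubectl create deployment, which shortens this considerably; a sketch, assuming a kubectl where both flags exist:

kubectl create deployment nginx --image=nginx:1.7.8 --replicas=2 --port=80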

second pod not always on first pod's node in pv/pvc example

I don't have rights to submit a new branch and PR to your repo, so here are the details:

Create a second pod (deployed to the same node the first pod is on) which is identical with the one you just created (you can easily do it by changing the 'name' property on pod.yaml). Connect to it and verify that '/etc/foo' contains the 'passwd' file. Delete pods to cleanup

show

Create the second pod, called busybox2:

# get details about the first pod
# take the node name from the 7th column
# remove the title/header line
# add a label to that node
kubectl get pods busybox -o wide | awk '{print $7}' | grep -v NODE | xargs -I{} kubectl label node {} use=thisone
vim pod.yaml
# change 'metadata.name: busybox' to 'metadata.name: busybox2'
# add a nodeSelector with the same label as the node to the yaml
kubectl create -f pod.yaml
kubectl exec busybox2 -- ls /etc/foo # will show 'passwd'
# cleanup
kubectl delete po busybox busybox2

Network Policies quizlet doesn't prohibit access to the protected service

Hi,

I'll look at this once I'm done with my CKAD exam. This set of commands doesn't protect the target service/pods:
kubectl run nginx --image=nginx --replicas=2 --port=80 --expose

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx # pick a name
spec:
  podSelector:
    matchLabels:
      run: nginx # selector for the pods
  ingress: # allow ingress traffic
  - from:
    - podSelector: # from pods
        matchLabels: # with this label
          access: granted

kubectl create -f <file>

kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://nginx:80 --timeout 2

The call goes through to the Pod. The network policy is ignored. Still looking at it, maybe a PR on the way.

Liveness probe

During my recent CKAD exam I was asked a similar question, but I'm not sure how the solution you have provided matches the question. Please clarify.

As per the question, we need to get the namespace and pod name in the format <namespace>/<pod name>, but the answer you have given returns all columns. Please clarify.

"Lots of pods are running in qa,alan,test,production namespaces. All of these pods are configured with liveness probe. Please list all pods whose liveness probe are failed in the format of <namespace>/<pod name> per line."

[HELP] Network policy not working [Services and Networking]

In the last question of "Services and Networking":

The network policy does not seem to work: I am able to get responses for all of the busybox commands below:

controlplane $ kubectl get po --show-labels 
NAME                    READY   STATUS    RESTARTS   AGE   LABELS
nginx-f89759699-hdd27   1/1     Running   0          16m   app=nginx,pod-template-hash=f89759699
nginx-f89759699-pcgbq   1/1     Running   0          16m   app=nginx,pod-template-hash=f89759699

controlplane $ cat npolicy.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
#  policyTypes:
 # - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: granted

Network policy created

controlplane $ kubectl get netpol -o wide
NAME                  POD-SELECTOR   AGE
test-network-policy   app=nginx      9m26s


controlplane $ kubectl run busybox --image=busybox --rm -it --restart=Never -- wget  http://nginx:80 --timeout 2
Connecting to nginx:80 (10.109.201.22:80)
saving to 'index.html'
index.html           100% |********************************|   612  0:00:00 ETA
'index.html' saved
pod "busybox" deleted


controlplane $ kubectl run busybox --image=busybox --rm -it --restart=Never --labels=access=granted -- wget  http://nginx:80 --timeout 2
Connecting to nginx:80 (10.109.201.22:80)
saving to 'index.html'
index.html           100% |********************************|   612  0:00:00 ETA
'index.html' saved
pod "busybox" deleted
controlplane $ 
controlplane $ kubectl run busybox --image=busybox --rm -it --restart=Never --labels=app=db -- wget  http://nginx:80 --timeout 2
Connecting to nginx:80 (10.109.201.22:80)
saving to 'index.html'
index.html           100% |********************************|   612  0:00:00 ETA
'index.html' saved
pod "busybox" deleted
controlplane $ 

Can you identify what is missing here. TIA
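A common cause of NetworkPolicy "not working" like this is that the cluster's network plugin does not enforce NetworkPolicy at all; the policy objects are accepted but silently ignored. On minikube, for example, starting with a policy-capable CNI should make the policy take effect (a sketch; the --cni flag exists in recent minikube versions):

minikube start --cni=calico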

all resources must be specified before annotation changes

Hi,
When I ran the command kubectl annotate po nginx1 nginx2 nginx3 description='my description', I got the error all resources must be specified before annotation changes.
The three pods all have Running status. I googled the error message, but I have not found similar issues. Can you please help to suggest what is causing this error?

nginx1 1/1 Running 0 47m
nginx2 1/1 Running 0 47m
nginx3 1/1 Running 0 47m

This is the version in my environment.
Client Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.0-beta.1", GitCommit:"40a411a61af315f955f11ee97397beecf432ff4f", GitTreeState:"clean", BuildDate:"2021-03-09T09:23:56Z", GoVersion:"go1.16", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}

Deployments cannot be created directly with port and replicas in new versions of kubernetes

So there are issues with the following questions in f.services.md:

  1. Create a deployment called foo using image 'dgkanatsios/simpleapp' (a simple server that returns hostname) and 3 replicas. Label it as 'app=foo'. Declare that containers in this pod will accept traffic on port 8080 (do NOT create a service yet)
  2. Create an nginx deployment of 2 replicas, expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with labels 'access: granted' can access the deployment and apply it

Instead, we need to create a YAML with --dry-run and update it.
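A sketch of that workflow for the first question (kubectl create deployment already labels the pods app=foo):

kubectl create deployment foo --image=dgkanatsios/simpleapp --dry-run=client -o yaml > foo.yaml
# edit foo.yaml: set spec.replicas to 3 and add containerPort: 8080 to the container
kubectl apply -f foo.yaml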

a.core_concepts.md minor issue

For the question Create the pod that was just described using YAML
When you create the pod.yaml file with
kubectl run nginx --image=nginx --restart=Never --dry-run=client -n mynamespace -o yaml > pod.yaml

The resulting YAML file will have the namespace listed in it, i.e.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
  namespace: mynamespace
....

So, there's no need to specify the namespace when creating the pod with
kubectl create -f pod.yaml

--rm flag giving error

I am getting the below error when I pass the --rm flag as shown in the solution for the 3rd question of the Services and Networking section.

error: --rm should only be used for attached containers.

Any idea why?
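For what it's worth, the error message itself points at the fix: --rm only works for attached containers, so it has to be paired with -it; a sketch:

kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://nginx:80 --timeout 2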

Exercise in State section not always verifiable

[Screenshot of the exercise statement, taken 2020-04-13]

The second exercise in the above screenshot is not always verifiable.

The problem happens because the two pods sometimes get created on different nodes, and so they end up mounting /etc/foo on different nodes.

I think an informative note should be added to the problem statement.

"--restart=Never" Still needed?

I don't think we need the --restart=Never option anymore with the latest K8s version, as no deployment is created when we run:

kubectl run nginx --image=nginx

k8s@k8s-m:~$ kubectl run nginx --image=nginx
pod/nginx created
k8s@k8s-m:~$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          4s
k8s@k8s-m:~$
k8s@k8s-m:~$ kubectl get deployments.apps
No resources found in default namespace.
