apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: new-replica-set
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      name: busybox-pod
  template:
    metadata:
      labels:
        name: busybox-pod
    spec:
      containers:
      - command:
        - sh
        - -c
        - echo Hello Kubernetes! && sleep 3600
        image: busybox777
        imagePullPolicy: Always
        name: busybox-container
#Create a YAML file then apply it
vim deployment-1.yml
kubectl apply -f deployment-1.yml
kubectl get replicaset.apps
The image is busybox777.
kubectl describe replicaset.apps
kubectl get replicaset.apps
I think the pods are not ready due to an error fetching the image from the container registry, which is Docker Hub (there is no such image as busybox777).
kubectl describe replicaset.apps #OR
kubectl get pods
Still 4 pods, because of the desired number of pods specified in the YAML file.
kubectl delete po new-replica-set-69bsf
Because the ReplicaSet controller on the control-plane node checks the worker nodes against the desired number of pods specified in the YAML file; if any pod goes down for any reason, it automatically creates another one.
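The underlying image error still has to be fixed by hand. A minimal sketch of the fix, assuming plain busybox is the intended image:

```yaml
spec:
  template:
    spec:
      containers:
      - name: busybox-container
        image: busybox   # assumed fix: busybox777 does not exist on Docker Hub
```

Note that for a ReplicaSet, applying the edited template does not restart existing pods; they must be deleted so the controller recreates them with the new image.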
apiVersion: v1
kind: ReplicaSet
metadata:
  name: replicaset-1
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
The issue in the YAML file is the apiVersion: for ReplicaSets the correct apiVersion is apps/v1.
The right answer is:
apiVersion: apps/v1
Edited the file and applied it; now the ReplicaSet is running.
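For reference, the corrected manifest in full; everything is unchanged except the apiVersion:

```yaml
apiVersion: apps/v1   # ReplicaSet belongs to the apps/v1 API group, not v1
kind: ReplicaSet
metadata:
  name: replicaset-1
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
```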
kubectl create deployment my-first-deployment --image=nginx:alpine --dry-run=client -oyaml > dep1.yml
kubectl apply -f dep1.yml
kubectl scale deployment my-first-deployment --replicas=3
#OR, best practice: modify the YAML file itself to set replicas to 3
kubectl scale deployment my-first-deployment --replicas=2
#OR, best practice: modify the YAML file itself to set replicas to 2
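The declarative alternative mentioned in the comments is a one-line change in dep1.yml followed by kubectl apply; only spec.replicas changes:

```yaml
spec:
  replicas: 2   # edit this value instead of running kubectl scale
```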
vim dep1.yml
#change the image
kubectl apply -f dep1.yml
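A sketch of what the edited container section of dep1.yml might look like; the new tag nginx:1.25-alpine is an assumption, since the notes only say to change the image:

```yaml
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.25-alpine   # hypothetical new tag, replacing nginx:alpine
```

Because this is a Deployment, kubectl apply triggers a rolling update to the new image.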
kubectl delete deployment my-first-deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      name: busybox-pod
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: busybox-pod
    spec:
      containers:
      - command:
        - sh
        - -c
        - echo Hello Kubernetes! && sleep 3600
        image: busybox888
        imagePullPolicy: Always
        name: busybox-container
vim dep2.yml
kubectl apply -f dep2.yml
kubectl get deployment
6 ReplicaSets in total
kubectl get pods
6 pods
Only 2 pods are ready
kubectl describe deployment
Error pulling the image from Docker Hub: no such image as busybox888.
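A hedged fix, assuming plain busybox is the intended image; since this is a Deployment, applying the change triggers a rolling update of the pods (no manual pod deletion needed, unlike a bare ReplicaSet):

```yaml
spec:
  template:
    spec:
      containers:
      - name: busybox-container
        image: busybox   # assumed fix: busybox888 does not exist on Docker Hub
```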
No YAML file was provided; the issue is simply that the YAML file is missing. :'D
kubectl get svc
The default Service type is ClusterIP
kubectl describe svc
The target port is 6443/TCP
Labels: component=apiserver
        provider=kubernetes
1 endpoint: 172.30.1.2:6443
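For contrast with the default kubernetes Service inspected above, a minimal sketch of a ClusterIP Service; all names and ports here are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service    # hypothetical name
spec:
  # no 'type' field, so it defaults to ClusterIP
  selector:
    app: example           # hypothetical pod label
  ports:
  - port: 80               # port the Service listens on
    targetPort: 8080       # container port it forwards to
```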
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-webapp-deployment
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      name: simple-webapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: simple-webapp
    spec:
      containers:
      - image: kodekloud/simple-webapp:red
        imagePullPolicy: IfNotPresent
        name: simple-webapp
        ports:
        - containerPort: 8080
          protocol: TCP
vim deployment.yml
kubectl apply -f deployment.yml
kubectl describe deployment
kodekloud/simple-webapp:red
Name: webapp-service
Type: NodePort
targetPort: 8080
port: 8080
nodePort: 30080
The syntax of this YAML file is wrong, and it is missing key components.
The updated version:
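A sketch of the corrected Service manifest, assembled from the values listed above; the selector is an assumption based on the Deployment's pod label name: simple-webapp:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  namespace: default
spec:
  type: NodePort
  selector:
    name: simple-webapp   # assumed to match the Deployment's pod label
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080
```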
vim scv.yml
kubectl apply -f scv.yml