

kubernetes-in-action's People

Contributors

krasish, luksa, machine424, okandas, sergius71, stgleb


kubernetes-in-action's Issues

Chapter 11.2.1: Questions about NodePort when sending a request to one of the nodes

Environments

  • Cluster version: v1.21.4
  • Virtual Machine: Oracle VM
  • CNI: Calico

Problem

I get different results from what the book describes.

From the book:

"Did you also notice where the pod thought the request came from? Look at the Client IP at the end of the response. That's not the IP of the computer from which I sent the request. You may have noticed that it's the IP of the node I sent the request to. I explain why this is and how you can prevent it in section 11.2.3."

[node info]

root@k8s-m:~# kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-m    Ready    control-plane,master   86d   v1.21.4   192.168.100.10    <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   docker://20.10.13
k8s-w1   Ready    <none>                 86d   v1.21.4   192.168.100.101   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   docker://20.10.13
k8s-w2   Ready    <none>                 86d   v1.21.4   192.168.100.102   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   docker://20.10.13
k8s-w3   Ready    <none>                 86d   v1.21.4   192.168.100.103   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   docker://20.10.13

[Request to nodes result]

# Requesting k8s-w1 node
$ curl 192.168.100.101:30080

==== REQUEST INFO
Request processed by Kiada 0.5 running in pod "kiada-001" on node "k8s-w1".
Pod hostname: kiada-001; Pod IP: 172.16.228.71; Node IP: 192.168.100.101; Client IP: ::ffff:10.0.2.15

==== REQUEST INFO
Request processed by Kiada 0.5 running in pod "kiada-003" on node "k8s-w2".
Pod hostname: kiada-003; Pod IP: 172.16.46.4; Node IP: 192.168.100.102; Client IP: ::ffff:172.16.228.64

==== REQUEST INFO
Request processed by Kiada 0.5 running in pod "kiada-canary" on node "k8s-w3".
Pod hostname: kiada-canary; Pod IP: 172.16.197.4; Node IP: 192.168.100.103; Client IP: ::ffff:172.16.228.64

It's not a node IP (192.168.100.10*). I have no idea where these IPs (10.0.2.15 and 172.16.228.64) came from.

The value is also fixed per node, though I don't know where it comes from:
"k8s-w1" ↔ 10.0.2.15
"k8s-w2" ↔ 172.16.228.64
"k8s-w3" ↔ 172.16.228.64


[All resource Information]

root@k8s-m:~# kubectl get all -A -o wide
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES
default       pod/nginx                                      1/1     Running   0          21d   172.16.228.70     k8s-w1   <none>           <none>
john          pod/kiada-001                                  2/2     Running   0          8h    172.16.228.71     k8s-w1   <none>           <none>
john          pod/kiada-002                                  2/2     Running   0          8h    172.16.197.3      k8s-w3   <none>           <none>
john          pod/kiada-003                                  2/2     Running   0          8h    172.16.46.4       k8s-w2   <none>           <none>
john          pod/kiada-canary                               2/2     Running   0          8h    172.16.197.4      k8s-w3   <none>           <none>
john          pod/nginx                                      1/1     Running   0          22d   172.16.228.68     k8s-w1   <none>           <none>
john          pod/quiz                                       2/2     Running   0          23d   172.16.46.3       k8s-w2   <none>           <none>
john          pod/quote-001                                  2/2     Running   0          23d   172.16.46.1       k8s-w2   <none>           <none>
john          pod/quote-002                                  2/2     Running   0          23d   172.16.228.65     k8s-w1   <none>           <none>
john          pod/quote-003                                  2/2     Running   0          23d   172.16.197.2      k8s-w3   <none>           <none>
john          pod/quote-canary                               2/2     Running   0          23d   172.16.197.1      k8s-w3   <none>           <none>
kube-system   pod/calico-kube-controllers-6fd7b9848d-k7v4w   1/1     Running   0          86d   172.16.29.3       k8s-m    <none>           <none>
kube-system   pod/calico-node-nz65k                          1/1     Running   0          86d   192.168.100.102   k8s-w2   <none>           <none>
kube-system   pod/calico-node-pd9pt                          1/1     Running   0          86d   192.168.100.10    k8s-m    <none>           <none>
kube-system   pod/calico-node-w9rf5                          1/1     Running   0          86d   192.168.100.103   k8s-w3   <none>           <none>
kube-system   pod/calico-node-z82zf                          1/1     Running   0          86d   192.168.100.101   k8s-w1   <none>           <none>
kube-system   pod/coredns-558bd4d5db-d78qm                   1/1     Running   0          86d   172.16.29.1       k8s-m    <none>           <none>
kube-system   pod/coredns-558bd4d5db-kzpkt                   1/1     Running   0          86d   172.16.29.2       k8s-m    <none>           <none>
kube-system   pod/etcd-k8s-m                                 1/1     Running   0          86d   192.168.100.10    k8s-m    <none>           <none>
kube-system   pod/kube-apiserver-k8s-m                       1/1     Running   0          86d   192.168.100.10    k8s-m    <none>           <none>
kube-system   pod/kube-controller-manager-k8s-m              1/1     Running   7          86d   192.168.100.10    k8s-m    <none>           <none>
kube-system   pod/kube-proxy-h2c5v                           1/1     Running   0          86d   192.168.100.103   k8s-w3   <none>           <none>
kube-system   pod/kube-proxy-kt7kv                           1/1     Running   0          86d   192.168.100.102   k8s-w2   <none>           <none>
kube-system   pod/kube-proxy-qgpjp                           1/1     Running   0          86d   192.168.100.101   k8s-w1   <none>           <none>
kube-system   pod/kube-proxy-znxn4                           1/1     Running   0          86d   192.168.100.10    k8s-m    <none>           <none>
kube-system   pod/kube-scheduler-k8s-m                       1/1     Running   6          86d   192.168.100.10    k8s-m    <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP                      86d   <none>
default       service/my-service   ClusterIP   10.98.28.169     <none>        80/TCP                       22d   app=MyApp
john          service/kiada        NodePort    10.99.142.250    <none>        80:30080/TCP,443:30443/TCP   8h    app=kiada
john          service/quiz         ClusterIP   10.104.206.158   <none>        80/TCP                       23d   app=quiz
john          service/quote        ClusterIP   10.97.190.49     <none>        80/TCP                       23d   app=quote
kube-system   service/kube-dns     ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       86d   k8s-app=kube-dns

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS    IMAGES                           SELECTOR
kube-system   daemonset.apps/calico-node   4         4         4       4            4           kubernetes.io/os=linux   86d   calico-node   docker.io/calico/node:v3.22.1    k8s-app=calico-node
kube-system   daemonset.apps/kube-proxy    4         4         4       4            4           kubernetes.io/os=linux   86d   kube-proxy    k8s.gcr.io/kube-proxy:v1.21.10   k8s-app=kube-proxy

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                      SELECTOR
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           86d   calico-kube-controllers   docker.io/calico/kube-controllers:v3.22.1   k8s-app=calico-kube-controllers
kube-system   deployment.apps/coredns                   2/2     2            2           86d   coredns                   k8s.gcr.io/coredns/coredns:v1.8.0           k8s-app=kube-dns

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE   CONTAINERS                IMAGES                                      SELECTOR
kube-system   replicaset.apps/calico-kube-controllers-6fd7b9848d   1         1         1       86d   calico-kube-controllers   docker.io/calico/kube-controllers:v3.22.1   k8s-app=calico-kube-controllers,pod-template-hash=6fd7b9848d
kube-system   replicaset.apps/coredns-558bd4d5db                   2         2         2       86d   coredns                   k8s.gcr.io/coredns/coredns:v1.8.0           k8s-app=kube-dns,pod-template-hash=558bd4d5db

Chapter 6 permission error with sh script

Hi,

After building the fortune image, when I deploy the pod to my GKE cluster I get a CrashLoopBackOff error, which usually means I have a container that stops immediately.

When I check the logs, I see something like this:
kubectl logs fortune -c html-generator
/bin/sh: 1: /bin/fortuneloop.sh: Permission denied

So it seems my script does not have the execute permission, so I added it in the Dockerfile:
FROM ubuntu:latest

RUN apt-get update ; apt-get -y install fortune
ADD fortuneloop.sh /bin/fortuneloop.sh
RUN chmod +x /bin/fortuneloop.sh

ENTRYPOINT /bin/fortuneloop.sh

Now it works well.
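
For what it's worth, an equivalent fix (my own variation, not from the book) is to set the execute bit on the script in the build context before building, since ADD preserves file permissions:

# Alternative sketch: make the script executable on the host instead of in a
# RUN layer; ADD copies the permission bit into the image.
chmod +x fortuneloop.sh
docker build -t fortune .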

Hope it helps!
Seb

1st Ingress example

Hi,
Thanks a lot for this really great book.

I am facing an issue with the first Ingress example in Chapter 5, and I would really appreciate any pointers. Please note that I am using minikube.

First, let me check something I may have missed: the book does not explicitly state this, but I think I needed to create the kubia-nodeport NodePort service (from kubia-svc-nodeport.yaml). Correct?

I created the NodePort service, and I created the Ingress using the provided YAML. Now if I run kubectl describe ingress kubia, I notice that the Backends column shows kubia-nodeport:80 (), which I don't think is good. I can also see that the IP 10.0.2.15 was assigned to the Ingress.

Now, if I run kubectl describe svc kubia-nodeport, I can see that an IP is assigned to the service (10.102.123.188), and there are 3 IP addresses listed under Endpoints (172.17.0.5:8080, 172.17.0.6:8080, 172.17.0.7:8080), which are the pods' IP addresses.

The problem is that curling kubia.example.com (10.0.2.15) does not work. Please note that curling (and browsing) http://192.168.99.100:30123/ works; 192.168.99.100 is minikube's IP address.
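
A debugging sketch (my own assumptions, not from the book): the Ingress object only does something if an ingress controller is actually running in the cluster, and the test request has to reach minikube's IP while carrying the host name the Ingress rule expects:

# Enable minikube's nginx ingress controller and check that its pod is Running
# (older minikube runs it in kube-system, newer versions in ingress-nginx).
minikube addons enable ingress
kubectl get pods --all-namespaces | grep ingress
# Send the request to minikube's IP while presenting the Ingress host name,
# without touching /etc/hosts.
curl --resolve kubia.example.com:80:$(minikube ip) http://kubia.example.com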

[Chapter 8] kubectl proxy image using arm64 architecture

FROM alpine
RUN apk update && apk add curl && curl -L -O https://dl.k8s.io/v1.8.0/kubernetes-client-linux-amd64.tar.gz && tar zvxf kubernetes-client-linux-amd64.tar.gz kubernetes/client/bin/kubectl && mv kubernetes/client/bin/kubectl / && rm -rf kubernetes && rm -f kubernetes-client-linux-amd64.tar.gz
ADD kubectl-proxy.sh /kubectl-proxy.sh
ENTRYPOINT /kubectl-proxy.sh

The Dockerfile above does not work in an arm64 CPU environment.

So I replaced line 2 with the code below.

RUN apk update && apk add curl && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl" && chmod 700 kubectl && mv kubectl /
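
A variant of that line that picks the architecture at build time instead of hard-coding arm64 might look like this (a sketch, assuming the usual uname -m names):

RUN apk update && apk add curl \
 && ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/') \
 && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/${ARCH}/kubectl" \
 && chmod 700 kubectl && mv kubectl /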

2.3.2 - Can't create Service object because no rc

The following app deployment code in 2.3.1 doesn't work because the generator flag has been deprecated:

kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1

Therefore, we are forced to run the command without that flag, as follows:

kubectl run kubia --image=luksa/kubia --port=8080

This means that we don't end up creating a ReplicationController and thus, the following code from 2.3.2 for deploying a Service object of type LoadBalancer doesn't work:

k expose rc kubia --type=LoadBalancer --name kubia-http

What is the correct way to run the code from 2.3.1 to create a ReplicationController that would allow us to follow along with the remaining examples in the chapter?
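
Not an authoritative answer, but one workaround that keeps the rest of the chapter working is to create the ReplicationController from a manifest instead of kubectl run (a sketch that reuses the book's kubia image and labels):

# Create the ReplicationController directly, then expose it exactly as in 2.3.2.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 1
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
EOF
kubectl expose rc kubia --type=LoadBalancer --name kubia-http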


Chapter 6: A pod using a PersistentVolumeClaim volume: mongodb-pod-pvc.yaml

The mongodb pod won't start:

$ kubectl describe pod mongodb
Name:         mongodb
Namespace:    default
Priority:     0
Node:         multinode-demo/192.168.58.2
Start Time:   Sun, 31 Jul 2022 13:29:07 +0300
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  mongodb:
    Container ID:   
    Image:          mongo
    Image ID:       
    Port:           27017/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data/db from mongodb-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwmkl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mongodb-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb-pvc
    ReadOnly:   false
  kube-api-access-qwmkl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    94m                  default-scheduler  Successfully assigned default/mongodb to multinode-demo
  Warning  FailedMount  51m (x2 over 87m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[mongodb-data], unattached volumes=[kube-api-access-qwmkl mongodb-data]: timed out waiting for the condition
  Warning  FailedMount  2m4s (x20 over 92m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[mongodb-data], unattached volumes=[mongodb-data kube-api-access-qwmkl]: timed out waiting for the condition
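
A debugging sketch (my assumption about the cause, not from the book): FailedMount events like these usually mean the claim never bound to a PersistentVolume, which the following commands would show:

# Check whether the claim is Bound or still Pending, and why.
kubectl get pvc mongodb-pvc
kubectl get pv
kubectl describe pvc mongodb-pvc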

Decrease image size

Dear Luksa, thank you for your great work.
Could you please use the alpine versions of the Docker images to decrease their size?

Chapter 4: time-limited-batch-job.yaml

When I run this manifest, the pod fails as expected. However, I cannot see the failed pod after it is done, as if it had been deleted from Kubernetes.

Is this normal behavior?
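
A hedged way to check (assuming the Job is named time-limited-batch-job, after the manifest file): the Job controller terminates and removes the pod once the active deadline is exceeded, but the Job object itself keeps the failure condition:

# Look for a Failed condition with reason DeadlineExceeded in the Job's status.
kubectl describe job time-limited-batch-job
kubectl get job time-limited-batch-job -o jsonpath='{.status.conditions}'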

Error Multipath Kubernetes Ingress, Cannot GET /

Hi All,

I'm trying to learn about multipath in Kubernetes Ingress. First of all, I'm using minikube for this tutorial. I created a simple Web API using Node.js.

NodeJS Code

In this Node.js app, I created a simple Web API with routing and a controller.

server.js

const express = require ('express');
const routes = require('./routes/tea'); // import the routes

const app = express();

app.use(express.json());

app.use('/', routes); //to use the routes

const listener = app.listen(process.env.PORT || 3000, () => {
    console.log('Your app is listening on port ' + listener.address().port)
})

routes/tea.js

const express = require('express');
const router  = express.Router();
const teaController = require('../controllers/tea');

router.get('/tea', teaController.getAllTea);
router.post('/tea', teaController.newTea);
router.delete('/tea', teaController.deleteAllTea);

router.get('/tea/:name', teaController.getOneTea);
router.post('/tea/:name', teaController.newComment);
router.delete('/tea/:name', teaController.deleteOneTea);

module.exports = router;

controllers/tea.js

const os = require('os');

//GET '/tea'
const getAllTea = (req, res, next) => {
    res.json({message: "GET all tea, " + os.hostname() });
};

//POST '/tea'
const newTea = (req, res, next) => {
    res.json({message: "POST new tea, " + os.hostname()});
};

//DELETE '/tea'
const deleteAllTea = (req, res, next) => {
    res.json({message: "DELETE all tea, " + os.hostname()});
};

//GET '/tea/:name'
const getOneTea = (req, res, next) => {
    res.json({message: "GET 1 tea, os: " + os.hostname() + ", name: " + req.params.name});
};

//POST '/tea/:name'
const newComment = (req, res, next) => {
    res.json({message: "POST 1 tea comment, os: " + os.hostname() + ", name: " + req.params.name});
};

//DELETE '/tea/:name'
const deleteOneTea = (req, res, next) => {
    res.json({message: "DELETE 1 tea, os: " + os.hostname() + ", name: " + req.params.name});
};

//export controller functions
module.exports = {
    getAllTea, 
    newTea,
    deleteAllTea,
    getOneTea,
    newComment,
    deleteOneTea
};

Dockerfile

After that, I created a Docker image using this Dockerfile:

FROM node:18.9.1-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]

Kubernetes Manifest

Then I created a ReplicaSet and a Service for this Docker image:

foo-replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: foo
spec:
  selector:
    matchLabels:
      app: foo
  replicas: 3
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: emriti/tea-app:1.0.0
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP

foo-svc-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: foo-nodeport
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 31234
  selector:
    app: foo

all-ingress.yaml

An Ingress for both the foo and bar backends:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foobar
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: foobar.com
      http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: foo-nodeport
                port:
                  number: 3000  
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: bar-nodeport
                port:
                  number: 3000  

Additional setup

I also did the following:

  • add 127.0.0.1 foobar.com to /etc/hosts
  • running minikube tunnel

After that, I ran curl foobar.com/tea and got this error:

curl : Cannot GET /
At line:1 char:1
+ curl foobar.com/foo
+ ~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

I'm wondering if someone has experienced a similar problem and already has an answer. Secondly, how do I debug the Ingress when I hit issues like this?

The code and manifests can be accessed in this repo.

Thank you!
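
Not a confirmed diagnosis, but a sketch of the most likely culprit: with rewrite-target: /$1 and a plain /foo prefix path there is no capture group, so the nginx ingress controller rewrites /foo/tea to /, and the Express app has no / route (hence "Cannot GET /"). A regex path with a capture group forwards the remainder of the path, roughly like this:

# Sketch only: regex paths plus a capture group so /foo/tea reaches the backend
# as /tea (the quoted heredoc keeps the shell from expanding $1).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foobar
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: foobar.com
      http:
        paths:
          - path: /foo/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: foo-nodeport
                port:
                  number: 3000
          - path: /bar/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: bar-nodeport
                port:
                  number: 3000
EOF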

Chapter 6: Fortune Image seem to be failing during pull

Hi,
Pulling the image luksa/fortune seems to be failing. Below are excerpts of kubectl output while trying the example:

$ kubectl create -f fortune-pod.yaml
pod/fortune created
$ kubectl port-forward fortune 8080:80
error: unable to forward port because pod is not running. Current status=Pending
$ kubectl get pod
NAME      READY   STATUS             RESTARTS   AGE
fortune   1/2     ImagePullBackOff   0          37m
$ kubectl logs fortune
Defaulted container "html-generator" out of: html-generator, web-server
Error from server (BadRequest): container "html-generator" in pod "fortune" is waiting to start: trying and failing to pull image
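
Two hedged next steps (not from the book): the pod's events show the exact reason the pull fails, and if the image really is unavailable you can build it yourself and point the pod at your own copy (the <your-dockerhub-id> placeholder and the fortune directory path are assumptions on my part):

# The Events section at the bottom shows the concrete pull error.
kubectl describe pod fortune
# Build and push your own copy of the image, then change the image name in
# fortune-pod.yaml accordingly.
docker build -t <your-dockerhub-id>/fortune Chapter06/fortune
docker push <your-dockerhub-id>/fortune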

Ch. 8: ErrImagePull when creating a pod with curl.yaml

When I create the pod with curl.yaml, I get an error:

controlplane $ k get pod
NAME   READY   STATUS         RESTARTS   AGE
curl   0/1     ErrImagePull   0          5s

Here are the events:

Events:
  Type     Reason     Age               From                   Message
  ----     ------     ----              ----                   -------
  Normal   Scheduled  18s               default-scheduler      Successfully assigned default/curl to controlplane
  Normal   BackOff    16s               kubelet, controlplane  Back-off pulling image "tutum/curl"
  Warning  Failed     16s               kubelet, controlplane  Error: ImagePullBackOff
  Normal   Pulling    3s (x2 over 18s)  kubelet, controlplane  Pulling image "tutum/curl"
  Warning  Failed     2s (x2 over 16s)  kubelet, controlplane  Failed to pull image "tutum/curl": rpc error: code = Unknown desc = Error response from daemon: pull access denied for tutum/curl, repository does not exist or may require 'docker login'
  Warning  Failed     2s (x2 over 16s)  kubelet, controlplane  Error: ErrImagePull

It looks like the image tutum/curl no longer exists. What can I do to handle this error?
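
Not from the book, but one pragmatic workaround: the tutum/curl image appears to have been removed from Docker Hub, so any small image that ships curl can stand in for it (curlimages/curl is my assumption of a suitable substitute):

# Run a throwaway curl pod and open a shell in it.
kubectl run curl --image=curlimages/curl --restart=Never --command -- sleep 9999999
kubectl exec -it curl -- sh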

2.3.6 Kubernetes Dashboard not available in GKE

Hi,

According to section 2.3.6 of the book, to get the Kubernetes Dashboard URL we have to type kubectl cluster-info, but my cluster-info output does not include the dashboard. Below is my output:

kasanitej@cloudshell:~ (kubernetes-320016)$ kubectl cluster-info
Kubernetes control plane is running at https://35.244.33.159
GLBCDefaultBackend is running at https://35.244.33.159/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://35.244.33.159/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.244.33.159/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

Please help: what am I doing wrong?

Ch. 8: Downward API pod not creating

I'm on the first pod creation for the Downward API in Chapter 8, and it's giving me the errors below:

Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "downward": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"\"": unknown

Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "downward": Error response from daemon: OCI runtime create failed: runc did not terminate sucessfully: unknown

Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "downward": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"read init-p: connection reset by peer\"": unknown

I've tried creating other, normal pods, and they are created fine without any issue.

Could you please advise on this error? Thanks.

Docker build not working

Hi,

I am reading the book and am now on Chapter 2. As suggested, I have followed the steps and created the Dockerfile and app.js files.

Whenever I try to build the image, it just hangs and does not proceed.

Below is the Dockerfile content:

FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]

Below is the app.js file content:

const http = require('http');
const os = require('os');
console.log("Kubia server starting...");
var handler = function(request, response) {
console.log("Received request from " + request.connection.remoteAddress);
response.writeHead(200);
response.end("You've hit " + os.hostname() + "\n");
};
var www = http.createServer(handler);
www.listen(8080);

Jigars-MacBook-Pro:~ jigars$ docker build -t kubia:latest -f Dockerfile .

It doesn't go any further from here. I am not sure what the issue is, and I don't find the image when I run docker images.

docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
busybox      latest   e1ddd7948a1c   2 weeks ago     1.16MB
node         7        d9aed20b68a4   12 months ago   660MB

Let me know what I am missing. Awaiting your reply.
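
A hedged guess at the cause: docker build . sends the entire current directory to the Docker daemon as the build context, and the prompt shows the build was started from the home directory, so Docker may be busy uploading gigabytes of context rather than actually hanging. A sketch of the usual fix:

# Build from a directory that contains only the files the image needs.
mkdir -p ~/kubia
cp Dockerfile app.js ~/kubia/
cd ~/kubia
docker build -t kubia:latest .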

Chapter 8

I am receiving the following error when I run:

kubectl create -f downward-api-env.yaml
kubectl create -f downward-api-volume.yaml

Error:

╰─ kubectl describe pod downward
Name:               downward
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.99.100
Start Time:         Sat, 23 Nov 2019 06:29:59 -0600
Labels:             foo=bar
Annotations:        key1: value1
                    key2:
                      multi
                      line
                      value
Status:             Running
IP:                 172.17.0.7
Containers:
  main:
    Container ID:  docker://2e932a24c20818c552575214caec31cf9fb42ec130b841369a88838fbba73a05
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      9999999
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown
      Exit Code:    128
      Started:      Sat, 23 Nov 2019 06:32:26 -0600
      Finished:     Sat, 23 Nov 2019 06:32:26 -0600
    Ready:          False
    Restart Count:  4
    Limits:
      cpu:     100m
      memory:  4Mi
    Requests:
      cpu:        15m
      memory:     100Ki
    Environment:  <none>
    Mounts:
      /etc/downward from downward (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r67m6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  downward:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.name -> podName
      metadata.namespace -> podNamespace
      metadata.labels -> labels
      metadata.annotations -> annotations
      requests.cpu -> containerCpuRequestMilliCores
      limits.memory -> containerMemoryLimitBytes
  default-token-r67m6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r67m6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                     From               Message
  ----     ------                  ----                    ----               -------
  Normal   Scheduled               <unknown>               default-scheduler  Successfully assigned default/downward to minikube
  Normal   SandboxChanged          2m46s (x12 over 2m58s)  kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  2m45s (x13 over 2m59s)  kubelet, minikube  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "downward": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown
╰─ minikube version
minikube version: v1.5.2
commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad

╰─ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

I'm not exactly sure where or why this is failing. If possible, please point me in the right direction. Thanks! @luksa

Hi. Struggling with the filter xpath example

I'm starting to work through the examples in the book, and I've found the filter XPath example difficult to get working. I've tried several iterations at this point, and I've also verified my XPath online.

I've tried several variants (starting with the filter on the "test attribute" provided in the book). (screenshots omitted)

In general, my filter drops all the messages. They make it to the outbox if I eliminate the filter.

Here is a filter that I think should work (checking the value of an element rather than an attribute).

The XML (inbox/order4.xml)

<?xml version="1.0" encoding="UTF-8"?>
<person><city>London</city></person>

The route

from("file:data/inbox?noop=true")
                        .log("Received order: ${header.CamelFileName}")
                        .filter().xpath("/person/city='London'")
                        .log("Please process valid order: ${header.CamelFileName}")
                        .to("file:data/outbox");

The output (screenshot omitted)

I should see that order4 is processed and placed in the outbox, yet it isn't. (screenshot omitted)

The online validation using an XPath tester: (screenshot omitted)

What am I missing?

Chapter 6: wrong fsType

Doing the steps from the book:

kubectl create -f mongodb-pv-gcepd.yaml
kubectl create -f mongodb-pvc.yaml
kubectl create -f mongodb-pod-pvc.yaml

Fails with

kubectl describe po mongodb

....

me.MountDevice failed for volume "mongodb-pv" : failed to mount the volume as "nfs4", it already contains ext4. Mount error: mount failed: exit status 32

This is because https://github.com/luksa/kubernetes-in-action/blob/master/Chapter06/mongodb-pv-gcepd.yaml has nfs4 instead of ext4, which is what GCE creates by default and what is used in the book.
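
For reference, a sketch of the fix the issue implies (assuming the manifest's gcePersistentDisk volume declares fsType: nfs4): change it to ext4 and recreate the PersistentVolume:

# Correct the filesystem type, then recreate the PV from the fixed manifest.
sed -i 's/fsType: nfs4/fsType: ext4/' mongodb-pv-gcepd.yaml
kubectl delete -f mongodb-pv-gcepd.yaml
kubectl create -f mongodb-pv-gcepd.yaml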

Chapter 10 - PV & PVC

I created 3 GCP persistent disks as described (pv-a, pv-b, pv-c), and then created the Kubernetes PersistentVolumes as well. Once I create the StatefulSet, the pods do not bind to the created volumes and instead create their own PVs.

(screenshot omitted)

I checked GCP as well; all storage disks are in the same zone as the master.
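
A hedged debugging sketch (my assumption about the cause, not from the book): if the pre-created PersistentVolumes declare a storageClassName, capacity, or access mode that doesn't match the StatefulSet's volumeClaimTemplates, the default StorageClass steps in and dynamically provisions new PVs instead. These commands show what each claim actually bound to:

# Compare the pre-created PVs with the PVCs the StatefulSet generated, and
# check whether a default StorageClass is triggering dynamic provisioning.
kubectl get pv
kubectl get pvc
kubectl get storageclass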

Chapter 2.3.1: generator flag no longer working

Hi,

Just started reading your book, but the command kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1 throws this error:

Error: unknown flag: --generator
See 'kubectl run --help' for usage

FYI, luksa/kubia is replaced with my Docker Hub registry image 496620/server.

As per this Stack Overflow question (https://stackoverflow.com/questions/52890718/kubectl-run-is-deprecated-looking-for-alternative), the generator is deprecated. If that is the case, which command should I use to create the ReplicationController?
