docker-development-youtube-series's People

Contributors

agentblacksmith, benchwang, dependabot[bot], derlev, eriksegecs, ewongy, feralweasel, fluxcdbot, fortinj66, gaurav04, larsskj, leenx, manju369, marcel-dempers, marceldempers, marcosvm, myishay, orcema, oversampling, panki989, phatblat, rajalokan, rdtechie, salmanwaheed, schirrms, schmiddim, strangiato, treehopper, user449993, zizizach


docker-development-youtube-series's Issues

Monitoring with prometheus

Hello Marcel,
I am watching your series about Prometheus and k8s monitoring, and in the video "Kubernetes Cluster Monitoring for beginners" you use some templates from kube-prometheus for the Grafana configuration.
Would it be possible to mention in the README in the grafana folder which files you picked up? Because you changed the names, it's hard to tell just by looking which files you wrote yourself and which ones you copied.

Thanks!

got error when running 'docker-compose build'

Following these instructions:

https://github.com/marcel-dempers/docker-development-youtube-series/blob/master/kubernetes/servicemesh/introduction.md

and got multiple errors

Successfully built 1427689292c5
Successfully tagged aimvector/nodejs:1.0.0
Building spring-java

Traceback (most recent call last):
File "compose\cli\main.py", line 67, in main
File "compose\cli\main.py", line 126, in perform_command
File "compose\cli\main.py", line 302, in build
File "compose\project.py", line 468, in build
File "compose\project.py", line 450, in build_service
File "compose\service.py", line 1147, in build
compose.service.BuildError: (<Service: spring-java>, {'message': 'failed to reach build target debug in Dockerfile'})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "docker-compose", line 3, in
File "compose\cli\main.py", line 78, in main
TypeError: can only concatenate str (not "dict") to str
[9436] Failed to execute script docker-compose
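One possible cause, stated as a guess: the compose file requests `target: debug` for the spring-java service, but that service's Dockerfile may not define a stage named `debug`. A quick check (the Dockerfile path is an assumption):

```shell
# A multi-stage target only exists if the Dockerfile has a matching
# `FROM <image> AS debug` line; print it, or report that it is missing.
grep -in 'as debug' java/Dockerfile || echo 'no debug stage found'
```

If nothing matches, either add a `FROM <base> AS debug` stage to that Dockerfile or remove `target: debug` from the service definition in docker-compose.yaml.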

Error when creating a secret in version "v1"

Hi Marcel!

I've been following the Drone.io tutorial on YouTube: https://www.youtube.com/watch?v=myCcJJ_Fk10 and I'm having an issue while creating the secret.

I think I've followed everything through and rechecked everything, but I'm still getting this error :/

Error from server (BadRequest): error when creating "server/droneserver-secret.yaml": Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data: decode base64: illegal base64 data at input byte 8, error found in #10 byte of ...|e=disable","DRONE_GI|...,

Hope you can help me.

Best regards.
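The `illegal base64 data` error usually means a value under `data:` in the secret manifest was pasted as plain text, or was encoded with a trailing newline. A sketch, with a placeholder value:

```shell
# `echo` appends a newline that corrupts the base64 payload; printf does not.
printf '%s' 'postgres://user:pass@db:5432/drone?sslmode=disable' | base64
# Alternatively, put the raw strings under `stringData:` instead of `data:`
# and let Kubernetes encode them for you.
```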

Grafana deployment fail with errors mounting configmaps

I have tried everything in your prometheus-operator example. Everything works up to the point where Grafana is deployed. I have tried the code for 1.14.8, 1.15-1.17, and 1.18.4 in your repo. I have also tried the code in the prometheus-operator repo. When Grafana is deployed, the volumes containing dashboard configuration fail to mount with errors like:

MountVolume.SetUp failed for volume "grafana-dashboards" : failed to sync configmap cache: timed out waiting for the condition

All the configmaps exist

I have tried it in kind and in Minikube. I'm at a complete loss for how to resolve this.
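Not a definitive fix, but "failed to sync configmap cache: timed out" is often transient or points at node/permission trouble rather than the manifests themselves; these checks (the `monitoring` namespace is an assumption) may narrow it down:

```shell
# Recent events often show why the kubelet could not sync the configmap cache.
kubectl -n monitoring get events --sort-by=.lastTimestamp | tail -20
# Confirm the dashboard configmaps the deployment mounts actually exist.
kubectl -n monitoring get configmaps | grep grafana-dashboard
```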

Connection refused

Hi,

Thank you for this great guide. I'm able to connect to the management dashboard for RabbitMQ, but when I try to connect to the RabbitMQ instance using the Pika client (Python) I get AMQPConnectorSocketConnectError: ConnectionRefusedError(111, 'Connection refused') with the following code:

RABBITMQ_HOST='localhost'
RABBITMQ_USER='guest'
RABBITMQ_PASS='guest'
RABBITMQ_PORT=5672

rabbitmq_credentials = pika.PlainCredentials(RABBITMQ_USER, RABBITMQ_PASS)
rabbitmq_connection_params = pika.ConnectionParameters(RABBITMQ_HOST, RABBITMQ_PORT, '/', rabbitmq_credentials)

Tried other combinations with port and no credentials but with no success. Is there something I'm missing? How do I connect to the running RabbitMQ instance?
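`Connection refused` on localhost:5672 usually means the AMQP port simply isn't published where Pika is looking. A sketch of the two usual cases (container/service/namespace names below are assumptions):

```shell
# If RabbitMQ runs in Docker, check which host port maps to 5672 and use
# that port in ConnectionParameters:
docker port rabbitmq
# If it runs in Kubernetes, forward the AMQP port to localhost first:
kubectl -n rabbits port-forward svc/rabbitmq 5672:5672
```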

CrashLoopBackOff error for alertmanager-main pod

Hi Marcel,

I was following the prometheus alert manager process as outlined in the readme under K8S v1.18.4
The prometheus operator install & node exporter worked fine, but after applying the alertmanager yamls I noticed the pod was getting a CrashLoopBackOff error.

The actual error being reported by the pod log was: "level=error ts=2020-11-05T13:32:07.815Z caller=coordinator.go:124 component=configuration msg="Loading configuration file failed" file=/etc/alertmanager/config/alertmanager.yaml err="yaml: unmarshal errors:\n line 24: field receiver not found in type config.plain"

How do I resolve this?

Thanks
Pushp
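For what it's worth, `field receiver not found in type config.plain` is typically an indentation slip: `receiver` (singular) belongs under `route:`, while the top level defines `receivers:` (plural, a list). A minimal shape that parses (names are placeholders):

```shell
# Write a skeleton config with the two fields at their correct levels.
cat > /tmp/alertmanager.yaml <<'EOF'
route:
  receiver: default
receivers:
- name: default
EOF
```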

`hashicorp/vault/example-apps/` do not work with newer versions of Kubernetes - 1.21.1

In current versions of Kubernetes (1.21.1), this error is given after the basic example app (hashicorp/vault/example-apps/basic-secret/deployment.yaml) is deployed:

2021-12-07T16:02:45.583Z [INFO]  auth.handler: authenticating
2021-12-07T16:02:45.591Z [ERROR] auth.handler: error authenticating: error="Error making API request.

URL: PUT https://vault-example.vault-example.svc:8200/v1/auth/kubernetes/login
Code: 500. Errors:

* claim "iss" is invalid" backoff=2.903828541

kubectl -n vault-example get pods gives Init:0/1 as the pod's status.

I am guessing that the injector sidecar cannot authenticate with the vault. I can't solve this issue.

hashicorp/vault/example-apps/ only work with Kubernetes 1.17 at the moment.
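The commonly cited workaround, offered as an assumption about this setup: Kubernetes 1.21 changed the service account token issuer, so Vault's kubernetes auth config must either match the new issuer or skip issuer validation. A sketch, run inside the Vault pod after logging in:

```shell
# disable_iss_validation tells the kubernetes auth backend to stop checking
# the token's "iss" claim (paths are the in-pod service account defaults).
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    disable_iss_validation=true
```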

TLS error in basic secret injection video

Hey Marcel,

I was trying to replicate the setup from the video "Basic secret injection for microservices on Kubernetes using Vault".
I got to the point of starting the example app deployment & found that the pod starts but stays in the "Init:0/1" status.

The vault injector pod logs show that it received the mutating webhook call:

kubectl -n vault-example logs vault-example-agent-injector-7cdd648787-tv4lb
2020-08-12T22:55:14.523Z [INFO] handler: Starting handler..
Listening on ":8080"...
Updated certificate bundle received. Updating certs...
2020-08-12T23:08:00.894Z [INFO] handler: Request received: Method=POST URL=/mutate?timeout=30s

The logs from the vault pod show a TLS error:

kubectl -n vault-example logs vault-example-0
==> Vault server configuration:

         Api Address: https://10.244.0.6:8200
                 Cgo: disabled
     Cluster Address: https://10.244.0.6:8201
          Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
           Log Level: info
               Mlock: supported: true, enabled: false
       Recovery Mode: false
             Storage: file
             Version: Vault v1.3.1

2020-08-12T22:50:10.226Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
==> Vault server started! Log data will stream in below:

2020-08-12T22:50:50.416Z [INFO] core.cluster-listener: starting listener: listener_address=[::]:8201
2020-08-12T22:50:50.416Z [INFO] core.cluster-listener: serving cluster requests: cluster_listen_address=[::]:8201
2020-08-12T22:50:50.416Z [INFO] core: post-unseal setup starting
2020-08-12T22:50:50.417Z [INFO] core: loaded wrapping token key
2020-08-12T22:50:50.417Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2020-08-12T22:50:50.418Z [INFO] core: successfully mounted backend: type=system path=sys/
2020-08-12T22:50:50.418Z [INFO] core: successfully mounted backend: type=identity path=identity/
2020-08-12T22:50:50.419Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-08-12T22:50:50.421Z [INFO] core: successfully enabled credential backend: type=token path=token/
2020-08-12T22:50:50.421Z [INFO] core: restoring leases
2020-08-12T22:50:50.421Z [INFO] rollback: starting rollback manager
2020-08-12T22:50:50.422Z [INFO] identity: entities restored
2020-08-12T22:50:50.422Z [INFO] expiration: lease restore complete
2020-08-12T22:50:50.422Z [INFO] identity: groups restored
2020-08-12T22:50:50.422Z [INFO] core: post-unseal setup complete
2020-08-12T22:50:50.423Z [INFO] core: vault is unsealed
2020-08-12T23:01:10.547Z [INFO] core: enabled credential backend: path=kubernetes/ type=kubernetes
2020-08-12T23:05:51.876Z [INFO] core: successful mount: namespace= path=secret/ type=kv
2020-08-12T23:06:38.902Z [INFO] http: TLS handshake error from 127.0.0.1:52998: remote error: tls: unknown certificate

And the logs from the init container show an error trying to authenticate with vault:

kubectl -n vault-example logs basic-secret-74b4fdbcdc-2zmtl -c vault-agent-init
==> Vault server started! Log data will stream in below:

==> Vault agent configuration:
2020-08-12T23:08:01.568Z [INFO] sink.file: creating file sink
2020-08-12T23:08:01.568Z [INFO] sink.file: file sink configured: path=/home/vault/.token mode=-rw-r-----
2020-08-12T23:08:01.568Z [INFO] auth.handler: starting auth handler
2020-08-12T23:08:01.568Z [INFO] auth.handler: authenticating
2020-08-12T23:08:01.568Z [INFO] sink.server: starting sink server

2020-08-12T23:08:01.568Z [INFO] template.server: starting template server
Cgo: disabled
Log Level: info
Version: Vault v1.3.1

2020/08/12 23:08:01.569034 [INFO] (runner) creating new runner (dry: false, once: false)
2020/08/12 23:08:01.569618 [WARN] (clients) disabling vault SSL verification
2020/08/12 23:08:01.569658 [INFO] (runner) creating watcher
2020-08-12T23:08:11.580Z [ERROR] auth.handler: error authenticating: error="Put https://vault-example.vault-example.svc:8200/v1/auth/kubernetes/login: dial tcp: lookup vault-example.vault-example.svc on 10.96.0.10:53: read udp 10.244.0.8:50821->10.96.0.10:53: read: connection refused" backoff=2.156164762
2020-08-12T23:08:13.703Z [INFO] auth.handler: authenticating
2020-08-12T23:08:23.712Z [ERROR] auth.handler: error authenticating: error="Put https://vault-example.vault-example.svc:8200/v1/auth/kubernetes/login: dial tcp: lookup vault-example.vault-example.svc on 10.96.0.10:53: read udp 10.244.0.8:41477->10.96.0.10:53: i/o timeout" backoff=2.29257713

In terms of TLS - I used the exact TLS config/process indicated in your ssl_generate_self_signed.txt file.

Any suggestions would be greatly appreciated.

Thanks

Tim
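Reading the logs above, the repeated failures are DNS errors (lookups against kube-dns at 10.96.0.10 being refused), not certificate problems; the lone TLS handshake line is likely just a local probe. Worth checking cluster DNS before the TLS config:

```shell
# Is CoreDNS running?
kubectl -n kube-system get pods -l k8s-app=kube-dns
# Can a throwaway pod resolve the vault service name?
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- \
  nslookup vault-example.vault-example.svc
```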

build publisher.go error

Hi, thanks for this guide. Can you please provide assistance with this error:

publisher.go:6:2: no required module provides package github.com/julienschmidt/httprouter: working directory is not part of a module

I get it for all the modules in the import that are from GitHub. I tried googling this error message but there aren't any hits. Would appreciate your help. Thanks.
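This error comes from Go's module mode (Go 1.16+), which refuses to resolve remote imports outside a module. A sketch, with an example module path:

```shell
# Create go.mod in the directory containing publisher.go, then fetch deps.
go mod init example.com/publisher
go mod tidy      # resolves github.com/julienschmidt/httprouter and friends
go build ./...
```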

redis won't come back after node shutdown

The setup works fine with sentinels managing the master for Redis, but when the node dies or shuts down (in the case where all redis pods and sentinels are on the same node), for some reason Kubernetes won't be able to spin up redis/sentinel pods on another node. Any idea?

sentinel get-master-addr-by-name mymaster

Hi, I tried to delete a non-master pod first and then this regex did not match because it returned the full hostname. On the other hand when the master is deleted get-master-addr-by-name mymaster returns the IP address.

MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"

I've changed my regex to match both scenarios (note that POSIX grep -E has no \d, so [0-9] is used):

MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '(^redis-[0-9]+)|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})')"

If MASTER is empty you get a config error and then the pod is not able to be alive again.

Don't know if it's worth a PR.
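A quick offline check that the amended pattern really matches both shapes `get-master-addr-by-name` can return (a pod hostname or a bare IP):

```shell
pattern='(^redis-[0-9]+)|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})'
echo 'redis-0.redis.redis.svc.cluster.local' | grep -E "$pattern"
echo '10.244.0.6' | grep -E "$pattern"
```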

Kubernetes Logs to Splunk using Fluentd

Hello,

Would you help me find the right Fluentd container/build image for Splunk? I do not see one at all. Or do we have an alternative method to ship the Kubernetes logs to Splunk?

Thanks.
Jino.

Could not connect to Redis at redis-06379: Name does not resolve

Hi,

While deploying sentinel I am getting the below error:

finding master at redis-0.redis.redis.svc.cluster.local
Could not connect to Redis at redis-0.redis.redis.svc.cluster.local:6379: Name does not resolve
no master found
finding master at redis-1.redis.redis.svc.cluster.local
Could not connect to Redis at redis-1.redis.redis.svc.cluster.local:6379: Name does not resolve
no master found
finding master at redis-2.redis.redis.svc.cluster.local
Could not connect to Redis at redis-2.redis.redis.svc.cluster.local:6379: Name does not resolve
no master found
port 5000
sentinel monitor mymaster 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster a-very-complex-password-here

Here are my nodes:

kubectl get nodes
NAME STATUS ROLES AGE VERSION
rbqn01h02 Ready controlplane,etcd,worker 295d v1.17.5
rbqn01h03 Ready controlplane,etcd,worker 295d v1.17.5
rbqn01h04 Ready controlplane,etcd,worker 295d v1.17.5
rbqn04h02 Ready controlplane,etcd,worker 295d v1.17.5

Here are my redis pods:

kubectl get pods -n redis-cluster1
NAME READY STATUS RESTARTS AGE
redis-0 1/1 Running 0 22h
redis-1 1/1 Running 0 22h
redis-2 1/1 Running 0 22h

I even tried the sentinel deployment by adding the above-mentioned nodes in the init container startup script, here:

REDIS_PASSWORD=a-very-complex-password-here
nodes=

Please suggest what to add in nodes so that sentinel can connect to the redis cluster.
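One thing that stands out, offered as a guess: the pods live in namespace redis-cluster1, but the script resolves names in the `redis` namespace (`redis-0.redis.redis.svc...`). The DNS form is `<pod>.<headless-service>.<namespace>.svc.cluster.local`, so with a headless service named `redis` (an assumption) the list would be:

```shell
# pod.service.namespace.svc.cluster.local for each replica
nodes="redis-0.redis.redis-cluster1.svc.cluster.local redis-1.redis.redis-cluster1.svc.cluster.local redis-2.redis.redis-cluster1.svc.cluster.local"
set -- $nodes
echo "$# nodes configured"
```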

[Question] Best practice for version tracking of k8s tools and services?

Awesome repo and awesome videos, thanks!

Over the past week or so, I've seen quite a few tools and services in my k8s cluster get new releases, e.g. metrics-server and HashiCorp's Vault.

What is the best practice for managing or tracking these changes? Old school would be something like a spreadsheet and some sort of service that checks for releases or security issues?

Kubernetes and friends really seems to move very fast.

Edit the prometheus config file

Is there an option to edit the Prometheus config file to add remote read and write in this folder

"docker-development-youtube-series/prometheus-monitoring/kubernetes/1.18.4/"

/etc/prometheus/config_out/prometheus.env.yaml - How is this file generated?
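To the second question: with the Prometheus Operator, /etc/prometheus/config_out/ is (to my understanding) rendered by the config-reloader sidecar from an operator-managed secret, so edits to the file itself get overwritten. Remote read/write belong on the Prometheus custom resource instead; a sketch, assuming the resource is named `k8s` in namespace `monitoring`:

```shell
kubectl -n monitoring patch prometheus k8s --type merge -p '
spec:
  remoteWrite:
  - url: "https://remote-storage.example.com/api/v1/write"
  remoteRead:
  - url: "https://remote-storage.example.com/api/v1/read"
'
```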

CrashLoopBackOff error for node-exporter pod

Hi Marcel,

I was following the prometheus monitoring process as outlined in the readme under K8S v1.18.4
The prometheus operator install worked fine, but after installing the node exporter yaml I noticed the pod was getting a CrashLoopBackOff error.

The actual error being reported by the pod was: "path / is mounted on / but it is not a shared or slave mount"
I commented out the volume & volume mount & the pod started

Is this volume required & if so why?

Thanks

Tim
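For context: node-exporter mounts the host root filesystem with mount propagation, which requires `/` to be a shared mount on the node; some hosts leave it private, producing exactly this message. A sketch of checking and fixing it on the node, rather than dropping the volume:

```shell
# Show the propagation mode of the root mount.
findmnt -o TARGET,PROPAGATION /
# Make it (recursively) shared so the pod's volume mount can succeed.
sudo mount --make-rshared /
```

The volume is what lets node-exporter report host (rather than container) filesystem metrics, so commenting it out silently degrades those metrics.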

standalone-prometheus is unable to scrape example-app

Hello,

Following your prometheus-operator example under 1.14.8, because there isn't one in 1.18.4. When I add the standalone-prometheus instance, the example-app target shows the error http://:5000/metrics. See attached screenshot. The code is straight out of your repo. I have double-checked all of the labels and they all look correct. Is this because the app is not serving the /metrics endpoint? This is the example from the kubernetes/ deployments, services, secrets, configmaps, right? Did something change here? Is there another example application to try?

I tried using your python-application on port 80 and received a different error, shown in the second screenshot. Is there any way to update an example that works?

[Screenshot: Screen Shot 2021-08-19 at 5 13 11 PM]

[Screenshot: Screen Shot 2021-08-19 at 5 43 58 PM]

The "ResolvePackageAssets" task failed unexpectedly. [/work/work.csproj]

Hey Marcel, this is an awesome series, thanks! However, when I try to build the C# project, I see a failure on step 11/12.

Step 11/12 : RUN dotnet publish --no-restore --output /out/ --configuration Release
---> Running in 0a645eeb91a7
Microsoft (R) Build Engine version 16.0.450+ga8dc7f1d34 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: The "ResolvePackageAssets" task failed unexpectedly. [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: NuGet.Packaging.Core.PackagingException: Unable to find fallback package folder '/usr/local/share/dotnet/sdk/NuGetFallbackFolder'. [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at NuGet.Packaging.FallbackPackagePathResolver..ctor(String userPackageFolder, IEnumerable`1 fallbackPackageFolders) [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.NET.Build.Tasks.NuGetPackageResolver.CreateResolver(LockFile lockFile, String projectPath) [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.NET.Build.Tasks.ResolvePackageAssets.CacheWriter..ctor(ResolvePackageAssets task, Stream stream) [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.NET.Build.Tasks.ResolvePackageAssets.CacheReader.CreateReaderFromDisk(ResolvePackageAssets task, Byte[] settingsHash) [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.NET.Build.Tasks.ResolvePackageAssets.CacheReader..ctor(ResolvePackageAssets task) [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.NET.Build.Tasks.ResolvePackageAssets.ReadItemGroups() [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.NET.Build.Tasks.ResolvePackageAssets.ExecuteCore() [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.NET.Build.Tasks.TaskBase.Execute() [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [/work/work.csproj]
/usr/share/dotnet/sdk/2.2.207/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(208,5): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask) [/work/work.csproj]
ERROR: Service 'csharp' failed to build: The command '/bin/sh -c dotnet publish --no-restore --output /out/ --configuration Release' returned a non-zero code: 1
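A hedged reading of the failure: `--no-restore` makes the container reuse `obj/project.assets.json` produced by a restore on the host, which references a NuGetFallbackFolder path that only exists there. Clearing local build output and restoring inside the image avoids the mismatch:

```shell
# Remove host-generated build state before docker build.
rm -rf bin obj
# And in the Dockerfile, restore inside the container before publishing
# (sketch of the relevant lines):
#   RUN dotnet restore
#   RUN dotnet publish --output /out/ --configuration Release
```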

RabbitMQ cluster data loss issue

Hi

I have set up 2 nodes in a cluster with automatic synchronization (i.e. rabbit1 and rabbit2, the mirror node). I run the application through a Spring Boot application where I use RabbitTemplate. In both the producer code and the consumer code I added a counter to check how many messages are produced and consumed, like:

Publisher
rabbitTemplate.convertAndSend(exchange, routingKey, msg, message -> {
    message.getMessageProperties().setDeliveryMode(MessageDeliveryMode.PERSISTENT);
    return message;
});
publisherCounter++;

Consumer
@RabbitListener(queues = "${jqueue}")
public void recievedMessage(String msg) {
    consumerCounter++;
}

When I kill rabbit2, i.e. the mirror node, both counts come out the same, but when I kill rabbit1 I see the consumerCounter value is less than publisherCounter, like in my case:

Subscriber:164126
Publisher:200000

is there anything I have missed?

After killing the node with docker rm -f rabbit-2, the application works fine.

Cluster setup steps:

  • docker run -d --rm --net rabbits --hostname rabbit-1 --name rabbit-1 -p 30000:5672 -p 30001:15672 -v ${PWD}/config/rabbit-1/:/config/ -e RABBITMQ_CONFIG_FILE=/config/rabbitmq -e RABBITMQ_ERLANG_COOKIE=test rabbitmq:management
  • docker run -d --rm --net rabbits --hostname rabbit-2 --name rabbit-2 -p 30002:5672 -p 30003:15672 -v ${PWD}/config/rabbit-2/:/config/ -e RABBITMQ_CONFIG_FILE=/config/rabbitmq -e RABBITMQ_ERLANG_COOKIE=test rabbitmq:management

docker exec -it rabbit-2 rabbitmqctl stop_app
docker exec -it rabbit-2 rabbitmqctl reset
docker exec -it rabbit-2 rabbitmqctl join_cluster rabbit@rabbit-1
docker exec -it rabbit-2 rabbitmqctl start_app
docker exec -it rabbit-2 rabbitmqctl cluster_status

docker exec -it rabbit-1 rabbitmq-plugins enable rabbitmq_federation
docker exec -it rabbit-2 rabbitmq-plugins enable rabbitmq_federation

docker exec -it rabbit-1 bash
root@rabbit-1:/# rabbitmqctl set_policy ha-fed ".*" '{"federation-upstream-set":"all", "ha-sync-mode":"automatic", "ha-mode":"nodes", "ha-params":["rabbit@rabbit-1","rabbit@rabbit-2"]}' --priority 1 --apply-to queues
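One thing to verify before killing rabbit-1, as a sketch: with classic mirrored queues, messages that have not yet been synchronised to the mirror are lost when the master goes down, so check that the queue reports a synchronised mirror first (and consider publisher confirms on the producer side):

```shell
# List each queue's depth, its mirror pids, and which mirrors are in sync.
docker exec -it rabbit-1 rabbitmqctl list_queues name messages slave_pids synchronised_slave_pids
```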

S3

Hi, it would be nice to see a tutorial for S3 in Kubernetes ;-)

grafana dashboard-nodeexporter ConfigMap is too long

Hello,
When I tried

kubectl apply -f ./prometheus-monitoring/kubernetes/1.18.4/grafana/

it failed with

The ConfigMap "grafana-dashboard-nodeexporter" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

preventing the pod from starting.

As a workaround, I had to do:

kubectl create cm grafana-dashboard-nodeexporter
kubectl replace cm -f ./dashboard-nodeexporter.yaml

... btw, thanks a lot for your work ! ;)
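For context on why the workaround works: `kubectl apply` stores the entire object in the `last-applied-configuration` annotation, which is capped at 262144 bytes, while `create`/`replace` skip that annotation. On newer kubectl, server-side apply is another way around it:

```shell
# Server-side apply does not write the client-side last-applied annotation,
# so large ConfigMaps go through:
kubectl apply --server-side -f ./prometheus-monitoring/kubernetes/1.18.4/grafana/
```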

Cert-Manager : Waiting for HTTP-01 challenge propagation: wrong status code '404', expected '200'

Hello,

Thank you very much for your work and your videos.

I have an issue and I don't know why; I searched on Google but didn't find anything.

When I apply the certificate.yaml I always get this error: Waiting for HTTP-01 challenge propagation: wrong status code '404', expected '200'.

My config is K8s + MetalLB + cert-manager.

Do you have any information about this error?

Kind regards
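A few checks that often locate this, hedged since the setup details aren't shown: HTTP-01 requires the challenge URL to be reachable from outside through the ingress the solver creates.

```shell
# See which challenge is pending and whether the temporary solver ingress
# exists, then test the route end to end (domain is a placeholder).
kubectl get challenges -A
kubectl get ingress -A
curl -v http://your-domain.example.com/.well-known/acme-challenge/test
```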

aimvector/jenkins-slave-new:latest dockerfile

Hello @marcel-dempers
Can you please update the dockerfile in Jenkins with latest from aimvector/jenkins-slave-new:latest?

I'm trying to build the image locally, adding the group id and user, but it fails at:
JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior

But when I use your image from aimvector/jenkins-slave-new:latest it works, though it throws a docker permission folder issue:
Error saving credentials: mkdir /home/jenkins/.docker: permission denied
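A guess at the permission error: the `jenkins` user can't create `/home/jenkins/.docker` inside the container. Pre-creating it with the right owner in the image usually clears it; a sketch of the commands to run during the image build (as root, before switching to the jenkins user):

```shell
mkdir -p /home/jenkins/.docker
chown -R jenkins:jenkins /home/jenkins
```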

Debugging containerized python flask app with non-standard code organization with VS Code

I am trying to debug my Python 3 Flask app using VS Code. I have the Docker and Python extensions installed for VS Code's Remote WSL mode. The relevant part of the docker-compose.yml is:

  web:
    build: 
      context: .
    image: web
    container_name: web
    ports:
      - 5004:5000
    command: python manage.py run -h 0.0.0.0
    volumes:
      - .:/usr/src/app
    environment:
      - FLASK_DEBUG=1
      - APP_SETTINGS=project.server.config.DevelopmentConfig
    networks:
      - webnet

When I follow the instructions as per your video, I end up commenting out the command statement in my docker-compose file and adding an additional layer to my Dockerfile like so:

ENV FLASK_APP=manage.py

# ###########START NEW IMAGE : DEBUGGER ###################
FROM base as debug
RUN pip install ptvsd

WORKDIR /usr/src/app
CMD python -m ptvsd --host 0.0.0.0 --port 5678 --wait --multiprocess -m flask run -h 0.0.0.0 -p 5000

and a launch.json file like:

// .vscode/launch.json
{
  "configurations": [
    {
      "name": "Python Attach",
      "type": "python",
      "request": "attach",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}",
          "remoteRoot": "/usr/src/app"
        }
      ], 
      "port": 5678, 
      "host": "127.0.0.1"
    }
  ]
}

Now here comes the kicker. My app is composed like so:

manage.py (has the statement app = create_app())
project
  server
   - __init__.py (this is where create_app() is defined using Flask() with the appropriate config)
   - config.py
      main (rest of the app)

When I try to debug this, I can apply breakpoints in manage.py and they trigger fine on app start, but my breakpoints in the views, which are located in .\project\server\main\views.py, do not get triggered. I am guessing this has to do either with how I am initiating the debug sub-process or with my pathMappings in launch.json.

Any suggestions to debug this are appreciated. Sorry about the long post.
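One hedged suggestion: the Flask reloader forks a worker process, and breakpoints in modules imported by that child (such as project.server.main.views) may never bind to the ptvsd-wrapped parent. Running without the reloader keeps everything in one process:

```shell
# Same debug command as above, plus --no-reload so ptvsd wraps the process
# that actually serves requests.
python -m ptvsd --host 0.0.0.0 --port 5678 --wait -m flask run -h 0.0.0.0 -p 5000 --no-reload
```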

Small misprint PersistentVolume

In the "Persist data Kubernetes" paragraph, the apply is executed on the persistedvolume.yaml and persistedvolumeclaim.yaml files. Maybe you wanted to write persisteNT. The files, however, have the correct name.

Thanks for the great job you do.

ERROR: Could not find specified Maven installation 'maven_3_5_0'

I am trying a jenkins maven pipeline. My sample Jenkinsfile below

node('jenkins-slave') {
    stage ('Compile Stage') {
        withMaven(maven : 'maven_3_5_0') {
            sh 'mvn clean compile'
        }
    }

    stage ('Testing Stage') {
        withMaven(maven : 'maven_3_5_0') {
            sh 'mvn test'
        }
    }
    
    stage ('Deployment Stage') {
        withMaven(maven : 'maven_3_5_0') {
            sh 'mvn deploy'
        }
    }
}

And when I build and it runs on the jenkins-slave, I get the below error:

Running on jenkins-slave-3tc8l in /home/jenkins/agent/workspace/ins-multi-branch-pipeline_master
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Compile Stage)
[Pipeline] withMaven
[withMaven] Options: []
[withMaven] Available options: 
[withMaven] using JDK installation provided by the build agent
[Pipeline] // withMaven
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Could not find specified Maven installation 'maven_3_5_0'.
Finished: FAILURE

What is the recommended way to fix this?

Based on the error, there is no Maven installation on the jenkins-slave. So do we have to edit the Dockerfile of the jenkins-slave, install Maven there, and rebuild and push it again? Is this the best approach, or what other alternatives do we have?
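Both routes work; which is "best" depends on how much you want baked into the image. `withMaven(maven: 'maven_3_5_0')` looks up a tool registered under exactly that name in Manage Jenkins -> Global Tool Configuration, so either register it there, or install Maven in the agent image and call `mvn` directly (image names below are examples):

```shell
# Hypothetical Dockerfile additions for the agent image:
#   FROM aimvector/jenkins-slave
#   RUN apt-get update && apt-get install -y maven
docker build -t example-registry/jenkins-slave-maven .
docker push example-registry/jenkins-slave-maven
```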

DNS name should be replaced with IP address in redis.conf file

Hi, the redis statefulset failed to recreate the pod during redis failover, as it's looking for an IP address but getting a DNS name instead.

When I executed the Info Replication command on one of the redis replica nodes, I observed the following details:
master_host:redis-0.redis.default.svc.cluster.local.

[Screenshot: 2021-06-14 at 2 11 55 PM]

Am I missing any other configuration ?
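A sketch of one common workaround, assuming the pod startup script can edit redis.conf before the server starts: resolve the pod's own address and announce the IP, so sentinel records something that is still dialable after failover instead of the dot-terminated DNS name:

```shell
# Resolve this pod's FQDN to an IP and announce it to sentinel.
IP="$(getent hosts "$(hostname -f)" | awk '{print $1}')"
echo "replica-announce-ip $IP" >> /etc/redis/redis.conf
```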

patchesStrategicMerge is overriding the other

Hi,

I am trying to add multiple pod specs using Kustomize; however, only the last one is applied. I am using patchesStrategicMerge: for this. Below are the specs I am trying to add:

apiVersion: argoproj.io/v1alpha1
kind:  ApplicationSet 
metadata:
  name: maven-app1
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - cluster: rancher
        url: https://192.168.29.143:6443/

---
apiVersion: argoproj.io/v1alpha1
kind:  ApplicationSet 
metadata:
  name: maven-app1
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - cluster: baremetal
        url: https://192.168.29.145:6443/
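This looks like expected Kustomize behaviour rather than a bug: resources are identified by group/kind/namespace/name, and both documents above are an ApplicationSet named maven-app1, so the later patch replaces the earlier one. Giving each resource a distinct name yields two objects; a sketch:

```shell
# Write the second ApplicationSet under its own name so it no longer
# collides with maven-app1.
cat > appset-baremetal.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: maven-app2
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - cluster: baremetal
        url: https://192.168.29.145:6443/
EOF
```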

docker build is failing on slave

Hello,

I followed your tutorial, but when I tried to build a new job in the pipeline, it resulted in an error:

Started by user jenkins
Obtained jenkins/JenkinsFile from git https://github.com/marcel-dempers/docker-development-youtube-series.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Still waiting to schedule task
‘jenkins-slave-mbm4d’ is offline
Agent jenkins-slave-mbm4d is provisioned from template Kubernetes Pod Template
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations: {}
  labels:
    jenkins: "slave"
    jenkins/label: "jenkins-slave"
  name: "jenkins-slave-mbm4d"
spec:
  containers:
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "jenkins-slave-mbm4d"
    - name: "JENKINS_NAME"
      value: "jenkins-slave-mbm4d"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins/"
    image: "aimvector/jenkins-slave"
    imagePullPolicy: "IfNotPresent"
    name: "jnlp"
    resources:
      limits: {}
      requests: {}
    securityContext:
      privileged: false
    tty: true
    volumeMounts:
    - mountPath: "/var/run/docker.sock"
      name: "volume-0"
      readOnly: false
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
    workingDir: "/home/jenkins/agent"
  hostNetwork: false
  nodeSelector:
    beta.kubernetes.io/os: "linux"
  restartPolicy: "Never"
  securityContext: {}
  volumes:
  - hostPath:
      path: "/var/run/docker.sock"
    name: "volume-0"
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on jenkins-slave-mbm4d in /home/jenkins/agent/workspace/test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test pipeline)
[Pipeline] sh
+ echo hello
hello
+ git clone https://github.com/marcel-dempers/docker-development-youtube-series.git
Cloning into 'docker-development-youtube-series'...
+ cd ./docker-development-youtube-series/golang
+ docker build . -t test
time="2019-12-24T02:08:35Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied"
context canceled
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

It is failing at:

time="2019-12-24T02:08:35Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied"
context canceled
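The `dial unix /var/run/docker.sock: connect: permission denied` means the user inside the jenkins-slave container cannot access the mounted host socket. One common approach is to give the container user a group whose GID matches the socket's GID on the node. This is a sketch, not the only fix: the GID shown is an example, and the exact `groupadd`/`usermod` vs `addgroup`/`adduser` syntax depends on the agent's base image.

```shell
# 1. Find the GID that owns the docker socket on the node (example output: 999):
stat -c '%g' /var/run/docker.sock

# 2. In the jenkins-slave image (Dockerfile RUN step), create a group with that
#    GID and add the agent user to it. Debian-style syntax shown; on Alpine use
#    "addgroup -g 999 docker && adduser jenkins docker" instead.
groupadd -g 999 docker
usermod -aG docker jenkins

# Quick-and-dirty alternative for local testing ONLY (world-writable socket):
# chmod 666 /var/run/docker.sock
```

After rebuilding the agent image, the pipeline's `docker build` step should be able to talk to the daemon through the mounted socket.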

Vault TLS Errors on K8s

@marcel-dempers

I am setting up vault on K8s as per https://github.com/marcel-dempers/docker-development-youtube-series/tree/master/hashicorp/vault-2022

Following are the configs

consul-values-new.yaml
client:
  enabled: true
connectInject:
  enabled: false
  transparentProxy:
    defaultEnabled: false
controller:
  enabled: true
global:
  datacenter: vault-kubernetes
  acls:
    gossipEncryption:
      autoGenerate: true
    manageSystemACLs: true
  enabled: true
  metrics:
    enableAgentMetrics: true
    enabled: true
  name: consul
prometheus:
  enabled: true
server:
  replicas: 3
  bootstrapExpect: 3
  disruptionBudget:
    maxUnavailable: 0
ui:
  enabled: true
  service:
    type: LoadBalancer
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.14.2"

  resources:
      requests:
        memory: 50Mi
        cpu: 50m
      limits:
        memory: 256Mi
        cpu: 250m

server:
  image:
    repository: "hashicorp/vault"
    tag: "1.9.3"

  # These Resource Limits are in line with node requirements in the
  # Vault Reference Architecture for a Small Cluster
  resources:
    requests:
      memory: 50Mi
      cpu: 500m
    limits:
      memory: 16Gi
      cpu: 2000m
  # For HA configuration and because we need to manually init the vault,
  # we need to define custom readiness/liveness Probe settings
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60
  # extraEnvironmentVars is a list of extra environment variables to set with the stateful set. These could be
  # used to include variables required for auto-unseal.
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/tls.crt

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    - type: secret
      name: tls-server
    - type: secret
      name: tls-ca
  standalone:
    enabled: false
  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 0
        address     = "0.0.0.0:8200"
        tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
        tls_key_file = "/vault/userconfig/tls-server/tls.key"
        tls_min_version = "tls12"
      }
      storage "consul" {
        path = "vault"
        address = "consul-server:8500"
        token = "6c0264bc-b750-77dc-2717-6a1e11990ecc"
      }
# Vault UI
ui:
  enabled: true
  externalPort: 8200
  externalTrafficPolicy: Cluster
  serviceType: LoadBalancer
csi:
  enabled: true
  image:
    repository: "hashicorp/vault-csi-provider"
    tag: "1.0.0"
    pullPolicy: IfNotPresent
kgp -n vault
NAME                                           READY   STATUS             RESTARTS        AGE
consul-client-4sq98                            1/1     Running            0               4h13m
consul-client-7685b                            1/1     Running            0               4h13m
consul-client-m58vh                            1/1     Running            0               4h12m
consul-controller-5d57795d8b-c8cvq             1/1     Running            0               7d21h
consul-server-0                                1/1     Running            0               7d21h
consul-server-1                                1/1     Running            0               7d21h
consul-server-2                                1/1     Running            0               7d21h
consul-server-acl-init--1-bvh56                0/1     Completed          0               37m
consul-test                                    0/1     Completed          0               37m
consul-webhook-cert-manager-54899467bf-l6qzl   1/1     Running            0               7d21h
prometheus-server-5cbddcc44b-9m6tx             2/2     Running            0               7d21h
vault-0                                        1/1     Running            5 (5m19s ago)   11m
vault-1                                        0/1     CrashLoopBackOff   7 (3s ago)      11m
vault-2                                        0/1     CrashLoopBackOff   7 (8s ago)      11m
vault-agent-injector-74655f76d8-z5jkr          1/1     Running            0               11m
vault-csi-provider-5vrdc                       1/1     Running            0               11m
vault-csi-provider-g8d2m                       1/1     Running            0               11m
vault-csi-provider-nnhk4                       1/1     Running            0               11m
vault-server-test                              0/1     Error              0               11m

As you can see, the vault-1 and vault-2 pods are going into a CrashLoopBackOff status.

k logs -n vault vault-1 -f
==> Vault server configuration:

             Api Address: https://100.96.2.14:8200
                     Cgo: disabled
         Cluster Address: https://vault-1.vault-internal:8201
              Go Version: go1.17.5
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: consul (HA available)
                 Version: Vault v1.9.3
             Version Sha: 7dbdd57243a0d8d9d9e07cd01eb657369f8e1b8a

==> Vault server started! Log data will stream in below:

2022-03-17T13:48:26.224Z [INFO]  proxy environment: http_proxy="\"\"" https_proxy="\"\"" no_proxy="\"\""
2022-03-17T13:48:26.225Z [WARN]  storage.consul: appending trailing forward slash to path
2022-03-17T13:48:26.236Z [INFO]  core: Initializing VersionTimestamps for core
2022-03-17T13:48:43.288Z [INFO]  http: TLS handshake error from 100.96.0.1:52832: EOF
2022-03-17T13:49:13.979Z [INFO]  http: TLS handshake error from 10.250.83.86:36425: EOF
2022-03-17T13:49:23.287Z [INFO]  http: TLS handshake error from 100.96.0.1:60527: EOF
2022-03-17T13:49:23.979Z [INFO]  http: TLS handshake error from 10.250.83.86:32121: EOF
2022-03-17T13:49:29.177Z [INFO]  http: TLS handshake error from 100.96.1.1:30783: EOF
2022-03-17T13:49:33.287Z [INFO]  http: TLS handshake error from 100.96.0.1:44207: EOF
2022-03-17T13:49:39.177Z [INFO]  http: TLS handshake error from 100.96.1.1:56081: EOF
2022-03-17T13:49:41.005Z [INFO]  service_registration.consul: shutting down consul backend
==> Vault shutdown triggered
k logs -n vault vault-2 -f
==> Vault server configuration:

             Api Address: https://100.96.0.34:8200
                     Cgo: disabled
         Cluster Address: https://vault-2.vault-internal:8201
              Go Version: go1.17.5
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: consul (HA available)
                 Version: Vault v1.9.3
             Version Sha: 7dbdd57243a0d8d9d9e07cd01eb657369f8e1b8a

==> Vault server started! Log data will stream in below:

2022-03-17T13:48:21.291Z [INFO]  proxy environment: http_proxy="\"\"" https_proxy="\"\"" no_proxy="\"\""
2022-03-17T13:48:21.291Z [WARN]  storage.consul: appending trailing forward slash to path
2022-03-17T13:48:21.294Z [INFO]  core: Initializing VersionTimestamps for core
2022-03-17T13:48:33.286Z [INFO]  http: TLS handshake error from 10.250.21.77:2713: EOF
2022-03-17T13:48:33.980Z [INFO]  http: TLS handshake error from 100.96.2.1:48380: EOF
2022-03-17T13:48:39.177Z [INFO]  http: TLS handshake error from 100.96.1.1:1905: EOF
2022-03-17T13:48:53.286Z [INFO]  http: TLS handshake error from 10.250.21.77:18049: EOF
2022-03-17T13:48:53.979Z [INFO]  http: TLS handshake error from 100.96.2.1:65176: EOF
2022-03-17T13:49:03.287Z [INFO]  http: TLS handshake error from 10.250.21.77:22405: EOF
2022-03-17T13:49:13.287Z [INFO]  http: TLS handshake error from 10.250.21.77:60182: EOF
2022-03-17T13:49:19.181Z [INFO]  http: TLS handshake error from 100.96.1.1:27293: EOF
2022-03-17T13:49:33.980Z [INFO]  http: TLS handshake error from 100.96.2.1:41163: EOF
==> Vault shutdown triggered
2022-03-17T13:49:36.023Z [INFO]  service_registration.consul: shutting down consul backend

Is this normal, or what am I doing incorrectly?
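The repeated TLS handshake EOF lines from the node IPs are most likely kubelet probes opening and closing connections. The crash loop itself is typically Vault being sealed/uninitialized: the liveness probe path configured above (`/v1/sys/health?standbyok=true`) returns an error while Vault is sealed, so the pod gets killed. A sketch of initializing and unsealing, assuming the pod and namespace names shown above:

```shell
# Initialize once on vault-0; this prints the unseal keys and root token.
kubectl -n vault exec -it vault-0 -- vault operator init

# Unseal every replica with the required number of unseal keys (repeat per key):
kubectl -n vault exec -it vault-0 -- vault operator unseal <unseal-key>
kubectl -n vault exec -it vault-1 -- vault operator unseal <unseal-key>
kubectl -n vault exec -it vault-2 -- vault operator unseal <unseal-key>

# Verify: expect "Sealed: false", with one active and two standby nodes.
kubectl -n vault exec -it vault-0 -- vault status
```

Once unsealed, the health endpoint starts returning success and the restarts should stop.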

Errors while running in a k8s cluster in GKE

➜ POC-admission-controller git:(master) ✗ kubectl logs -f example-webhook-78c8bc67b7-p95gd -n k8s-controller
panic: pods is forbidden: User "system:serviceaccount:k8s-controller:example-webhook" cannot list resource "pods" in API group "" at the cluster scope

goroutine 1 [running]:
main.test()
/app/test.go:14 +0x1a8
main.main()
/app/main.go:93 +0x392
➜ POC-admission-controller git:(master) ✗
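The panic is an RBAC error: the webhook's service account is not allowed to list pods at cluster scope. A minimal sketch of granting that access, using the namespace and service account names from the error message (the role/binding names themselves are hypothetical):

```shell
# ClusterRole allowing read access to pods cluster-wide.
kubectl create clusterrole example-webhook-pod-reader \
  --verb=get,list,watch --resource=pods

# Bind it to the webhook's service account (namespace:serviceaccount).
kubectl create clusterrolebinding example-webhook-pod-reader \
  --clusterrole=example-webhook-pod-reader \
  --serviceaccount=k8s-controller:example-webhook
```

After this, restarting the deployment should get past the `pods is forbidden` panic.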

@marcel-dempers

jenkins test pipeline: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:

Hi Marcel,

Thank you for your tutorials...I love them.

I have Windows 10 Home, and I installed Docker and Kubernetes successfully (including WSL 2 needed by Docker and Kubernetes).

I installed Jenkins as per your tutorial. I got to the step of testing the 'test' pipeline.
When I run the build, I get:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock

Obtained ./jenkins/JenkinsFile from git https://github.com/marcel-dempers/docker-development-youtube-series.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Still waiting to schedule task
‘jenkins-slave-r6q79’ is offline
Agent jenkins-slave-r6q79 is provisioned from template jenkins-slave
---
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    jenkins: "slave"
    jenkins/label-digest: "5059d2cd0054f9fe75d61f97723d98ab1a42d71a"
    jenkins/label: "jenkins-slave"
  name: "jenkins-slave-r6q79"
spec:
  containers:
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "jenkins-slave-r6q79"
    - name: "JENKINS_NAME"
      value: "jenkins-slave-r6q79"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins/"
    image: "aimvector/jenkins-slave"
    imagePullPolicy: "Always"
    name: "jnlp"
    resources:
      limits: {}
      requests: {}
    tty: true
    volumeMounts:
    - mountPath: "/var/run/docker.sock"
      name: "volume-0"
      readOnly: false
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
    workingDir: "/home/jenkins/agent"
  hostNetwork: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  serviceAccount: "jenkins"
  volumes:
  - hostPath:
      path: "/var/run/docker.sock"
    name: "volume-0"
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on jenkins-slave-r6q79 in /home/jenkins/agent/workspace/test-devops-guy
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test pipeline)
[Pipeline] sh
+ echo hello
hello
+ git clone https://github.com/marcel-dempers/docker-development-youtube-series.git
Cloning into 'docker-development-youtube-series'...
+ cd ./docker-development-youtube-series/golang
+ docker build . -t test
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&session=wae0mt7nlffr6i3ohng1eon3p&shmsize=0&t=test&target=&ulimits=null&version=1: dial unix /var/run/docker.sock: connect: permission denied
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

Have you run into this? If so, how do we get past this?
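Before changing anything it can help to confirm what the agent actually sees. These commands, run inside the agent pod (e.g. via `kubectl exec`), compare the jenkins user's identity and groups against the mounted socket's owner, group, and mode:

```shell
# Who is the agent process running as, and which groups does it belong to?
id

# Who owns the mounted docker socket, and what are its permissions?
ls -l /var/run/docker.sock
```

If the user's groups don't include a group matching the socket's GID, the fix is usually to rebuild the agent image so the jenkins user is in a group with that GID (or, for local testing only, to `chmod 666` the socket on the node).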

Thank you

:)

There is an error when applying prometheus-operator

My Kubernetes version is:
Client Version: v1.17.9-eks-4c6976
Server Version: v1.17.12-eks-7684af

So I have applied the command:

kubectl -n monitoring apply -f ./monitoring/prometheus/kubernetes/1.15-1.17/prometheus-operator/

Then an error occurred, but only for the CustomResourceDefinition kind; the Deployments, Services, and other resources were applied fine. Will Prometheus still work properly in Kubernetes?
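If the CRD failure is the common "metadata.annotations: Too long" error (the prometheus-operator CRDs exceed the size limit of the last-applied-configuration annotation that client-side `kubectl apply` writes), the usual workaround is to avoid client-side apply for the CRDs. A sketch, using the same path as above; which option works depends on your kubectl/cluster version:

```shell
# Option 1: plain create, which skips the last-applied annotation entirely:
kubectl -n monitoring create -f ./monitoring/prometheus/kubernetes/1.15-1.17/prometheus-operator/

# Option 2: server-side apply, on kubectl/cluster versions that support it:
kubectl -n monitoring apply --server-side -f ./monitoring/prometheus/kubernetes/1.15-1.17/prometheus-operator/
```

Either way, check `kubectl get crd | grep monitoring.coreos.com` afterwards: if the CRDs are present, the operator should function normally.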

Fatal config file error for sentinel

Hey @marcel-dempers, I copied your script exactly as it is for the sentinel and I'm getting the following output:

Wed, Dec 1 2021 8:47:24 am |  
Wed, Dec 1 2021 8:47:24 am | *** FATAL CONFIG FILE ERROR (Redis 6.2.3) ***
Wed, Dec 1 2021 8:47:24 am | Reading the configuration file, at line 4
Wed, Dec 1 2021 8:47:24 am | >>> 'sentinel monitor mymaster 6379 2'
Wed, Dec 1 2021 8:47:24 am | Unrecognized sentinel configuration statement.

Any ideas? I'm parsing through the docs right now but not seeing anything obviously wrong.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sentinel
spec:
  serviceName: sentinel
  replicas: 3
  selector:
    matchLabels:
      app: sentinel
  template:
    metadata:
      labels:
        app: sentinel
    spec:
      initContainers:
      - name: config
        image: redis:6.2.3-alpine
        command: [ "sh", "-c" ]
        args:
          - |
            REDIS_PASSWORD=a-very-complex-password-here
            nodes=redis-0.redis.redis.svc.cluster.local,redis-1.redis.redis.svc.cluster.local,redis-2.redis.redis.svc.cluster.local

            for i in ${nodes//,/ }
            do
                echo "finding master at $i"
                MASTER=$(redis-cli --no-auth-warning --raw -h $i -a $REDIS_PASSWORD info replication | awk '{print $1}' | grep master_host: | cut -d ":" -f2)
                if [ "$MASTER" == "" ]; then
                    echo "no master found"
                    MASTER=
                else
                    echo "found $MASTER"
                    break
                fi
            done
            echo "sentinel monitor mymaster $MASTER 6379 2" >> /tmp/master
            echo "port 5000
            sentinel resolve-hostnames yes
            sentinel announce-hostnames yes
            $(cat /tmp/master)
            sentinel down-after-milliseconds mymaster 5000
            sentinel failover-timeout mymaster 60000
            sentinel parallel-syncs mymaster 1
            sentinel auth-pass mymaster $REDIS_PASSWORD
            " > /etc/redis/sentinel.conf
            cat /etc/redis/sentinel.conf
        volumeMounts:
        - name: redis-config
          mountPath: /etc/redis/
      containers:
      - name: sentinel
        image: redis:6.2.3-alpine
        command: ["redis-sentinel"]
        args: ["/etc/redis/sentinel.conf"]
        ports:
        - containerPort: 5000
          name: sentinel
        volumeMounts:
        - name: redis-config
          mountPath: /etc/redis/
        - name: data
          mountPath: /data
      volumes:
      - name: redis-config
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "longhorn"
      resources:
        requests:
          storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
  name: sentinel
spec:
  clusterIP: None
  ports:
  - port: 5000
    targetPort: 5000
    name: sentinel
  selector:
    app: sentinel
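The truncated failing line (`sentinel monitor mymaster 6379 2`, i.e. no master host between `mymaster` and `6379`) suggests `$MASTER` resolved empty: none of the redis nodes reported a `master_host`, which happens when replication isn't up yet or auth is failing. The text-extraction pipeline itself can be sanity-checked locally with sample `info replication` output:

```shell
# Simulate the master-detection pipeline from the initContainer with sample
# output; if this works but the real loop prints "no master found" for every
# node, the redis pods aren't replicating (or the password is wrong), which is
# what produces the truncated "sentinel monitor" line.
sample='role:slave
master_host:redis-0.redis.redis.svc.cluster.local
master_port:6379'
MASTER=$(printf '%s\n' "$sample" | awk '{print $1}' | grep master_host: | cut -d ":" -f2)
echo "$MASTER"   # redis-0.redis.redis.svc.cluster.local
```

Checking `kubectl logs` of the `config` initContainer (it `cat`s the generated sentinel.conf) should confirm whether the master host made it into the file.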

Redis Labs

Hi there,
I watched your videos about Redis. I'd love to touch base with you. Please contact me at raja.rao at redislabs.com.

kafka

Cannot build/create containers with Docker Compose? I get the following error:

D:\Development\docker-development-youtube-series\messaging\kafka>docker-compose build
Building zookeeper-1
[+] Building 1.2s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 553B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:11.0.10-jre-buster 1.1s
=> [1/5] FROM docker.io/library/openjdk:11.0.10-jre-buster@sha256:60fc7f8d1deb9672df29785cab71a7ecc37949de870018 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 39B 0.0s
=> CACHED [2/5] RUN mkdir /tmp/kafka && apt-get update && apt-get install -y curl 0.0s
=> CACHED [3/5] RUN curl "https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0.tgz" -o /tmp/kafka/ka 0.0s
=> CACHED [4/5] COPY start-zookeeper.sh /usr/bin 0.0s
=> CACHED [5/5] RUN chmod +x /usr/bin/start-zookeeper.sh 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:dcd1d321d48b86c3c963ab9442777302fed3945bc02efe66bcf331cdad087136 0.0s
=> => naming to docker.io/aimvector/zookeeper:2.7.0 0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building kafka-1
[+] Building 0.7s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 549B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:11.0.10-jre-buster 0.6s
=> [internal] load build context 0.0s
=> => transferring context: 35B 0.0s
=> [1/5] FROM docker.io/library/openjdk:11.0.10-jre-buster@sha256:60fc7f8d1deb9672df29785cab71a7ecc37949de870018 0.0s
=> CACHED [2/5] RUN apt-get update && apt-get install -y curl 0.0s
=> CACHED [3/5] RUN mkdir /tmp/kafka && curl "https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0. 0.0s
=> CACHED [4/5] COPY start-kafka.sh /usr/bin 0.0s
=> CACHED [5/5] RUN chmod +x /usr/bin/start-kafka.sh 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:96bb4d4e0152e76d70f79b8f1d804483f587bce96142525992ac8ea2aab6512d 0.0s
=> => naming to docker.io/aimvector/kafka:2.7.0 0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building kafka-2
[+] Building 0.7s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:11.0.10-jre-buster 0.6s
=> [internal] load build context 0.0s
=> => transferring context: 35B 0.0s
=> [1/5] FROM docker.io/library/openjdk:11.0.10-jre-buster@sha256:60fc7f8d1deb9672df29785cab71a7ecc37949de870018 0.0s
=> CACHED [2/5] RUN apt-get update && apt-get install -y curl 0.0s
=> CACHED [3/5] RUN mkdir /tmp/kafka && curl "https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0. 0.0s
=> CACHED [4/5] COPY start-kafka.sh /usr/bin 0.0s
=> CACHED [5/5] RUN chmod +x /usr/bin/start-kafka.sh 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:96bb4d4e0152e76d70f79b8f1d804483f587bce96142525992ac8ea2aab6512d 0.0s
=> => naming to docker.io/aimvector/kafka:2.7.0 0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building kafka-3
[+] Building 0.7s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:11.0.10-jre-buster 0.6s
=> [internal] load build context 0.0s
=> => transferring context: 35B 0.0s
=> [1/5] FROM docker.io/library/openjdk:11.0.10-jre-buster@sha256:60fc7f8d1deb9672df29785cab71a7ecc37949de870018 0.0s
=> CACHED [2/5] RUN apt-get update && apt-get install -y curl 0.0s
=> CACHED [3/5] RUN mkdir /tmp/kafka && curl "https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0. 0.0s
=> CACHED [4/5] COPY start-kafka.sh /usr/bin 0.0s
=> CACHED [5/5] RUN chmod +x /usr/bin/start-kafka.sh 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:96bb4d4e0152e76d70f79b8f1d804483f587bce96142525992ac8ea2aab6512d 0.0s
=> => naming to docker.io/aimvector/kafka:2.7.0 0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building kafka-producer
[+] Building 0.8s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:11.0.10-jre-buster 0.7s
=> [1/5] FROM docker.io/library/openjdk:11.0.10-jre-buster@sha256:60fc7f8d1deb9672df29785cab71a7ecc37949de870018 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 35B 0.0s
=> CACHED [2/5] RUN apt-get update && apt-get install -y curl 0.0s
=> CACHED [3/5] RUN mkdir /tmp/kafka && curl "https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0. 0.0s
=> CACHED [4/5] COPY start-kafka.sh /usr/bin 0.0s
=> CACHED [5/5] RUN chmod +x /usr/bin/start-kafka.sh 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:96bb4d4e0152e76d70f79b8f1d804483f587bce96142525992ac8ea2aab6512d 0.0s
=> => naming to docker.io/aimvector/kafka:2.7.0 0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building kafka-consumer
[+] Building 0.9s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:11.0.10-jre-buster 0.7s
=> [1/5] FROM docker.io/library/openjdk:11.0.10-jre-buster@sha256:60fc7f8d1deb9672df29785cab71a7ecc37949de870018 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 35B 0.0s
=> CACHED [2/5] RUN apt-get update && apt-get install -y curl 0.0s
=> CACHED [3/5] RUN mkdir /tmp/kafka && curl "https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0. 0.0s
=> CACHED [4/5] COPY start-kafka.sh /usr/bin 0.0s
=> CACHED [5/5] RUN chmod +x /usr/bin/start-kafka.sh 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:96bb4d4e0152e76d70f79b8f1d804483f587bce96142525992ac8ea2aab6512d 0.0s
=> => naming to docker.io/aimvector/kafka:2.7.0 0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building kafka-consumer-go
[+] Building 1.7s (16/16) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.10 1.2s
=> [internal] load metadata for docker.io/library/golang:1.16-alpine 1.5s
=> [dev-env 1/3] FROM docker.io/library/golang:1.16-alpine@sha256:5616dca835fa90ef13a843824ba58394dad356b7d56198 0.0s
=> [runtime 1/3] FROM docker.io/library/alpine:3.10@sha256:451eee8bedcb2f029756dc3e9d73bab0e7943c1ac55cff3a4861c 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 111B 0.0s
=> CACHED [dev-env 2/3] RUN apk add --no-cache git gcc musl-dev 0.0s
=> CACHED [dev-env 3/3] WORKDIR /app 0.0s
=> CACHED [build-env 1/4] COPY go.mod /go.sum /app/ 0.0s
=> CACHED [build-env 2/4] RUN go mod download 0.0s
=> CACHED [build-env 3/4] COPY . /app/ 0.0s
=> CACHED [build-env 4/4] RUN CGO_ENABLED=0 go build -o /consumer 0.0s
=> CACHED [runtime 2/3] COPY --from=build-env /consumer /usr/local/bin/consumer 0.0s
=> CACHED [runtime 3/3] RUN chmod +x /usr/local/bin/consumer 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:4ef3bdab3eac1c56834686bca2c5922d9a113fe60763a22921e8b4def05a400b 0.0s
=> => naming to docker.io/aimvector/kafka-consumer-go:1.0.0 0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

D:\Development\docker-development-youtube-series\messaging\kafka>docker-compose up
Creating network "kafka" with the default driver
Creating kafka-consumer-go ... done
Creating kafka-consumer ... done
Creating kafka-2 ... done
Creating kafka-1 ... done
Creating kafka-3 ... done
Creating zookeeper-1 ... done
Creating kafka-producer ... done
Attaching to kafka-consumer-go, kafka-producer, kafka-2, kafka-3, kafka-consumer, zookeeper-1, kafka-1
: invalid option | /bin/bash: -
: invalid option | /bin/bash: -
: invalid option | /bin/bash: -
: invalid option | /bin/bash: -
kafka-2 exited with code 1
kafka-3 exited with code 1
zookeeper-1 exited with code 1
kafka-1 exited with code 1
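The `: invalid option | /bin/bash: -` pattern on Windows usually means the `start-*.sh` scripts were checked out with CRLF line endings, so the shebang line becomes `#!/bin/bash\r` and bash treats the carriage return as an option. A sketch of the fix (run from the kafka directory; script names taken from the build output above):

```shell
# Strip Windows carriage returns from the entrypoint scripts, then rebuild:
sed -i 's/\r$//' start-zookeeper.sh start-kafka.sh
docker-compose build --no-cache
docker-compose up
```

To prevent it recurring, configure git to keep LF endings before cloning (e.g. `git config --global core.autocrlf input`).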

"/usr/share/nginx/html/{{ v.imageurl }}" failed (2: No such file or directory) error from videos-web app

For kubernetes/servicemesh/applications/videos-web/
I got a `"/usr/share/nginx/html/{{ v.imageurl }}" failed (2: No such file or directory)` error when I tried to render the /home/ page.
The issue occurs in both the "docker-compose up" and the Kubernetes installation.

videos-web | 2021/03/04 16:31:43 [error] 32#32: *2 open() "/usr/share/nginx/html/{{ v.imageurl }}" failed (2: No such file or directory), client: 172.18.0.1, server: , request: "GET /%7B%7B%20v.imageurl%20%7D%7D HTTP/1.1", host: "localhost:8080", referrer: "http://localhost:8080/"

MountVolume.SetUp failed for volume "jenkins" - role is not authorized to perform: elasticfilesystem:DescribeMountTargets

Hello

I'm facing an issue during the Jenkins deployment in the pod.
Apparently, the role is not authorized to perform elasticfilesystem:DescribeMountTargets. Has anyone faced the same issue?

`Events:
Type Reason Age From Message


Normal Scheduled 7m18s default-scheduler Successfully assigned jenkins/jenkins-755df69664-279s6 to ip-192-168-83-52.ca-central-1.compute.internal
Warning FailedMount 59s (x11 over 7m17s) kubelet MountVolume.SetUp failed for volume "jenkins" : rpc error: code = Internal desc = Could not mount "fs-022497d2da99cd928:/" at "/var/lib/kubelet/pods/aa09bb27-3be5-41dd-86a8-4ac5484b0e01/volumes/kubernetes.io~csi/jenkins/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o tls fs-022497d2da99cd928:/ /var/lib/kubelet/pods/aa09bb27-3be5-41dd-86a8-4ac5484b0e01/volumes/kubernetes.io~csi/jenkins/mount
Output: Failed to resolve "fs-022497d2da99cd9277.efs.ca-central-1.amazonaws.com". The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::054550991362:assumed-role/eksctl-reelcruit-eks-cluster-node-NodeInstanceRole-1AEPZZE5D06NE/i-0f83f000ca47c460a is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning FailedMount 43s (x3 over 5m15s) kubelet Unable to attach or mount volumes: unmounted volumes=[jenkins], unattached volumes=[jenkins kube-api-access-hhp87]: timed out waiting for the condition`
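The event actually reports two problems: the node instance role is missing the `elasticfilesystem:DescribeMountTargets` permission, and the EFS DNS name fails to resolve. For the IAM part, one sketch is to attach an inline policy to the node role. The policy name is hypothetical; the role name is taken from the error message, and the actions listed are the ones the EFS CSI driver commonly needs:

```shell
# Write a minimal EFS access policy for the CSI driver.
cat > efs-csi-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "elasticfilesystem:DescribeMountTargets",
      "elasticfilesystem:DescribeFileSystems",
      "elasticfilesystem:ClientMount",
      "elasticfilesystem:ClientWrite"
    ],
    "Resource": "*"
  }]
}
EOF

# Attach it as an inline policy on the node instance role from the error.
aws iam put-role-policy \
  --role-name eksctl-reelcruit-eks-cluster-node-NodeInstanceRole-1AEPZZE5D06NE \
  --policy-name efs-csi-access \
  --policy-document file://efs-csi-policy.json
```

For the DNS part, verify the EFS file system has mount targets in the subnets the worker nodes use and that the VPC has DNS resolution enabled; otherwise the `fs-….efs.ca-central-1.amazonaws.com` name cannot resolve even with correct IAM permissions.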

Deployment failing in Kubernetes Deployments for Beginners

Hey Marcel,

Your videos are awesome.

I was trying to follow the steps from your deployments video in my local environment but ran into a few issues. It looks like the deployments.yaml in your repo has changed a bit from the one shown in the deployments video.

The current deployments.yaml in your master branch has a couple of issues:

  • the configmap & secrets volumes don't exist
  • the aimvector/golang:1.0.0 image fails due to a missing /configs/config.json file
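For anyone hitting the same two issues, a sketch of creating the missing objects before applying the deployment; the resource names and file paths here are hypothetical and need to match whatever deployments.yaml actually mounts:

```shell
# Create the configmap the container expects at /configs/config.json:
kubectl create configmap example-config --from-file=config.json=./golang/configs/config.json

# Create the secret volume referenced by the deployment:
kubectl create secret generic example-secret --from-file=./golang/secrets/

# Then apply the deployment:
kubectl apply -f deployment.yaml
```

`kubectl describe pod <pod>` will show clearly whether a volume mount still references a configmap or secret that does not exist.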

Cheers,

Tim

HTTP request method is GET when I want to build a Kubernetes admission webhook

My Kubernetes version is v1.21.2, set up with minikube (screenshot omitted).

The validating and mutating webhooks have been enabled (screenshots omitted).

Source code is:

package main

import (
    "flag"
    "io/ioutil"
    "log"
    "net/http"
)

type Options struct {
    TLSCertFile string
    TLSKeyFile  string
}

var options Options

func main() {
    flag.StringVar(&options.TLSCertFile, "tlsCertFile", "/etc/webhook/certs/cert.pem", "File containing the x509 Certificate for HTTPS.")
    flag.StringVar(&options.TLSKeyFile, "tlsKeyFile", "/etc/webhook/certs/key.pem", "File containing the x509 private key to --tlsCertFile.")
    flag.Parse()
    log.Println("start http server")
    http.HandleFunc("/validate/", validate)
    log.Fatalln(http.ListenAndServeTLS(":8080", options.TLSCertFile, options.TLSKeyFile, nil))
}

func validate(w http.ResponseWriter, r *http.Request) {
    var body []byte
    log.Printf("the request method is: %s", r.Method)
    if data, err := ioutil.ReadAll(r.Body); err != nil {
        log.Printf("Error: reading the body of the http request failed: %s", err)
        http.Error(w, "Reading the body of the http request failed", http.StatusBadRequest)
        return
    } else {
        body = data
    }
    if err := ioutil.WriteFile("/tmp/admission_request.json", body, 0777); err != nil {
        panic("Write http request body to file failed")
    }
}
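One thing worth checking: Go's ServeMux treats the registered `/validate/` pattern as a subtree, so a request to `/validate` (no trailing slash, as a webhook configuration path might specify) is answered with a 301 redirect instead of invoking the handler, and some clients re-issue the redirected request as a GET. The handler can also be exercised directly to confirm it sees POSTs; the service name and port below are hypothetical:

```shell
# Forward the webhook service locally (adjust name/namespace/port to yours):
kubectl port-forward svc/example-webhook 8443:443 &

# POST a minimal AdmissionReview; -k skips CA verification for a quick test.
curl -ks -X POST https://localhost:8443/validate/ \
  -H 'Content-Type: application/json' \
  -d '{"apiVersion":"admission.k8s.io/v1","kind":"AdmissionReview"}'
```

If a direct POST to `/validate/` logs the right method while the API server's calls arrive as GET, the trailing-slash mismatch between the mux pattern and the webhook configuration path is the likely culprit.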

The pod's output (screenshot omitted) shows the request method logged as GET.
