ansible-collections / kubernetes.core

The collection includes a variety of Ansible content to help automate the management of applications in Kubernetes and OpenShift clusters, as well as the provisioning and maintenance of clusters themselves.



kubernetes.core's Issues

Allow Setting Helm Timeout Independent of Wait

For the Helm module, the wait_timeout parameter provides the --timeout argument to the helm command:

https://github.com/ansible-collections/community.kubernetes/blob/f99614370c4fd3457dbcf9581217df7997ba86fd/plugins/modules/helm.py#L343-L346

However, the --timeout argument also affects Helm independently of --wait: it can be used to specify a timeout for all Kubernetes commands. https://helm.sh/docs/intro/using_helm/#helpful-options-for-installupgraderollback

I am running into issues where pre-chart hooks can sometimes take longer than the default five minutes to execute, but I can't change the timeout because wait_timeout is ignored unless wait is true. I can't add wait: yes, because it is a legacy app and it takes a considerable amount of time for all the pods to enter a ready state, which Helm would wait for.
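
For illustration, a minimal sketch of the current coupling, with placeholder release and chart names; wait_timeout is only forwarded to helm as --timeout when wait is enabled:

- name: Install a chart with a longer hook timeout (currently requires wait)
  community.kubernetes.helm:
    name: legacy-app              # placeholder release name
    chart_ref: stable/legacy-app  # placeholder chart reference
    release_namespace: default
    wait: true                    # must be enabled today for wait_timeout to take effect
    wait_timeout: 15m             # passed to helm as --timeout, which also bounds hook execution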

Json merge_type is not applying

SUMMARY

When using a JSON merge, the playbook runs OK but does not change anything. This is the same as ansible/ansible#65897, which was automatically closed by the repo migration.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s

ANSIBLE VERSION
ansible 2.9.18
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/rgordill/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.9.2 (default, Feb 20 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
CONFIGURATION

OS / ENVIRONMENT

Fedora release 33 (Thirty Three)
Linux musashi 5.11.11-200.fc33.x86_64 #1 SMP Tue Mar 30 16:53:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

STEPS TO REPRODUCE
  1. Start a simple minikube with:
    minikube start --addons ingress
  2. See the configuration of the ingress deployment with
    kubectl get deployments.apps -n kube-system ingress-nginx-controller -o yaml
  3. Execute the following ansible tasks
- name: Patch Nginx to support ssl-passthough
  community.kubernetes.k8s:
    state: present
    kind: deployment
    api_version: apps/v1
    name: ingress-nginx-controller
    namespace: kube-system
    merge_type: 
    - json
    definition:
    - op: add
      path: '/spec/template/spec/containers/0/args/-'
      value: '--enable-ssl-passthrough'
  4. Check with the same kubectl command that the deployment is exactly the same
  5. Try to do the patch manually with
kubectl patch deployment \
  ingress-nginx-controller \
  --namespace kube-system \
  --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'
  6. Get the deployment again; now the merge has been applied.
EXPECTED RESULTS

Merge patch successfully applied as in kubectl.

ACTUAL RESULTS

result.txt

Implement "loop flattening" in `k8s`

SUMMARY

Using a loop to process a batch of files means a fork and an API call is made for each item in the loop. This adds a substantial amount of overhead that could be avoided if the list of files were "flattened" and submitted with one API call. The proposal here is to build this "flattening" logic into the k8s module itself, where optimal handling of a batch of files is trivial.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
  • k8s
ADDITIONAL INFORMATION
- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    definition: "{{ lookup('template', item) }}"
  loop: "{{ monitoring_dashboard_yamls_to_deploy }}"
  when:
    ...

While functional, using a loop to process each file means a fork and an API call is also made for each item. This adds a substantial amount of overhead that could be avoided if the files are "flattened" in one pass and submitted with one API call.

With creative use of Jinja2 syntax this can be done...

- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    definition: |
      {% for mdy in monitoring_dashboard_yamls_to_deploy %}
      ---
      {{ lookup("template", mdy) }}
      ...
      {% endfor %}
  when:
    ...

This executes significantly faster than the prior example, but it also requires a fair bit of knowledge and effort to get right.

The proposal here is to build this "flattening" logic into the k8s module itself where optimal handling of batch file processing is trivial.

The src and template parameters (and by extension template/path) would optionally accept a list of strings (file paths) instead of a single string. Seeing that a list has been passed, the module would perform the functional equivalent of the last example, processing the files and submitting them in one pass to the K8s cluster.

With this implemented, something like this should be possible with the same performance results:

- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    template: "{{ monitoring_dashboard_yamls_to_deploy }}"
  when:
    ...

Note, here the monitoring_dashboard_yamls_to_deploy variable is presumed to contain a list of file paths. The list could also have been defined explicitly in the task without a variable:

- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    template: 
      - crds/foo.yaml
      - crds/bar.yaml
      - crds/baz.yaml
      - crds/fred.yaml
  when:
    ...

Relative file paths given to src and template should be "Role-aware", similar to the template and file modules:

# my_kiali role
- name: Create the Monitoring Dashboards
  k8s:
    state: "present"
    namespace: "{{ kiali_vars.deployment.namespace }}"
    template: 
      - foo.yaml
      - bar.yaml
  when:
    ...

In this example the module would know to look in /path/to/roles/my_kiali/templates for foo.yaml and bar.yaml.

ERROR HANDLING

Consideration must be given to how errors are handled in this scenario. How should the module report and otherwise handle a processing error in the middle of a batch? Exit immediately, or continue to process the remainder? Should this be controlled by the user with a boolean param? If so, how are multiple errors reported? See ansible-collections/community.kubernetes#321, which relates to this very issue.

How can I set node labels?

SUMMARY

I would like to set node labels in a Kubernetes cluster.
(I'm writing this as I have not seen a way to do it via Ansible right now.)

I need node labels because cloud nodes have access to cloud volumes while dedicated nodes don't.
I need to be able to deploy pods to these nodes based on their labels.

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

I would like to be able to use an Ansible task to set node labels for a bunch of nodes in my inventory.
Ideally the labels would be set via a list/array variable.

Because of how the Kubernetes API works, I believe Ansible should expose two modules:

  • one for adding node labels
  • one for removing node labels

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#step-one-attach-label-to-the-node

hosts:
   worker1:
     - k8s_add_labels:
        - label1=value1
        - label2=value2
     - k8s_remove_labels:
        -  label3
        # Remove label4 only if it has value4 ?! 
        - label4=value4
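
For reference, labels can already be set today by patching the Node object with the existing k8s module; a minimal sketch, with a placeholder node name and labels (removing a label this way still requires merging a null value, which is part of why dedicated add/remove modules would be nicer):

- name: Add labels to a worker node
  community.kubernetes.k8s:
    state: present
    kind: Node
    api_version: v1
    name: worker1            # placeholder node name
    definition:
      metadata:
        labels:
          label1: value1     # placeholder labels
          label2: value2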

Playbook not fetching connection parameters from k8s.yml inventory file

SUMMARY

Use of the k8s inventory plugin with a playbook is not fetching connection parameters.
The play looks for a kubeconfig, whereas host + api_key are specified in the connection from k8s.yml.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

community.kubernetes.k8s – Kubernetes (K8s) inventory source

community.kubernetes.k8s – Manage Kubernetes (K8s) objects

ANSIBLE VERSION
ansible 2.9.6
  config file = ~/ansible/ansible.cfg
  configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
$ cat ~/.ansible/collections/ansible_collections/community/kubernetes/MANIFEST.json  | grep version
  "version": "1.2.0",
CONFIGURATION
HOST_KEY_CHECKING(~/ansible/ansible.cfg) = False
OS / ENVIRONMENT

Ubuntu 20.04.1 LTS

STEPS TO REPRODUCE

k8s.yml

plugin: community.kubernetes.k8s
connections:
  - host: https://k8smaster:6443
    api_key: MYAPIKEY
    validate_certs: false
  • At this point, I can successfully connect to my k8s cluster
$ ansible-inventory -i k8s.yml --graph
@all:
  |--@label_app_nginx:
[...]
|--@training:
  |  |--@namespace_default:
  |  |  |--@namespace_default_pods:
  |  |  |  |--nginx-f89759699-b5q5p_nginx
[...]

I created a very basic playbook to create a namespace

---
- hosts: localhost
  gather_facts: false
  connection: local

  collections:
    - community.kubernetes

  tasks:
    - name: Create myTestNamespace namespace.
      k8s:
        name: myTestNamespace
        api_version: v1
        kind: Namespace
        state: present

But when I execute the playbook specifying my inventory k8s.yml, it does not use that connection and tries to look for a kubeconfig.

EXPECTED RESULTS

I would expect the playbook to use the k8s.yml connection when run with -i k8s.yml.

ACTUAL RESULTS
$ ansible-playbook  -i playbooks/k8s.yml playbooks/kube.yml
PLAY [localhost] ****************************************************************************************************************************************************************************************************************************

TASK [Create myTestNamespace namespace.] ******************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to load kubeconfig due to Invalid kube-config file. No configuration found."}

PLAY RECAP **********************************************************************************************************************************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
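
As a workaround, the connection parameters can be passed to the module explicitly (or through the K8S_AUTH_HOST and K8S_AUTH_API_KEY environment variables), since the inventory plugin's connections section is only used while building the inventory and is not inherited by modules. A minimal sketch reusing the values from k8s.yml above:

- hosts: localhost
  gather_facts: false
  connection: local

  tasks:
    - name: Create myTestNamespace namespace with explicit connection parameters
      community.kubernetes.k8s:
        host: https://k8smaster:6443   # same endpoint as in k8s.yml
        api_key: MYAPIKEY              # same token as in k8s.yml
        validate_certs: false
        name: myTestNamespace
        api_version: v1
        kind: Namespace
        state: present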

apis being overwritten in cluster_info

SUMMARY

When I do this I seem to get one API per group:

    - name: Get APIs
      community.kubernetes.k8s_cluster_info:
      register: apis

I changed this line:
https://github.com/ansible-collections/community.kubernetes/blob/main/plugins/modules/k8s_cluster_info.py#L209
to the following, and I get the full list of APIs (kinds that share a group no longer overwrite each other):
results[resource.kind + "." + resource.group] = {

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s_cluster_info

ANSIBLE VERSION
$ ansible --version
ansible 2.9.18
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/jason/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.9.2 (default, Feb 20 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
CONFIGURATION
$ ansible-config dump --only-changed
ANSIBLE_COW_SELECTION(env: ANSIBLE_COW_SELECTION) = random
ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True
OS / ENVIRONMENT

Fedora 33

STEPS TO REPRODUCE

Run the above-mentioned task and see that you get a list with one API per group, when in many cases you should have several.

EXPECTED RESULTS

See all APIs listed.

ACTUAL RESULTS

See only a fraction of the APIs listed.

Moving this content to kubernetes.core (v2.0)

Red Hat Ansible is looking to continue furthering this content collection for downstream applications and packaging. Continuing with the "community" prefix would be confusing and communicate a different status. To address this, this repo will be transferred to a new repo named kubernetes.core. The community.kubernetes collection/repo will remain, but all future development and activity will shift to the new repo. Eventually this repo will become a remapping to the content in kubernetes.core.

This migration will be part of the v2.0 development effort.

k8s lookup return a dictionary

SUMMARY

Using the k8s lookup, a dictionary is returned; I was expecting a list.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s lookup

ANSIBLE VERSION
ansible 2.9.16
  config file = None
  configured module search path = ['/var/home/job/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /var/home/job/.local/lib/python3.8/site-packages/ansible
  executable location = /var/home/job/.local/bin/ansible
  python version = 3.8.7 (default, Dec 22 2020, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
CONFIGURATION

OS / ENVIRONMENT

Fedora 32

STEPS TO REPRODUCE
  - vars:
      namespace: osdk-pr-0-0
      php_fpm_appname: m4e-sample-php-fpm
    debug:
      msg: "{{ lookup('k8s', namespace=namespace, kind='pod', label_selector='app=' +
        php_fpm_appname, field_selector='status.phase=Running') }}"
EXPECTED RESULTS

A list, even though there is only one item.

ACTUAL RESULTS

A dict, as there is only one item:

TASK [debug] **************************************************************************************************************************
task path: /var/home/job/mydata/myrepos/m4e-operator/.idea/debug.yml:15
ok: [localhost] => {
    "msg": {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "creationTimestamp": "2021-04-04T03:59:35Z",
            "generateName": "m4e-sample-php-fpm-deploy-74c9847c79-",
            "labels": {
                "app": "m4e-sample-php-fpm",
                "app.kubernetes.io/component": "php-fpm",
                "app.kubernetes.io/instance": "m4e-sample",
                "app.kubernetes.io/managed-by": "ansible",
                "app.kubernetes.io/name": "m4e-sample-php-fpm",
                "app.kubernetes.io/part-of": "moodle",
                "app.kubernetes.io/version": "v1alpha1",
                "pod-template-hash": "74c9847c79"
            },
            "managedFields": [
                {
                    "apiVersion": "v1",
                    "fieldsType": "FieldsV1",
                    "fieldsV1": {
                        "f:metadata": {
                            "f:generateName": {},
                            "f:labels": {
                                ".": {},
                                "f:app": {},
                                "f:app.kubernetes.io/component": {},
                                "f:app.kubernetes.io/instance": {},
                                "f:app.kubernetes.io/managed-by": {},
                                "f:app.kubernetes.io/name": {},
                                "f:app.kubernetes.io/part-of": {},
                                "f:app.kubernetes.io/version": {},
                                "f:pod-template-hash": {}
                            },
                            "f:ownerReferences": {
                                ".": {},
                                "k:{\"uid\":\"2233a2d7-8cfd-4c5b-bac0-87194a702bc6\"}": {
                                    ".": {},
                                    "f:apiVersion": {},
                                    "f:blockOwnerDeletion": {},
                                    "f:controller": {},
                                    "f:kind": {},
                                    "f:name": {},
                                    "f:uid": {}
                                }
                            }
                        },
                        "f:spec": {
                            "f:containers": {
                                "k:{\"name\":\"m4e-sample-php-fpm\"}": {
                                    ".": {},
                                    "f:args": {},
                                    "f:env": {
                                        ".": {},
                                        "k:{\"name\":\"MOODLE_CONFIG_DIR\"}": {
                                            ".": {},
                                            "f:name": {},
                                            "f:value": {}
                                        },
                                        "k:{\"name\":\"PHP_FPM_LISTEN_ALLOWED_CLIENTS\"}": {
                                            ".": {},
                                            "f:name": {},
                                            "f:value": {}
                                        },
                                        "k:{\"name\":\"PHP_FPM_PROCESS_CONTROL_TIMEOUT\"}": {
                                            ".": {},
                                            "f:name": {},
                                            "f:value": {}
                                        }
                                    },
                                    "f:image": {},
                                    "f:imagePullPolicy": {},
                                    "f:livenessProbe": {
                                        ".": {},
                                        "f:exec": {
                                            ".": {},
                                            "f:command": {}
                                        },
                                        "f:failureThreshold": {},
                                        "f:initialDelaySeconds": {},
                                        "f:periodSeconds": {},
                                        "f:successThreshold": {},
                                        "f:timeoutSeconds": {}
                                    },
                                    "f:name": {},
                                    "f:ports": {
                                        ".": {},
                                        "k:{\"containerPort\":9000,\"protocol\":\"TCP\"}": {
                                            ".": {},
                                            "f:containerPort": {},
                                            "f:protocol": {}
                                        }
                                    },
                                    "f:readinessProbe": {
                                        ".": {},
                                        "f:exec": {
                                            ".": {},
                                            "f:command": {}
                                        },
                                        "f:failureThreshold": {},
                                        "f:initialDelaySeconds": {},
                                        "f:periodSeconds": {},
                                        "f:successThreshold": {},
                                        "f:timeoutSeconds": {}
                                    },
                                    "f:resources": {
                                        ".": {},
                                        "f:limits": {
                                            ".": {},
                                            "f:cpu": {},
                                            "f:memory": {}
                                        },
                                        "f:requests": {
                                            ".": {},
                                            "f:cpu": {},
                                            "f:memory": {}
                                        }
                                    },
                                    "f:terminationMessagePath": {},
                                    "f:terminationMessagePolicy": {},
                                    "f:volumeMounts": {
                                        ".": {},
                                        "k:{\"mountPath\":\"/config\"}": {
                                            ".": {},
                                            "f:mountPath": {},
                                            "f:name": {},
                                            "f:readOnly": {}
                                        },
                                        "k:{\"mountPath\":\"/var/moodledata\"}": {
                                            ".": {},
                                            "f:mountPath": {},
                                            "f:name": {}
                                        }
                                    }
                                }
                            },
                            "f:dnsPolicy": {},
                            "f:enableServiceLinks": {},
                            "f:restartPolicy": {},
                            "f:schedulerName": {},
                            "f:securityContext": {
                                ".": {},
                                "f:fsGroup": {},
                                "f:runAsUser": {}
                            },
                            "f:terminationGracePeriodSeconds": {},
                            "f:volumes": {
                                ".": {},
                                "k:{\"name\":\"config-php\"}": {
                                    ".": {},
                                    "f:name": {},
                                    "f:secret": {
                                        ".": {},
                                        "f:defaultMode": {},
                                        "f:items": {},
                                        "f:secretName": {}
                                    }
                                },
                                "k:{\"name\":\"moodledata\"}": {
                                    ".": {},
                                    "f:name": {},
                                    "f:persistentVolumeClaim": {
                                        ".": {},
                                        "f:claimName": {}
                                    }
                                }
                            }
                        }
                    },
                    "manager": "kube-controller-manager",
                    "operation": "Update",
                    "time": "2021-04-04T03:59:35Z"
                },
                {
                    "apiVersion": "v1",
                    "fieldsType": "FieldsV1",
                    "fieldsV1": {
                        "f:status": {
                            "f:conditions": {
                                "k:{\"type\":\"ContainersReady\"}": {
                                    ".": {},
                                    "f:lastProbeTime": {},
                                    "f:lastTransitionTime": {},
                                    "f:status": {},
                                    "f:type": {}
                                },
                                "k:{\"type\":\"Initialized\"}": {
                                    ".": {},
                                    "f:lastProbeTime": {},
                                    "f:lastTransitionTime": {},
                                    "f:status": {},
                                    "f:type": {}
                                },
                                "k:{\"type\":\"Ready\"}": {
                                    ".": {},
                                    "f:lastProbeTime": {},
                                    "f:lastTransitionTime": {},
                                    "f:status": {},
                                    "f:type": {}
                                }
                            },
                            "f:containerStatuses": {},
                            "f:hostIP": {},
                            "f:phase": {},
                            "f:podIP": {},
                            "f:podIPs": {
                                ".": {},
                                "k:{\"ip\":\"10.88.136.128\"}": {
                                    ".": {},
                                    "f:ip": {}
                                }
                            },
                            "f:startTime": {}
                        }
                    },
                    "manager": "kubelet",
                    "operation": "Update",
                    "time": "2021-04-04T04:00:04Z"
                }
            ],
            "name": "m4e-sample-php-fpm-deploy-74c9847c79-gpb6b",
            "namespace": "osdk-pr-0-0",
            "ownerReferences": [
                {
                    "apiVersion": "apps/v1",
                    "blockOwnerDeletion": true,
                    "controller": true,
                    "kind": "ReplicaSet",
                    "name": "m4e-sample-php-fpm-deploy-74c9847c79",
                    "uid": "2233a2d7-8cfd-4c5b-bac0-87194a702bc6"
                }
            ],
            "resourceVersion": "68158643",
            "uid": "228a69ba-28fb-42f3-973c-b3c8a261dd6f"
        },
        "spec": {
            "containers": [
                {
                    "args": [
                        "php-fpm"
                    ],
                    "env": [
                        {
                            "name": "PHP_FPM_LISTEN_ALLOWED_CLIENTS",
                            "value": "any"
                        },
                        {
                            "name": "PHP_FPM_PROCESS_CONTROL_TIMEOUT",
                            "value": "20"
                        },
                        {
                            "name": "MOODLE_CONFIG_DIR",
                            "value": "/config"
                        }
                    ],
                    "image": "quay.io/krestomatio/moodle_web",
                    "imagePullPolicy": "Always",
                    "livenessProbe": {
                        "exec": {
                            "command": [
                                "/usr/libexec/check-container-php",
                                "-t",
                                "-l"
                            ]
                        },
                        "failureThreshold": 3,
                        "initialDelaySeconds": 5,
                        "periodSeconds": 10,
                        "successThreshold": 1,
                        "timeoutSeconds": 3
                    },
                    "name": "m4e-sample-php-fpm",
                    "ports": [
                        {
                            "containerPort": 9000,
                            "protocol": "TCP"
                        }
                    ],
                    "readinessProbe": {
                        "exec": {
                            "command": [
                                "/usr/libexec/check-container-php",
                                "-t",
                                "-r"
                            ]
                        },
                        "failureThreshold": 6,
                        "initialDelaySeconds": 5,
                        "periodSeconds": 30,
                        "successThreshold": 1,
                        "timeoutSeconds": 3
                    },
                    "resources": {
                        "limits": {
                            "cpu": "1",
                            "memory": "1Gi"
                        },
                        "requests": {
                            "cpu": "150m",
                            "memory": "256Mi"
                        }
                    },
                    "terminationMessagePath": "/dev/termination-log",
                    "terminationMessagePolicy": "File",
                    "volumeMounts": [
                        {
                            "mountPath": "/var/moodledata",
                            "name": "moodledata"
                        },
                        {
                            "mountPath": "/config",
                            "name": "config-php",
                            "readOnly": true
                        },
                        {
                            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
                            "name": "default-token-k2qkg",
                            "readOnly": true
                        }
                    ]
                }
            ],
            "dnsPolicy": "ClusterFirst",
            "enableServiceLinks": true,
            "nodeName": "minikube",
            "preemptionPolicy": "PreemptLowerPriority",
            "priority": 0,
            "restartPolicy": "Always",
            "schedulerName": "default-scheduler",
            "securityContext": {
                "fsGroup": 48,
                "runAsUser": 48
            },
            "serviceAccount": "default",
            "serviceAccountName": "default",
            "terminationGracePeriodSeconds": 30,
            "tolerations": [
                {
                    "effect": "NoExecute",
                    "key": "node.kubernetes.io/not-ready",
                    "operator": "Exists",
                    "tolerationSeconds": 300
                },
                {
                    "effect": "NoExecute",
                    "key": "node.kubernetes.io/unreachable",
                    "operator": "Exists",
                    "tolerationSeconds": 300
                }
            ],
            "volumes": [
                {
                    "name": "moodledata",
                    "persistentVolumeClaim": {
                        "claimName": "m4e-sample-moodle-data"
                    }
                },
                {
                    "name": "config-php",
                    "secret": {
                        "defaultMode": 420,
                        "items": [
                            {
                                "key": "config.php",
                                "path": "config.php"
                            }
                        ],
                        "secretName": "m4e-sample-moodle-secret"
                    }
                },
                {
                    "name": "default-token-k2qkg",
                    "secret": {
                        "defaultMode": 420,
                        "secretName": "default-token-k2qkg"
                    }
                }
            ]
        },
        "status": {
            "conditions": [
                {
                    "lastProbeTime": null,
                    "lastTransitionTime": "2021-04-04T03:59:36Z",
                    "status": "True",
                    "type": "Initialized"
                },
                {
                    "lastProbeTime": null,
                    "lastTransitionTime": "2021-04-04T04:00:04Z",
                    "status": "True",
                    "type": "Ready"
                },
                {
                    "lastProbeTime": null,
                    "lastTransitionTime": "2021-04-04T04:00:04Z",
                    "status": "True",
                    "type": "ContainersReady"
                },
                {
                    "lastProbeTime": null,
                    "lastTransitionTime": "2021-04-04T03:59:36Z",
                    "status": "True",
                    "type": "PodScheduled"
                }
            ],
            "containerStatuses": [
                {
                    "containerID": "docker://b77ee173496abb37a0092d2869cf1efb59c2af43adbf5ecab1d256e0108e9836",
                    "image": "quay.io/krestomatio/moodle_web:latest",
                    "imageID": "docker-pullable://quay.io/krestomatio/moodle_web@sha256:48d069f0bc5301a547966ea20050d1b3fcfeb8dce871c65230cf8d6d62eb867c",
                    "lastState": {},
                    "name": "m4e-sample-php-fpm",
                    "ready": true,
                    "restartCount": 0,
                    "started": true,
                    "state": {
                        "running": {
                            "startedAt": "2021-04-04T03:59:38Z"
                        }
                    }
                }
            ],
            "hostIP": "192.168.39.222",
            "phase": "Running",
            "podIP": "10.88.136.128",
            "podIPs": [
                {
                    "ip": "10.88.136.128"
                }
            ],
            "qosClass": "Burstable",
            "startTime": "2021-04-04T03:59:36Z"
        }
    }
}
META: ran handlers
META: ran handlers
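
As a side note, query() (or lookup(..., wantlist=True)) always returns a list from a lookup, which can serve as a workaround until the lookup's return type is settled; a sketch based on the reproduction above:

  - vars:
      namespace: osdk-pr-0-0
      php_fpm_appname: m4e-sample-php-fpm
    debug:
      msg: "{{ query('k8s', namespace=namespace, kind='pod', label_selector='app=' +
        php_fpm_appname, field_selector='status.phase=Running') }}"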

issue reading kubeconfig

Seems similar to #56

SUMMARY

The module is not able to load the kube config because loading is attempted twice: first with the module parameters and then without, which leads the module to fail.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s

ANSIBLE VERSION

CONFIGURATION
ansible version : devel
kubernetes.core version : latest (1.2.0)
kubernetes version : 12.0.1
OS / ENVIRONMENT
STEPS TO REPRODUCE
- name: k8s read from configuration
  hosts: localhost
  gather_facts: false
  vars:
    kube_config: /home/runner/k8s/kube0.config
  tasks:
    - name: Ensure namespace is created
      kubernetes.core.k8s:
        kind: namespace
        name: ansible
        kubeconfig: "{{ kube_config }}"
EXPECTED RESULTS

namespace successfully created

ACTUAL RESULTS
PLAY [k8s merge type] *******************************************************************************************************************************************************

TASK [Ensure namespace is created] ******************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_kubernetes.core.k8s_payload_1qsulbi9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py\", line 163, in get_api_client\n  File \"/home/aubin/work/common/env/py38/lib/python3.8/site-packages/kubernetes/config/incluster_config.py\", line 118, in load_incluster_config\n    InClusterConfigLoader(\n  File \"/home/aubin/work/common/env/py38/lib/python3.8/site-packages/kubernetes/config/incluster_config.py\", line 54, in load_and_set\n    self._load_config()\n  File \"/home/aubin/work/common/env/py38/lib/python3.8/site-packages/kubernetes/config/incluster_config.py\", line 62, in _load_config\n    raise ConfigException(\"Service host/port is not set.\")\nkubernetes.config.config_exception.ConfigException: Service host/port is not set.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/home/aubin/.ansible/tmp/ansible-tmp-1619100985.295542-2104393-273223126777110/AnsiballZ_k8s.py\", line 100, in <module>\n    _ansiballz_main()\n  File \"/home/aubin/.ansible/tmp/ansible-tmp-1619100985.295542-2104393-273223126777110/AnsiballZ_k8s.py\", line 92, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/home/aubin/.ansible/tmp/ansible-tmp-1619100985.295542-2104393-273223126777110/AnsiballZ_k8s.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.kubernetes.core.plugins.modules.k8s', init_globals=dict(_module_fqn='ansible_collections.kubernetes.core.plugins.modules.k8s', _modlib_path=modlib_path),\n  File \"/usr/local/lib/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/local/lib/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/local/lib/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_kubernetes.core.k8s_payload_1qsulbi9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/modules/k8s.py\", line 348, in <module>\n  File \"/tmp/ansible_kubernetes.core.k8s_payload_1qsulbi9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/modules/k8s.py\", line 344, in main\n  File \"/tmp/ansible_kubernetes.core.k8s_payload_1qsulbi9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/modules/k8s.py\", line 328, in execute_module\n  File \"/tmp/ansible_kubernetes.core.k8s_payload_1qsulbi9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py\", line 501, in execute_module\n  File \"/tmp/ansible_kubernetes.core.k8s_payload_1qsulbi9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py\", line 168, in get_api_client\n  File \"/tmp/ansible_kubernetes.core.k8s_payload_1qsulbi9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py\", line 129, in _raise_or_fail\n  File \"/tmp/ansible_kubernetes.core.k8s_payload_1qsulbi9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py\", line 166, in get_api_client\n  File \"/home/aubin/work/common/env/py38/lib/python3.8/site-packages/kubernetes/config/kube_config.py\", line 792, in 
load_kube_config\n    loader = _get_kube_config_loader(\n  File \"/home/aubin/work/common/env/py38/lib/python3.8/site-packages/kubernetes/config/kube_config.py\", line 751, in _get_kube_config_loader\n    raise ConfigException(\nkubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP ******************************************************************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

helm uninstall does not support `--wait`

SUMMARY

The wait on state: absent returns too fast:

"Error: failed to create resource: configmaps "env-cm" is forbidden: unable to create new content in namespace qa-ccc because it is being terminated"

ISSUE TYPE
  • Bug Report
COMPONENT NAME

helm

STEPS TO REPRODUCE
- name: uninstall helm chart
  helm:
    chart_ref: "{{mount_point}}{{chart_ref}}"
    state: absent
    name: "{{helm_deploy_name}}"
    namespace: default
    wait: true
  tags:
    - deploy_helm
  collections:
    - community.kubernetes
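
In the meantime, one interim workaround is to poll with k8s_info until the resources released by the chart are actually gone before creating anything new; a sketch that polls the terminating namespace from the error above:

- name: Wait for the namespace to finish terminating
  community.kubernetes.k8s_info:
    kind: Namespace
    name: qa-ccc            # placeholder; the terminating namespace from the error message
  register: ns_info
  until: ns_info.resources | length == 0
  retries: 30
  delay: 10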

Ability to wait on arbitrary status value

SUMMARY

Not all k8s resources implement conditions in their status. However, it may still be necessary to wait for them to be ready. Since operations in K8s are asynchronous, a later item depending on this may fail.

A good example is the kind Service. This does not implement conditions, as shown in kubernetes/kubernetes#80828. My Ansible may need to wait for this Service to have acquired an external IP address before using it in a further task, perhaps feeding that IP into a network infrastructure rule (firewall, DNS or other).

A similar ticket exists for the kubectl command, kubernetes/kubernetes#83094: waiting for an arbitrary JSON path.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

module k8s

ADDITIONAL INFORMATION
- name: Deploy the dashboard service (lb)
  community.kubernetes.k8s:
    <<: *k8s_auth
    template: dash-service.yaml
    wait: yes
    wait_for: .status.loadBalancer.ingress[*].ip

Something like that could wait for an ingress IP to appear. Perhaps inspiration could come from kubernetes/kubernetes#83094 (comment).
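
In the meantime, a similar effect can be approximated by polling with k8s_info and until; a sketch with placeholder service name and namespace:

- name: Wait for the dashboard Service to get an ingress IP
  community.kubernetes.k8s_info:
    kind: Service
    namespace: default      # placeholder namespace
    name: dashboard         # placeholder service name
  register: svc
  until: (svc.resources | length > 0) and (svc.resources[0].status.loadBalancer.ingress | default([]) | length > 0)
  retries: 30
  delay: 10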

support generateName or utils like that

SUMMARY

In order to make sure that resources created by the k8s module have a unique name, there is some work involved on the Ansible side. The Kubernetes API server has a metadata.generateName field which can be used to create a name that stays within the maximum characters allowed for the resource. It would be great if the k8s module allowed creating resources with generateName so the user does not have to take care of the limits of metadata.name.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

k8s

ADDITIONAL INFORMATION
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  generateName: utilitypod-
  namespace: blah-dev
  labels:
    purpose: utility-pod
spec:
  containers:
    - name: utilitypod
      image: blahblah/utilitypod:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 28800; done;" ]
      env:
        - name: KUBERNETES_SERVICE_HOST
          value: "api.dev.blah.internal"
        - name: KUBERNETES_SERVICE_PORT
          value: "443"
EOF

Taken from this Stack Overflow thread: https://stackoverflow.com/questions/48023475/add-random-string-on-kubernetes-pod-deployment-name
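
For comparison, a hypothetical sketch of what the equivalent could look like through the k8s module if metadata.generateName were honoured; this is the requested behaviour, not something the module does today:

# Hypothetical: relies on the requested generateName support
- name: Create a utility pod with a server-generated name
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Pod
      metadata:
        generateName: utilitypod-
        namespace: blah-dev
        labels:
          purpose: utility-pod
      spec:
        containers:
          - name: utilitypod
            image: blahblah/utilitypod:latest
            command: ["/bin/bash", "-c", "--"]
            args: ["while true; do sleep 28800; done;"]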

Proxy param does not accept authentication header

SUMMARY

When setting a proxy URL in the proxy param, the username:password portion is not used; only the URL itself is read, so the proxy returns a 407 if authentication is required.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s

ANSIBLE VERSION
ansible 2.9.0
   config file = /home/user/.ansible.cfg
   configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
   ansible python module location = /home/user/projects/python_envs/ansible-env/lib/python3.6/site-packages/ansible
   executable location = /home/user/projects/python_envs/ansible-env/bin/ansible
   python version = 3.6.9 (default, Jan 26 2021, 15:33:00)
CONFIGURATION
DEFAULT_ROLES_PATH(/home/user/.ansible.cfg) = ['/home/user/.ansible/roles', '/usr/share/ansible/roles', '/etc/ansible/roles']
RETRY_FILES_ENABLED(/home/user/.ansible.cfg) = False
OS / ENVIRONMENT

WSL Ubuntu 18.03
Python 3.6 virtual environment

STEPS TO REPRODUCE

With a proxy URL such as http://USERNAME:PASSWORD@<proxy-host>:3001, the USERNAME:PASSWORD credentials are not loaded, resulting in a 401 due to authentication being required by the proxy.

- name: Set up namespace
  hosts: localhost
  tasks:
    - name: create namespace
      k8s:
        host: "https://kube-cluster.com"
        api_key: "api_token"
        proxy: "http://USERNAME:PASSWORD@<proxy-host>:3001"
        state: present
        name: "new-default-namespace"
        api_version: "v1"
        kind: Namespace
EXPECTED RESULTS

Expected for the namespace to be created.

ACTUAL RESULTS

Received a 407 from proxy server.

kubernetes.client.rest.ApiException: (401)
Reason: authenticationrequired

The rest of the output is proxy server error being returned.

Support the selector option in the k8s module when applying service definitions

SUMMARY

This is a copy of the following GH issue: ansible/ansible#67316.

The k8s module should support specifying selectors when applying resource definitions, similar to how kubectl supports it.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

k8s module

ADDITIONAL INFORMATION

Some commonly used projects, such as Knative, require you to apply the Custom Resource Definitions (CRDs) to the K8s cluster, before you can deploy the rest of the service definition.

See: https://knative.dev/v0.11-docs/install/knative-with-any-k8s/#installing-knative.

Without support for specifying selectors, this is not possible in Ansible without using a 3rd-party binary (kubectl).

# example 
- name: Install Knative CRDs                                                                                                                                                                                      
  k8s:                                   
    state: present
    src: "{{ item.dest }}"
    context: "{{ k8s_cluster | mandatory }}" 
    # this does not work
    selector:
      - 'knative.dev/crd-install=true'
    validate:
      fail_on_error: yes                                                                                                                                                                                   
  with_items:
    - "{{ knative_install_files.results }}"

# output

msg: 'Unsupported parameters for (k8s) module: selector Supported parameters include: api_key, api_version, append_hash, apply, ca_cert, client_cert, client_key, context, force, host, kind, kubeconfig, merge_type, name, namespace, password, proxy, resource_definition, src, state, username, validate, validate_certs, wait, wait_condition, wait_sleep, wait_timeout'

Sample kubectl command utilizing selectors:

kubectl apply --selector knative.dev/crd-install=true \
--filename https://github.com/knative/serving/releases/download/v0.11.0/serving.yaml \
--filename https://github.com/knative/eventing/releases/download/v0.11.0/release.yaml \
--filename https://github.com/knative/serving/releases/download/v0.11.0/monitoring.yaml

Helm Post Rendering support

SUMMARY

Helm Post Rendering support

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

community.kubernetes.helm

ADDITIONAL INFORMATION

Post rendering gives chart installers the ability to manually manipulate, configure, and/or validate rendered manifests before they are installed by Helm. It would be great to support this in the helm module. A use case would be to hand over a post-renderer script (bash, python) that executes a kustomize customization. More information: https://helm.sh/docs/topics/advanced/#post-rendering
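
A hypothetical sketch of how such an option might look on the module; the post_renderer parameter shown here is the feature being requested, not an existing option:

# Hypothetical: post_renderer is the requested option, not an existing module parameter
- name: Deploy a chart through a kustomize post-renderer script
  community.kubernetes.helm:
    name: my-release                # placeholder release name
    chart_ref: stable/mychart       # placeholder chart reference
    release_namespace: default
    post_renderer: ./kustomize-wrapper.sh   # executable that rewrites rendered manifests on stdin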

helm: has_plugin fails when helm is outputting multiline description

SUMMARY

We get an exception in has_plugin in
community.kubernetes/blob/main/plugins/modules/helm.py, line 421.
When testing for helm plugins, this is our helm output:

bash-5.1# helm plugin list
NAME            VERSION DESCRIPTION
registry        0.7.0   This plugin provides app-registry client to Helm.
                        usage:
                          $ helm reg...

The failing output line is "usage:". This results in ValueError: not enough values to unpack (expected 2, got 1), because everything around it gets trimmed.

As a workaround, I just ignore every line that yields only a single value:

def has_plugin(command, plugin):
    """
    Check if helm plugin is installed.
    """

    cmd = command + " plugin list"
    rc, out, err = run_helm(module, cmd)
    for line in out.splitlines():
        if line.startswith("NAME"):
            # skip the table header
            continue
        try:
            name, _rest = line.split(None, 1)
        except ValueError:
            # continuation lines of a multi-line description carry only a
            # single column and cannot be a plugin name, so skip them
            continue
        if name == plugin:
            return True
    return False

ISSUE TYPE
  • Bug Report
COMPONENT NAME

community.kubernetes/blob/main/plugins/modules/helm.py 421

ANSIBLE VERSION
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.8/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.8 (default, Mar 15 2021, 13:10:14) [GCC 10.2.1 20201203]


helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
CONFIGURATION

OS / ENVIRONMENT
STEPS TO REPRODUCE
EXPECTED RESULTS
ACTUAL RESULTS
TASK [Deploying Helm Charts] ***************************************************
skipping: [controller] => (item={'repo': 'dev', 'name': 'ads-manager', 'releaseSuffix': '', 'version': '0.1.2'}) 
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: not enough values to unpack (expected 2, got 1)
failed: [controller] (item={'repo': 'dev', 'name': '****-env', 'releaseSuffix': '-pr-41', 'version': '38.6.0-pr-41-14'}) => {"ansible_loop_var": "item", "changed": false, "item": {"name": "****-env", "releaseSuffix": "-pr-41", "repo": "dev", "version": "38.6.0-pr-41-14"}, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1617001631.0870662-124-53740490907834/AnsiballZ_helm.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1617001631.0870662-124-53740490907834/AnsiballZ_helm.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1617001631.0870662-124-53740490907834/AnsiballZ_helm.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.community.kubernetes.plugins.modules.helm', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_community.kubernetes.helm_payload_viqlvhh3/ansible_community.kubernetes.helm_payload.zip/ansible_collections/community/kubernetes/plugins/modules/helm.py\", line 641, in <module>\n  File \"/tmp/ansible_community.kubernetes.helm_payload_viqlvhh3/ansible_community.kubernetes.helm_payload.zip/ansible_collections/community/kubernetes/plugins/modules/helm.py\", line 587, in main\n  File \"/tmp/ansible_community.kubernetes.helm_payload_viqlvhh3/ansible_community.kubernetes.helm_payload.zip/ansible_collections/community/kubernetes/plugins/modules/helm.py\", line 421, in has_plugin\nValueError: not enough values to unpack (expected 2, got 1)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
skipping: [controller] => (item={'repo': 'dev', 'name': 'spring-boot-demo', 'releaseSuffix': '', 'version': '0.1.0'}) 
skipping: [controller] => (item={'repo': 'dev', 'name': 'kickstarter-ui', 'releaseSuffix': '', 'version': '2.9.3'}) 

Unable to connect openshift using inventory plugin

SUMMARY
ISSUE TYPE
  • Bug
COMPONENT NAME
ANSIBLE VERSION
$ ansible --version               
ansible 2.10.7
  config file = /home/savsingh/openshift/ansible.cfg
  configured module search path = ['/home/savsingh/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/savsingh/virtualenvs/openshfit-test2/lib64/python3.6/site-packages/ansible
  executable location = /home/savsingh/virtualenvs/openshfit-test2/bin/ansible
  python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
CONFIGURATION

OS / ENVIRONMENT
STEPS TO REPRODUCE
$ cat openshift.yaml                             
plugin: redhat.openshift.openshift
connections:
  - host: https://api.ocp4.example.com:6443/
    api_key: sha256~zqxxxxxxx-T
    validate_certs: false
EXPECTED RESULTS

inventory should gather the information about the objects in openshift.

ACTUAL RESULTS
$ ansible-inventory all -i openshift.yaml --list
[WARNING]:  * Failed to parse /home/savsingh/openshift/openshift.yaml with auto plugin: 404 Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Audit-Id': '4b09d246-0fbe-46c2-823d-8609472afd06', 'Cache-Control': 'no-
cache, private', 'Content-Type': 'application/json', 'Date': 'Mon, 12 Apr 2021 11:24:29 GMT', 'Content-Length':
'1228'}) HTTP response body: b'{\n  "paths": [\n    "/apis",\n    "/apis/",\n    "/apis/apiextensions.k8s.io",\n
"/apis/apiextensions.k8s.io/v1",\n    "/apis/apiextensions.k8s.io/v1beta1",\n    "/healthz",\n
"/healthz/etcd",\n    "/healthz/log",\n    "/healthz/ping",\n    "/healthz/poststarthook/crd-informer-synced",\n
"/healthz/poststarthook/generic-apiserver-start-informers",\n    "/healthz/poststarthook/start-apiextensions-
controllers",\n    "/healthz/poststarthook/start-apiextensions-informers",\n    "/livez",\n    "/livez/etcd",\n
"/livez/log",\n    "/livez/ping",\n    "/livez/poststarthook/crd-informer-synced",\n
"/livez/poststarthook/generic-apiserver-start-informers",\n    "/livez/poststarthook/start-apiextensions-
controllers",\n    "/livez/poststarthook/start-apiextensions-informers",\n    "/metrics",\n    "/openapi/v2",\n
"/readyz",\n    "/readyz/etcd",\n    "/readyz/informer-sync",\n    "/readyz/log",\n    "/readyz/openshift-
apiservices-available",\n    "/readyz/ping",\n    "/readyz/poststarthook/crd-informer-synced",\n
"/readyz/poststarthook/generic-apiserver-start-informers",\n    "/readyz/poststarthook/start-apiextensions-
controllers",\n    "/readyz/poststarthook/start-apiextensions-informers",\n    "/readyz/shutdown",\n
"/version"\n  ]\n}' Original traceback:    File "/home/savsingh/virtualenvs/openshfit-
test2/lib64/python3.6/site-packages/openshift/dynamic/client.py", line 42, in inner     resp = func(self, *args,
**kwargs)    File "/home/savsingh/virtualenvs/openshfit-test2/lib64/python3.6/site-
packages/openshift/dynamic/client.py", line 247, in request
_return_http_data_only=params.get('_return_http_data_only', True)    File "/home/savsingh/virtualenvs/openshfit-
test2/lib64/python3.6/site-packages/kubernetes/client/api_client.py", line 353, in call_api
_preload_content, _request_timeout, _host)    File "/home/savsingh/virtualenvs/openshfit-
test2/lib64/python3.6/site-packages/kubernetes/client/api_client.py", line 184, in __call_api
_request_timeout=_request_timeout)    File "/home/savsingh/virtualenvs/openshfit-test2/lib64/python3.6/site-
packages/kubernetes/client/api_client.py", line 377, in request     headers=headers)    File
"/home/savsingh/virtualenvs/openshfit-test2/lib64/python3.6/site-packages/kubernetes/client/rest.py", line 243,
in GET     query_params=query_params)    File "/home/savsingh/virtualenvs/openshfit-test2/lib64/python3.6/site-
packages/kubernetes/client/rest.py", line 233, in request     raise ApiException(http_resp=r)
[WARNING]:  * Failed to parse /home/savsingh/openshift/openshift.yaml with yaml plugin: Plugin configuration
YAML file, not YAML inventory
[WARNING]:  * Failed to parse /home/savsingh/openshift/openshift.yaml with ini plugin: Invalid host pattern
'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
[WARNING]: Unable to parse /home/savsingh/openshift/openshift.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}

Missing requirements.txt file

SUMMARY

Both the redhat.openshift and kubernetes.core collections are missing a requirements.txt file in the collection.
The files are needed to create a certified execution environment that includes both of the certified collections. I am not sure which is the correct place to file this bug, so I am taking a best guess. Please correct me if I am wrong @tima @gravesm @Akasurde

openshift inventory fails with traceback

SUMMARY

The OpenShift inventory from redhat.openshift and community.okd fails with the following error:

'InventoryModule' object has no attribute 'fail_json'
ISSUE TYPE
  • Bug Report
COMPONENT NAME

plugins/module_utils/common.py

ANSIBLE VERSION
2.10
CONFIGURATION
OS / ENVIRONMENT
openshift           0.12.0
kubernetes          12.0.1

Collection version

kubernetes.core               1.2.0
community.okd                 1.1.2
STEPS TO REPRODUCE
ansible-inventory -i k8s.yml --list -vv
EXPECTED RESULTS

Inventory returns information about pods

ACTUAL RESULTS
[WARNING]:  * Failed to parse /home/vagrant/k8s.yml with auto plugin: 'InventoryModule' object
has no attribute 'fail_json'
[WARNING]:  * Failed to parse /home/vagrant/k8s.yml with yaml plugin: Plugin configuration
YAML file, not YAML inventory
[WARNING]:  * Failed to parse /home/vagrant/k8s.yml with ini plugin: Invalid host pattern
'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a
port.
[WARNING]: Unable to parse /home/vagrant/k8s.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}

Support globally the "ansible_kubectl_context" variable

SUMMARY

Just as many modules support the K8S_AUTH_CONTEXT environment variable, globally supporting the ansible_kubectl_context variable would be handy.

Such as here:
https://github.com/ansible-collections/community.kubernetes/blob/0377a892d5ce7ee39ad683d80be8be58dbef7360/plugins/connection/kubectl.py#L83-L90

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

community.kubernetes.*

ADDITIONAL INFORMATION

When there are multiple contexts available, instead of always having to specify the context on every task/modules, just as the K8S_AUTH_CONTEXT allows to set a global context used on all k8s operations.

- name: Foo
  k8s:
    state: present
    context: my_context  # ← This line
    namespace: default
    definition:
      …
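
For reference, the variable is already honoured by the kubectl connection plugin when set per host, for example in an inventory (a sketch only; the host name and context value are illustrative):

all:
  hosts:
    my-pod:
      ansible_connection: kubectl
      ansible_kubectl_context: my_context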

not possible to replace/update objects via "apply: yes" or "force: yes" in OpenShift 3.11

SUMMARY

There are actually 2 problems we face. The first one is that large definitions are not updated at all when apply: yes is used. We have a file with a CRD definition that has over 7500 lines. Ansible does not show any errors and it looks like the update was made. However, after inspecting the CRD on the server, you can see that no update took place. You can use the Kafka CRD from the Strimzi project at https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.21.1/strimzi-crds-0.21.1.yaml to reproduce the issue. We tried force: yes, but you can't really use it because the resourceVersion property is missing in the definition yet is needed for force: yes.

The second problem is that objects are not really updated when using apply: yes (and yet again we can't use force: yes because the resourceVersion property is missing in the definition). We have a definition like the following one (few fields removed for simplification) on our OpenShift 3.11 cluster.

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-name
  namespace: my-name-space
spec:
  entityOperator:
    template:
      tlsSidecarContainer:
        env:
          - name: TZ
            value: Europe/Zurich
[...]

Our new definition does not contain the path spec.entityOperator.template.tlsSidecarContainer anymore but after the following task runs, it is still in place and not removed on the server.

- name: deploy cluster
  k8s:
    namespace: "{{ project }}"
    state: present
    apply: yes
    definition: "{{ lookup('template', 'definition.j2') }}"
ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s module

ANSIBLE VERSION
ansible 2.10.5
  config file = ~/Code/Ansible/playbooks/ansible.cfg
  configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = ~/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/ansible
  executable location = ~/.pyenv/versions/default/bin/ansible
  python version = 3.9.1 (default, Jan 30 2021, 21:53:22) [Clang 12.0.0 (clang-1200.0.32.29)]
CONFIGURATION
ANSIBLE_PIPELINING(~/Code/Ansible/playbooks/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(~/Code/Ansible/playbooks/ansible.cfg) = True
DEFAULT_ROLES_PATH(~/Code/Ansible/playbooks/ansible.cfg) = ['/Users/b0rski/Code/Ansible/playbooks/roles']
DEFAULT_STDOUT_CALLBACK(~/Code/Ansible/playbooks/ansible.cfg) = yaml
INTERPRETER_PYTHON(~/Code/Ansible/playbooks/ansible.cfg) = ~/.pyenv/shims/python
OS / ENVIRONMENT

MacOS X Big Sur and Redhat Linux 7

STEPS TO REPRODUCE

Please use files and examples from description above (Strimzi CRDs).

EXPECTED RESULTS

We expect that paths get deleted if they're not defined in a definition anymore. No merge but replace must be done.

ACTUAL RESULTS

As described above fields are not removed and remain untouched in the definition on server.

K8s module doesn't use context namespace, which adds complexity in code

ansible/ansible#51242 (comment)
ansible/ansible#69193

SUMMARY

kubectl and oc assume a default namespace for resources on which you do not define a "metadata.namespace" attribute. This is the default behaviour of kubectl, as it uses the namespace defined in the context used to authenticate to the Kubernetes cluster.

A use case for leaving the namespace unspecified on the resource being passed is that you might want to deploy those resources in different namespaces depending on the context you're in (test namespace, production namespace, dev namespace, etc.).

This is a pretty usual case where you want to ensure the resource works for a certain scenario in a development cluster and then move to a production cluster which uses a different namespace. It eases code complexity as you use the kubectl context (which uses the "default" namespace by default, but you can point it to the namespace you want).

A more extended use case is to have test cases within the same cluster you're using: a user with namespaced rights on the test namespace but no cluster-wide access. The context is namespaced and can do pretty much anything a cluster-admin could, but only in that namespace (create and patch, but only for that namespace). This limits the blast radius of an error to that namespace while still letting you test the behaviour of your code. It also prevents extended access into other namespaces.

I would like to re-open the discussion as I believe these strategies are relevant ones, e.g. in combination with the GitLab Kubernetes integration, which defines a context for you to execute kubectl commands and which encompasses the namespace as well (as is the case with kubectl and oc).
It's also unexpected behaviour when a resource that you were able to deploy with kubectl and oc cannot be deployed with the Ansible k8s module.

If the decision stays "it has to be explicitly defined and maintained in the code", then I believe the documentation should also be explicit and be updated to say that every resource requiring a namespace must have it defined in the "resource definition" or the "namespace" field.

ISSUE TYPE
  • Feature Idea and/or documentation clarification
COMPONENT NAME

The k8s module should use the namespace from the context when none is passed, to match the behaviour of the tools it wraps (kubectl and oc).

ADDITIONAL INFORMATION

Here is an example of a ConfigMap that could be used with a deployment:

ansible-playbook example.yml
- name: a ConfigMap for an app without metadata.namespace specified
  k8s:
    state: present
    definition: "{{ lookup('template', 'cm.yaml.j2') }}"

cm.yaml.j2

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config
data:
  somedata: "Hello"

Current situation:

ansible-playbook example.yml -e namespace=$(kubectl config view --minify --output 'jsonpath={..namespace}')
- name: a ConfigMap for an app without metadata.namespace specified
  k8s:
    state: present
    definition: "{{ lookup('template', 'cm.yaml.j2') }}"

cm.yaml.j2

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config
  namespace: "{{ namespace }}"
data:
  somedata: "Hello"

some more references;
https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration

playbook in error when using openshift lib 0.12

SUMMARY

Error encountered when using the openshift 0.12 package and community.kubernetes.k8s. See attached file: error_with_openshift_0.12.json.txt
community.kubernetes.k8s works well with the openshift 0.11.2 package. See attached file: ok_with_openshift_0.11.2.json.txt
error_with_openshift_0.12.json.txt
ok_with_openshift_0.11.2.json.txt

ISSUE TYPE
  • Bug Report
COMPONENT NAME

community.kubernetes.k8s

ANSIBLE VERSION

Virtual Env with openshift 0.11.2 and 0.12.0 run both:

ansible 2.9.18
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /opt/custom-envs/cloudbuilder-collection2/lib/python2.7/site-packages/ansible
  executable location = /opt/custom-envs/cloudbuilder-collection2/bin/ansible
  python version = 2.7.5 (default, Aug 13 2020, 02:51:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
# ansible-config dump --only-changed
/opt/custom-envs/cloudbuilder-collection2/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in the next release.
  from cryptography.exceptions import InvalidSignature

OS / ENVIRONMENT
# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.9 (Maipo)
STEPS TO REPRODUCE

https://github.com/nergalex/f5-aks-nginx_ingress_app_protect/blob/master/playbooks/roles/poc-k8s/tasks/create_nap_log_format.yaml

EXPECTED RESULTS

See attached file: ok_with_openshift_0.11.2.json.txt

ACTUAL RESULTS

See attached file: ok_with_openshift_0.11.2.json.txt

Failed to load kubeconfig due to Invalid kube-config file

SUMMARY

When the k8s_info module runs on a non-controller host, it looks for a kubeconfig file on the controller host but returns an "invalid kube-config file" error even though the kubeconfig file on the controller is valid.

To demonstrate this I have two scenarios to compare: the first scenario (CI) seems to work by coincidence, while the second scenario (local development) reveals the bug.

CI Scenario:

  • I have a CI environment where two new servers are terraformed, one as control box, the other as target box.
  • CI will run the ansible playbook from the control box and deploy a kubernetes cluster to the target box as part of the playbook.
  • As part of the playbook, the target box's kubeconfig file ~centos/.kube/config is copied to the control box at the same location ~centos/.kube/config near the end of the play.
  • At all times everything runs as 'centos' user.
  • When I run a k8s_info action in this scenario with hosts: server (target), specifying the kubeconfig file, I am able to confirm certain objects exist in my cluster. This works as intended.
    whoami on all boxes would return centos and ansible_user would also return centos. The 2 boxes are basically the same, just different 'purpose'. This is the only scenario that works.

Local Development Scenario

  • I have a developer computer which acts as a control box; I am logged in as myself, e.g. daniel. There is a target box configured the same way as in the CI scenario (with a centos user).
  • When I run the installation, I set ansible-playbook .... -u centos, so the ssh user is centos but I run the installer as daniel.
    whoami on localhost would return daniel and anywhere else it would return centos.
    As above, the kubeconfig file is in its place on the target machine and, as before, gets copied to the control box. Note that in the CI scenario the copy went to an identical folder, ~centos/.kube/config, but in this scenario it's copied to ~daniel/.kube/config.

The documentation does not state where the kubeconfig file must be located, nor does it state that the k8s_info must be run on the control box. So up till now I had no reason to think anything was wrong. When my k8s_info task ran on the target machine, I assumed it was using the target machine kubeconfig (not the controlbox kubeconfig).

When I try to run a k8s_info action in this scenario with hosts: server (target), specifying the kubeconfig file as ~centos/.kube/config, it says the file cannot be found on the Ansible control machine (of course, it's now located at ~daniel/.kube/config on the control machine).

This suggests that regardless of the hosts, the role is expecting to find the kubeconfig file on the control machine. Can you confirm this is the case?

Assuming this is true, if I tell the installer to use ~daniel/.kube/config (which exists on the control machine), with hosts: server then it tells me that the config isn't valid!

The only scenario that works with hosts: server is if my target and control both have a kubeconfig file in the same location.

This seems to be a bug as

  • I would have expected my config to be considered valid as below.
  • I would expect the operation to work in locations other than the control machine.

Can you clarify the following:

  • Where can these tasks run, localhost only? or anywhere?
  • Which kubeconfig file is used, control? or the current inventory_host?
  • Has this behaviour changed recently?
failed: [10.50.52.94] (item={'name': 'coredns', 'quantity': 1}) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"name": "coredns", "quantity": 1}, "msg": "Failed to load kubeconfig due to Invalid kube-config file. No configuration found."}
ls ~/.kube/config
/home/daniel/.kube/config
cat ~/.kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: redactedbase64==
    server: https://10.50.52.94:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: redacted
    username: redacted

This is all very strange: it causes the playbook to fail, yet I am able to cat the file, it's perfectly valid, and it works with kubectl.

Any ideas?

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s_info

ANSIBLE VERSION
[WARNING]: Ansible is being run in a world writable directory (/home/daniel/projects/prom/project), ignoring it as an ansible.cfg source. For more
information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
ansible-playbook 2.10.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/daniel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/daniel/.pex/installed_wheels/04c26471cb05787fcd8372d2f2bea63afb042678/ansible_base-2.10.6-py3-none-any.whl/ansible
  executable location = ansible-playbook
  python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]

CONFIGURATION

OS / ENVIRONMENT

centos 7.9

STEPS TO REPRODUCE
- name: Copy kubeconfig from the server to the control machine
  fetch:
   # in CI both control_user_id and ansible_user are centos, in development, control_user_id is daniel, ansible_user is centos
    src: ~{{ ansible_user }}/.kube/config
    dest: ~{{ control_user_id }}/.kube/config
    flat: yes
  become: no

# SNIP
# run once on host 'server' (target)
- name: "Verify k3s deployment"
  community.kubernetes.k8s_info:
    kind: Deployment
    wait: yes
    name: "{{ item.name }}"
    namespace: kube-system
    wait_timeout: 360
    wait_sleep: 10
   #apparently this kubeconfig file is supposed to be on the control box, can this be added to the docs?
    kubeconfig: "~{{ control_user_id }}/.kube/config"   #in development control_user_id  this is ~daniel in CI its centos
  register: deployment_status
  run_once: true
  become: yes
  until: (deployment_status.resources[0].status.readyReplicas | default(0) == item.quantity)
  retries: 5
  loop:
    - { name: 'coredns', quantity: 1 }
EXPECTED RESULTS

I would have expected my kubeconfig file to be considered valid.

ACTUAL RESULTS
failed: [10.50.52.94] (item={'name': 'coredns', 'quantity': 1}) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"name": "coredns", "quantity": 1}, "msg": "Failed to load kubeconfig due to Invalid kube-config file. No configuration found."}

Add context argument to k8s_cluster_info module

SUMMARY

Add context argument to the k8s_cluster_info module so I can get the relevant cluster's info (like version)

Currently this module only gives me the option to get the info of the default context.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

plugins/modules/k8s_cluster_info.py

ADDITIONAL INFORMATION
- k8s_cluster_info:
    context: cluster-name
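
A slightly fuller usage sketch, assuming such a context parameter were added; cluster-name is a placeholder and the exact shape of the returned data is an assumption here:

- name: Get cluster info for a specific context
  k8s_cluster_info:
    context: cluster-name
  register: cluster_info

- name: Show the server version reported for that context
  debug:
    var: cluster_info.version   # assumed return key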

Secret keys are printed on error

SUMMARY

When I run helm and get a hook error it prints a secret (which I expect to be hidden)

Since I'm running ansible-playbook as part of a jenkins job I have to hide the output (no_log: True).

My chart does not have a secret in it, it only references one (deployed manually via another ansible task)

ISSUE TYPE
  • Bug Report
COMPONENT NAME

community.kubernetes.helm

ANSIBLE VERSION
ansible 2.10.3
  config file = /home/user/code/devops/ansible/cloud-infrastructure/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/user/code/devops/ansible/.venv/lib/python3.7/site-packages/ansible
  executable location = /home/user/code/devops/ansible/.venv/bin/ansible
  python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
CONFIGURATION
INTERPRETER_PYTHON(/home/user/code/devops/ansible/cloud-infrastructure/ansible.cfg) = ../.venv/bin/python3
OS / ENVIRONMENT
STEPS TO REPRODUCE
- name: Install latest available Helm chart in current channel
  community.kubernetes.helm:
    name: cloud-engine
    chart_ref: "{{ helm_gcs_bucket }}/proj"
    chart_version: '>0.0.0-0' # Get devel versions
    release_namespace: default
    update_repo_cache: true
    atomic: true
    wait: true
EXPECTED RESULTS

I expect that no secret would be output onto the console.

It would be preferable to not get any output (unless some -v is specified perhaps?) than to have my secrets leak into logs.

ACTUAL RESULTS
TASK [applications/proj : Install latest available proj Helm chart in current channel] 
fatal: [localhost]: FAILED! => {"changed": false, "command": "/home/user/bin/helm --namespace=default --version=>0.0.0-0 upgrade -i --reset-values --wait --atomic -f=/tmp/tmpkuwixan5.yml proj nh-helm-unstable/proj", "msg": "Failure when executing Helm command. Exited 1.\nstdout: \nstderr: Error: UPGRADE FAILED: an error occurred while rolling back the release. original upgrade error: pre-upgrade hooks failed: timed out waiting for the condition: warning: Hook post-rollback proj/templates/hooks/db-migrate-job.yaml failed: Job.batch \"proj-db-migrate\" is invalid: [spec.template.spec.containers[0].env[1].valueFrom.secretKeyRef.name: Invalid value: \"map[apiVersion:v1 data:map[cloud-sql-proxy.service-account-credentials:LONG SECRET HERE  [CUT A BUNCH OF TEXT]: a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')]"], "stdout": "", "stdout_lines": []}

client side rate limiting

SUMMARY

kubernetes client-go has QPS and Burst configurations that allow rate-limiting the number of requests hitting the api-server on the client side. It would be great if the k8s module could be backed by a rate-limiting client to stop a rogue playbook from bringing down the api-server.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

probably all components that use the client

ADDITIONAL INFORMATION
    // QPS indicates the maximum QPS to the master from this client.
    // If it's zero, the created RESTClient will use DefaultQPS: 5
    QPS float32
    // Maximum burst for throttle.
    // If it's zero, the created RESTClient will use DefaultBurst: 10.
    Burst int
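
A hypothetical sketch of how this could surface on the Ansible side; the qps and burst options do not exist in the modules today, and the manifest_templates variable is illustrative:

- name: Apply many manifests without overwhelming the api-server
  k8s:
    state: present
    definition: "{{ lookup('template', item) }}"
    qps: 5        # hypothetical option, mirroring client-go DefaultQPS
    burst: 10     # hypothetical option, mirroring client-go DefaultBurst
  loop: "{{ manifest_templates }}"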

Migration to kubernetes-client/python Meta-issue

This meta-issue is part of the execution of the proposal to establish a separate collection for OpenShift (OKD) Content.

The Ansible k8s_* modules and other plugins use the openshift-restclient-python library dynamic API client. While this library effectively covers both native K8s functionality and OpenShift-specific extensions, requiring it raises questions and concerns in the broader Kubernetes community about its compatibility with all Kubernetes distributions.

This proposed action will make it easier and more effective to serve these two distinct though related user communities. Switching to the official Kubernetes client and decoupling all OpenShift code will assure compatibility for their respective objectives and enable independent decision making on features, functionality and release cadence.

This meta-issue tracks the replacement of the openshift-restclient-python library dependency with the official Python client library for Kubernetes, porting and refactoring code as needed to maintain the same functional behaviors.

This work is slated to be completed for the v2.0.0 release of this collection.

  • Hash generation for configmaps and secrets
    • Should propose to Kubernetes Python client, this may be of interest to them
  • Client-side apply implementation
    • This could partially be solved by adding support for server-side apply, which is available on newer versions of Kubernetes.
    • It will likely be difficult to upstream this into the Kubernetes Python client, because of the existence of server-side apply
  • Advanced diffing logic
    • This is currently part of apply, and we'd likely have to carry this in the collection as this is likely not a feature the Kubernetes Python client maintainers are interested in.
  • Support for the dry_run query parameter (not currently used)
  • Error handling for 503s in the discovery API (kubernetes-client/python-base#187)
  • Namespace cache based on user (kubernetes-client/python-base#188)
  • Robustly handle *List kind base resource lookups (kubernetes-client/python-base#186)

Documentation of default kubeconfig location

SUMMARY

References to the location of the default kubeconfig file are not consistent. In some places, it's ~/.kube/config, and in others it's ~/.kube/config.json.

I think ~/.kube/config is correct; as listed in the kubernetes docs, and judging by the behaviour of the module.

ISSUE TYPE
  • Documentation Report
COMPONENT NAME

From $ grep -R kube/config in the git repository master:

20210410_21h04m55s_grim

ANSIBLE VERSION

Present on 2.10 documentation page


Module for creating Job from CronJob

SUMMARY

It may be a very specific use case, but I need to create a Job from a CronJob object. Right now, I have to resort to the command module in order to do that:

- name: Test my CronJob
  command: kubectl --namespace {{ namespace }} create job --from=cronjob/my-cron test-job
  register: test_job
  failed_when:
    - test_job.rc != 0
    - "already exists" not in test_job.stdout
ISSUE TYPE
  • Feature Idea
COMPONENT NAME

k8s_job

ADDITIONAL INFORMATION

One use case would be to test CronJobs that run very seldom. Another use case may be that there's an already defined but complicated CronJob for some task, e.g. creating a backup from a database. Creating a Job from a CronJob allows performing this action on demand.

- name: Test CronJob using Job
  k8s_job:
    name: test-job
    from_cronjob: some-cronjob
    state: present

This module would also need other options from the k8s module, such as those for waiting and validation.

This is just a very loose idea; it may be better to create a more generic module or to completely restructure my proposal to be more generic.

Unexpected CronJob behavior. Fails to create CronJob

SUMMARY

Attempting to create a cronjob with kind CronJob and apiVersion batch/v1beta1 yields "Failed to find exact match for v1.CronJob by [kind, name, singularName, shortNames]"

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s

ANSIBLE VERSION
ansible 2.9.14

CONFIGURATION
bash-5.0# ansible-config dump --only-changed

bash-5.0# 



OS / ENVIRONMENT
bash-5.0# cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.12.3
PRETTY_NAME="Alpine Linux v3.12"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"

Jenkins Docker Container
STEPS TO REPRODUCE
---
- hosts: localhost
  tasks:
    - name: Create Resource
      community.kubernetes.k8s:
        state: present
        definition:
          api_Version: batch/v1beta1
          kind: CronJob
          metadata:
            name: test-cronjob
            namespace: default
          schedule: "@daily"
          jobTemplate:
            spec:
              containers: 
                - name: test1
                  image: busybox

EXPECTED RESULTS

A cronjob called "test-cronjob" to be created in the default namespace

ACTUAL RESULTS
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to find exact match for v1.CronJob by [kind, name, singularName, shortNames]"}
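
For comparison, a definition that follows the batch/v1beta1 CronJob schema would look like the sketch below (note the apiVersion spelling and the spec nesting); whether this resolves the reported error is an assumption, not something verified here:

---
- hosts: localhost
  tasks:
    - name: Create Resource
      community.kubernetes.k8s:
        state: present
        definition:
          apiVersion: batch/v1beta1
          kind: CronJob
          metadata:
            name: test-cronjob
            namespace: default
          spec:
            schedule: "@daily"
            jobTemplate:
              spec:
                template:
                  spec:
                    containers:
                      - name: test1
                        image: busybox
                    restartPolicy: OnFailure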


Inventory plugin should prefix groups with cluster name (bug?!)

SUMMARY

Hello,

I'm using the inventory plugin with two clusters (one for prod and one for dev) and I've noticed that it merges the groups from both clusters and I get the following warning:

[WARNING]: Found both group and host with same name: k2
[WARNING]: Found both group and host with same name: k1

On a second note, I believe there are cases where multiple configuration files for this plugin make sense.
One can generate the files, and in that case working with a single file is not easy.

ISSUE TYPE
COMPONENT NAME

The inventory plugin handles this.

ADDITIONAL INFORMATION

My inventory plugin config k8s.yml:

# Use a custom config file, and a specific context.
plugin: kubernetes.core.k8s
connections:
  - name: k1
    kubeconfig: secrets/k1.cluster-access.yml
    context: default
  - name: k2
    kubeconfig: secrets/k1.cluster-access.yml
    context: default

failure in k8s task when using a definition list should have better error message

SUMMARY

Use a list in the k8s definition:

- set_fact:
    my_yaml_files: ["some.yaml", "another.yaml"]

- k8s:
    state: "present"
    namespace: "default"
    definition: |
      {% for mdy in my_yaml_files %}
      ---
      {{ lookup("template", mdy) }}
      ...
      {% endfor %}

If there is a syntax error in one of the templates (or simply that a resource in the list fails to be created), the error message that k8s spits out doesn't tell you WHICH item in the list fails.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s

ANSIBLE VERSION

2.9

STEPS TO REPRODUCE

See above.

EXPECTED RESULTS

An error message that better indicates which item in the list failed.
It might also be nice to have the rest of the items processed, but that might break the k8s design. As it is, if you have ANY error in ANY item in the list, the entire list of resources apparently fail to be persisted/created.

ACTUAL RESULTS

You do get an error, but all resources abort (fail to get created) so you aren't sure which item in the list caused the failure.

helm module should support the equivalent of the command line --history-max flag

SUMMARY

HELM module should allow equivalent configuration of the --history-max field.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

community.kubernetes.helm

ADDITIONAL INFORMATION

When the helm module is invoked to trigger an upgrade, it results in additional secrets being created, up to the default maximum (10), of the form sh.helm.release.v1..v<deployment#>.

This requires a large secrets quota to be configured in the K8s cluster. If the module supported the CLI's --history-max option, then deployments would be capped at a much lower threshold.

If run manually I would do: helm upgrade -f file --history-max value release chart

Example configuration:

  community.kubernetes.helm:
    name: test
    chart_ref: stable/grafana
    release_namespace: monitoring
    history_max: 2

Unable to patch node status in order to create extended resources

SUMMARY
ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s (k8s_raw)

ANSIBLE VERSION
ansible 2.9.14
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.4 (default, Aug 12 2019, 14:45:07) [GCC 9.1.1 20190605 (Red Hat 9.1.1-2)]
CONFIGURATION
Empty output
OS / ENVIRONMENT

k8s v1.17.0

STEPS TO REPRODUCE

Trying to create extended resources with ansible.
The Ansible playbook finishes successfully, but nothing actually happens.
I believe it's related to
kubernetes/kubernetes#67455
more info on openshift/openshift-restclient-python#391
Using a proxy it works, as then the /api/v1/nodes/<NODE>/status endpoint is used instead of /api/v1/nodes/<NODE>.
https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/

Is there maybe an Ansible option that says to use a proxy?

- hosts: localhost
  tasks:
    - name: mytask
      k8s:
        api_version: v1
        kind: Node
        name: sriov-worker
        merge_type: json
        definition:
        - op: add
          path: /status/capacity/prow~1sriov
          value: 4
EXPECTED RESULTS

Extended resource should be created on the node /status/capacity.

ACTUAL RESULTS

Node status is unchanged.

[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************
ok: [localhost]

TASK [mytask] ***************************************************************************************************************************************************************************
ok: [localhost]

PLAY RECAP ******************************************************************************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

kubernetes.core not pointing to helm3

SUMMARY

The target system has both helm and helm3; when executing kubernetes.core.helm it always uses helm and not helm3.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

kubernetes.core.helm

ANSIBLE VERSION
ansible 2.10.8
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]


ansible                        3.3.0
ansible-base                   2.10.8

CONFIGURATION

OS / ENVIRONMENT

Machine running ansible (localhost): CentOS Linux release 7.9.2009 (Core)
Target system: CentOS 7.8.2003 (Core)

STEPS TO REPRODUCE

Trying to install and delete a helm chart, but with no success.

   - name: Install chart
     kubernetes.core.helm:
       name: abc-grafana
       chart_ref: sample-grafana
       state: present
       namespace: test
       values_files:
         - /root/sample-grafana/values.yaml
     tags:
       - install
EXPECTED RESULTS

Chart should be installed

ACTUAL RESULTS

fatal: [10.76.183.251]: FAILED! => {
    "changed": false,
    "command": "/usr/local/bin/helm list --output=yaml --filter sample-grafana",
    "invocation": {
        "module_args": {
            "api_key": null,
            ....
            ....
"msg": "Failure when executing Helm command. Exited 1.\nstdout: \nstderr: Error: unknown flag: --filter\n",
    "stderr": "Error: unknown flag: --filter\n",
    "stderr_lines": [
        "Error: unknown flag: --filter"
    ],
    "stdout": "",
    "stdout_lines": []

How is the k8s inventory plugin being used?

In ansible-collections/community.kubernetes#217, @fabianvf made some modifications to the k8s inventory plugin that we believe improve its overall operation, cuts down on the "noise" and just makes more sense. This begs the question -- how is this plugin being used? Does this refactoring make sense? What is the impact, if any, of the changes? Are there other improvements to be made to help users?

ADDENDUM: Is this plugin needed at all or is it supporting an anti-pattern (Kubernetes should be managing containers in its cluster) that it should be deprecated and removed?

A new module for copying files to/from a Pod

SUMMARY

Create a module for copying files to/from a running Pod analogous to the kubectl cp command.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

k8s_cp

ADDITIONAL INFORMATION

Copying files in and out of running Pods with Ansible is generally useful for various scenarios including backup/restore operations.

One option today would be to use the core copy module with the kubectl connection plugin, but running just one task with a connection plugin is difficult.

Another option is to use kubectl cp, but no module exists for it, requiring the use of command. It also means taking on the kubectl package as a dependency.

Something more native and direct is needed and should be added to this collection.

This Stackoverflow post covers a similar, albeit it more limited, scenario and python solutions using the official kubernetes client: https://stackoverflow.com/questions/59703610/copy-file-from-pod-to-host-by-using-kubernetes-python-client
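
A hypothetical usage sketch for the proposed module; the module name and every parameter shown (pod, container, remote_path, local_path, state) are illustrative rather than an existing API:

- name: Copy a dump file out of a running Pod
  k8s_cp:
    namespace: default
    pod: my-app-0
    container: app
    remote_path: /var/backups/dump.sql
    local_path: ./dump.sql
    state: from_pod

- name: Copy a config file into a running Pod
  k8s_cp:
    namespace: default
    pod: my-app-0
    local_path: ./settings.ini
    remote_path: /etc/app/settings.ini
    state: to_pod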

Document and test breaking change in k8s_cluster_info and provide porting guidance

SUMMARY

The PR ansible-collections/community.kubernetes#389 introduces a breaking change to address the problems introduced in issue ansible-collections/community.kubernetes#380. This needs to be called out in the documentation. Further, as noted here, current testing of this module is inadequate and did not pick up the original issue or the breaking change that was submitted; this needs to be addressed as part of this work.

ISSUE TYPE
  • Documentation
COMPONENT NAME

k8s_cluster_info

k8s module: --diff-mode support only half works with multi-resource manifests

SUMMARY

We have a kustomize lookup plugin that renders a kustomize directory structure into a resource_definition for the k8s module. We are observing that when the k8s module manages multiple resources, --diff does not display inline diffs.

The same process returns inline diffs when the resource_definition is only a single resource.

After spending a bunch of time with this I have a fix committed to my employer's ansible fork, but I don't know where would be best for this bug report to live upstream.

The problem appears to be that inside raw.py we are setting diffs as follows:

match, diffs = self.diff_objects(existing, result['result'])
[...]
result['diff'] = diffs

This ends up returning a result structure that looks something like this:

{
    "changed": true,
    "invocation": {},
    "result": {
        "results": [
            { "changed": false, "diff": {}, <etc> },
            { 
                "changed": true,
                "diff": {
                    "after": <resource spec post-change>,
                    "before": <existing resource spec>,
                    <etc>
                }
            },
            <other result items>
        ]
    }
}

Because our result structure is so complicated, it is causing problems in upstream task handlers; I found two, and resolving them locally enabled us to start seeing friendly diff output regardless of resource count.

  1. default strategy plugin
    https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/strategy/__init__.py#L504-L514
    The functionality here makes some potentially naive assumptions about the result dictionary:
            elif '_ansible_item_result' in task_result._result:
                if task_result.is_failed() or task_result.is_unreachable():
                    self._tqm.send_callback('v2_runner_item_on_failed', task_result)
                elif task_result.is_skipped():
                    self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
                else:
                    if 'diff' in task_result._result:
                        if self._diff or getattr(original_task, 'diff', False):
                            self._tqm.send_callback('v2_on_file_diff', task_result)
                    self._tqm.send_callback('v2_runner_item_on_ok', task_result)
                continue

None of these conditions end up evaluating as True for our returned result structure:

  • our task_result._result has a results: [] list instead of a diff:, since we are pushing the diff key into each result item under results.
  • we have many possible results which each must be examined for a diff

Without an adjustment to either our return value or the upstream logic, our multi-resource tasks never fire the v2_on_file_diff callback, so our diff is only visible while running with -vvv which dumps the entire data structure to the display.

For some reason this works fine with single resource module invocations, which leads me to believe that ansible.module_utils.common.dict_transformations.recursive_diff might be handling that case nominally but not the multi-resource case.

  2. default callback plugin
    https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/callback/default.py#L274-L288
    def v2_on_file_diff(self, result):
        if result._task.loop and 'results' in result._result:
            for res in result._result['results']:
                if 'diff' in res and res['diff'] and res.get('changed', False):
                    diff = self._get_diff(res['diff'])
                    if diff:
                        if self._last_task_banner != result._task._uuid:
                            self._print_task_banner(result._task)
                        self._display.display(diff)
        elif 'diff' in result._result and result._result['diff'] and result._result.get('changed', False):
            diff = self._get_diff(result._result['diff'])
            if diff:
                if self._last_task_banner != result._task._uuid:
                    self._print_task_banner(result._task)
                self._display.display(diff)

Similar to the strategy plugin issue, even if we end up firing this callback, there are two issues:

  1. Even though we are returning a list of results, the module invocation is not in "loop mode", so the first conditional statement evaluates to False. The second conditional looks promising but it fails for the same reason as the strategy plugin: our diff key is in each result._result['results'] item, not the top-level result as configured here.

  2. The entire callback execution is wrapped in a try: statement that does a search through all possible callbacks, so our KeyErrors here are silently suppressed.

ISSUE TYPE
  • Bug Report / Feature Question
COMPONENT NAME

k8s.py, possibly any other modules that use diff_objects() inside module_utils/common.py.

ANSIBLE VERSION

tested on at least ansible 2.9.0, but the source issue is present in master and hasn't been touched for multiple years (generally)

EXPECTED RESULTS

Multi-resource k8s invocations, when run with diff mode, will output a pretty diff instead of the dense dump that currently requires -vvv.

wait is not honored if resources does not exist yet

SUMMARY

When using the k8s_info module with wait: yes, if the first query returns no results, the task ends without honoring the wait block. If the resource exists, then it blocks as expected until the wait_condition is met.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s_info

ANSIBLE VERSION
ansible 2.10.4
  config file = /Users/hguerrer/git/agnosticd/ansible/ansible.cfg
  configured module search path = ['/Users/hguerrer/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.10.4/libexec/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.9.1 (default, Dec 29 2020, 08:52:17) [Clang 12.0.0 (clang-1200.0.32.28)]
CONFIGURATION
ANSIBLE_NOCOWS(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = True
DEFAULT_BECOME(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = False
DEFAULT_CALLBACK_WHITELIST(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = ['profile_tasks']
DEFAULT_FORKS(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = 50
DEFAULT_GATHERING(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = smart
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = ['/Users/hguerrer/git/agnosticd/ansible/dynamic-roles', '/Users/hguerrer/git/agnosticd/ansible/ansible/dynamic-roles', '/Users/hguerrer/git/agnosticd/ansible/roles-infra', '/Users/hguerrer/git/agnosticd/ansible/ansible/roles-infra', '/Users/hguerrer/git/agnosticd/ansible/roles', '/Users/hguerrer/git/agnosticd/ansible/ansible/roles', '/Users/hguerrer/git/agnosticd/ansible/ansible/roles_studentvm', '/Users/hguerrer/git/agnosticd/ansible/roles_studentvm', '/Users/hguerrer/git/agnosticd/ansible/ansible/roles_ocp_workloads', '/Users/hguerrer/git/agnosticd/ansible/roles_ocp_workloads']
DEFAULT_STDOUT_CALLBACK(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = 60
HOST_KEY_CHECKING(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = False
LOCALHOST_WARNING(/Users/hguerrer/git/agnosticd/ansible/ansible.cfg) = False
OS / ENVIRONMENT

MacOS Big Sur

STEPS TO REPRODUCE

Query k8s for a resource that does not exist, adding wait and wait_condition.

- name: Wait until POD is in running state
  k8s_info:
    api_version: v1
    kind: Deployment
    name: non-existent-resource
    namespace: default
    wait: yes
    wait_condition:
      type: Available
      status: 'True'
    wait_timeout: 120
  register: _dg_deployment
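
An interim workaround sketch, assuming plain polling is acceptable: drive the retry from until/retries instead of wait, so the task keeps re-querying even while the resource does not exist yet (the Deployment name is the same placeholder as above):

- name: Wait until the Deployment reports Available
  k8s_info:
    api_version: apps/v1
    kind: Deployment
    name: non-existent-resource
    namespace: default
  register: _dg_deployment
  until: >-
    (_dg_deployment.resources | length > 0) and
    (_dg_deployment.resources[0].status.conditions | default([])
     | selectattr('type', 'equalto', 'Available')
     | selectattr('status', 'equalto', 'True')
     | list | length > 0)
  retries: 24
  delay: 5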
EXPECTED RESULTS

The task should wait and retry until the condition is met.

ACTUAL RESULTS

urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) with k8s module

SUMMARY

Getting the following exception when using k8s module in Ansible Versions 2.9.13 and 2.10.4.

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
failed: [localhost] (item=consumer) => changed=false
  ansible_loop_var: item
  item: consumer
  module_stderr: |-
    Traceback (most recent call last):
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/connectionpool.py", line 597, in urlopen
        httplib_response = self._make_request(conn, method, url,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/connectionpool.py", line 384, in _make_request
        six.raise_from(e, None)
      File "<string>", line 2, in raise_from
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/connectionpool.py", line 380, in _make_request
        httplib_response = conn.getresponse()
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/http/client.py", line 1347, in getresponse
        response.begin()
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/http/client.py", line 307, in begin
        version, status, reason = self._read_status()
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/http/client.py", line 276, in _read_status
        raise RemoteDisconnected("Remote end closed connection without"
    http.client.RemoteDisconnected: Remote end closed connection without response

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 102, in <module>
      File "<stdin>", line 94, in _ansiballz_main
      File "<stdin>", line 40, in invoke_module
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/runpy.py", line 210, in run_module
        return _run_module_code(code, init_globals, run_name, mod_spec)
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/runpy.py", line 97, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/var/folders/zm/ld4b9y8n6ms5svr09mhly1sr0000gn/T/ansible_k8s_payload_h7k6cwaz/ansible_k8s_payload.zip/ansible/modules/clustering/k8s/k8s.py", line 281, in <module>
      File "/var/folders/zm/ld4b9y8n6ms5svr09mhly1sr0000gn/T/ansible_k8s_payload_h7k6cwaz/ansible_k8s_payload.zip/ansible/modules/clustering/k8s/k8s.py", line 277, in main
      File "/var/folders/zm/ld4b9y8n6ms5svr09mhly1sr0000gn/T/ansible_k8s_payload_h7k6cwaz/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py", line 191, in execute_module
      File "/var/folders/zm/ld4b9y8n6ms5svr09mhly1sr0000gn/T/ansible_k8s_payload_h7k6cwaz/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py", line 385, in perform_action
      File "/var/folders/zm/ld4b9y8n6ms5svr09mhly1sr0000gn/T/ansible_k8s_payload_h7k6cwaz/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py", line 410, in patch_resource
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/openshift/dynamic/client.py", line 71, in inner
        resp = func(self, resource, *args, **kwargs)
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/openshift/dynamic/client.py", line 275, in patch
        return self.request('patch', path, body=body, content_type=content_type, **kwargs)
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/openshift/dynamic/client.py", line 362, in request
        return self.client.call_api(
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 317, in call_api
        return self.__call_api(resource_path, method,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 150, in __call_api
        response_data = self.request(method, url,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 374, in request
        return self.rest_client.PATCH(url,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/kubernetes/client/rest.py", line 280, in PATCH
        return self.request("PATCH", url,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/kubernetes/client/rest.py", line 162, in request
        r = self.pool_manager.request(method, url,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/request.py", line 70, in request
        return self.request_encode_body(method, url, fields=fields,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/request.py", line 150, in request_encode_body
        return self.urlopen(method, url, **extra_kw)
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/poolmanager.py", line 324, in urlopen
        response = conn.urlopen(method, u.request_uri, **kw)
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/connectionpool.py", line 637, in urlopen
        retries = retries.increment(method, url, error=e, _pool=self,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/util/retry.py", line 368, in increment
        raise six.reraise(type(error), error, _stacktrace)
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/packages/six.py", line 685, in reraise
        raise value.with_traceback(tb)
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/connectionpool.py", line 597, in urlopen
        httplib_response = self._make_request(conn, method, url,
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/connectionpool.py", line 384, in _make_request
        six.raise_from(e, None)
      File "<string>", line 2, in raise_from
      File "/Users/sherry/.pyenv/versions/3.9.1/envs/default/lib/python3.9/site-packages/urllib3/connectionpool.py", line 380, in _make_request
        httplib_response = conn.getresponse()
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/http/client.py", line 1347, in getresponse
        response.begin()
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/http/client.py", line 307, in begin
        version, status, reason = self._read_status()
      File "/Users/sherry/.pyenv/versions/3.9.1/lib/python3.9/http/client.py", line 276, in _read_status
        raise RemoteDisconnected("Remote end closed connection without"
    urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
  module_stdout: ''
  msg: |-
    MODULE FAILURE
    See stdout/stderr for the exact error
  rc: 1

The task looks like this.

- name: "create route"
  k8s:
    api_version: v1
    kind: Route
    name: yolo
    namespace: "yolo"
    state: present
    resource_definition:
      spec:
        host: "yolo"
        path: ""
        to:
          kind: Service
          name: "yolo"
        tls:
          certificate:
            "{{ lookup('file', 'files/yolo.crt') }}"
          key:
            "{{ lookup('file', 'vaults/yolo.key') }}"
          caCertificate:
            "{{ lookup('file', 'files/yolo-ca.pem') }}"
          insecureEdgeTerminationPolicy: Redirect
          termination: edge
    host: "yolo-master"
    api_key: "yolo-api-key"

If I remove the tls part of this task it works again. If I add |- behind certificate:, key: and caCertificate: then it works as well but OpenShift does not accept the configuration.

This is what I mean when I say OpenShift does not accept it. Please note that it only occurs with the additional |-.

  • spec.tls.caCertificate: Invalid value: "redacted ca certificate data": failed to parse CA certificate: data does not contain any valid RSA or ECDSA certificates
  • spec.tls.certificate: Invalid value: "redacted certificate data": data does not contain any valid RSA or ECDSA certificates
  • spec.tls.key: Invalid value: "": no key specified

Just noticed that adding | to_nice_yaml(indent=16) after the file lookup fixes the problem as well, but it shouldn't be needed in my opinion, and OpenShift does not accept the configuration either.

Any ideas?

ISSUE TYPE
  • Bug Report
COMPONENT NAME

module k8s and its libraries

ANSIBLE VERSION
ansible 2.9.13
  config file = /Users/sherry/Code/Scripts/playbooks/ansible.cfg
  configured module search path = ['/Users/sherry/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/sherry/.pyenv/versions/default/lib/python3.9/site-packages/ansible
  executable location = /Users/sherry/.pyenv/versions/default/bin/ansible
  python version = 3.9.1 (default, Jan 14 2021, 12:01:39) [Clang 12.0.0 (clang-1200.0.32.28)]
CONFIGURATION
ANSIBLE_PIPELINING(/Users/sherry/playbooks/ansible.cfg) = True
DEFAULT_HOST_LIST(/Users/sherry/playbooks/ansible.cfg) = ['/Users/sherry/playbooks/inventories/yolo']
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/sherry/playbooks/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/sherry/playbooks/ansible.cfg) = ['/Users/sherry/playbooks/roles']
DEFAULT_STDOUT_CALLBACK(/Users/sherry/playbooks/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/Users/sherry/playbooks/ansible.cfg) = /Users/sherry/.yolo
DEPRECATION_WARNINGS(/Users/sherry/playbooks/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/sherry/playbooks/ansible.cfg) = /Users/sherry/.pyenv/versions/3.9.1/envs/default/bin/python
OS / ENVIRONMENT

macOS 11.1 20C69 x86_64 BigSur

STEPS TO REPRODUCE

Just run this one task (see above) against OpenShift 3.11.

ansible-playbook debug.yml -i inventories/yolo
EXPECTED RESULTS

Route gets created without problems.

ACTUAL RESULTS

The exception from above.

Add valid type parameter to documentation for resource_definition

SUMMARY

Inside common.py, the resource_definition argument uses a custom type, list_dict_str.

Because of this, and the ansible-test bug Unable to pass validate-modules sanity check if using custom type for arg spec, I can't get the argument in the documentation fragment k8s_resource_options.py to pass the validate-modules test. I get the errors:

Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/k8s.py plugins/modules/k8s_auth.py plugins/modules/k8s_ ...
ERROR: Found 2 validate-modules issue(s) which need to be resolved:
ERROR: plugins/modules/k8s.py:0:0: parameter-type-not-in-doc: Argument 'resource_definition' in argument_spec defines type as <function list_dict_str at 0x7f7570d09378> but documentation doesn't define type
ERROR: plugins/modules/k8s_scale.py:0:0: parameter-type-not-in-doc: Argument 'resource_definition' in argument_spec defines type as <function list_dict_str at 0x7f7570d09378> but documentation doesn't define type

I have added the k8s and k8s_scale modules to the sanity check ignore.txt file for now for parameter-type-not-in-doc, but for this issue, I'd like to hopefully remove those (and have no more ignore.txt content).

ISSUE TYPE
  • Feature Idea

Support for helm secrets?

Sorry for not using the templates provided but nothing seemed relevant.
I was just wondering if helm secrets is supported?

impersonation support using become in k8s

SUMMARY

I'm only using OpenShift and have never used k8s.
One of the nifty features is being able to impersonate users, so I can do tasks as my personal user, which has extended privileges (but not admin), and for some tasks I can use oc --as system:admin to perform them with the cluster-admin role.

This is not exposed in the k8s module; exposing it, for example as a become keyword, could be helpful.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

k8s

ADDITIONAL INFORMATION
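
A hypothetical sketch of what the requested behaviour could look like; the impersonate_user parameter shown here does not exist in the module and is purely illustrative:

- name: Create a project while impersonating the cluster admin
  k8s:
    state: present
    definition: "{{ lookup('template', 'project.yaml.j2') }}"
    impersonate_user: system:admin   # hypothetical option, mirroring oc --as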

Create ConfigMap from a directory containing configuration files

SUMMARY

Our application has all configuration files in a directory conf. When deploying to Kubernetes, we used to create a ConfigMap from the entire directory:

kubectl create configmap app-conf --from-file=conf

With the Ansible k8s module this is impossible for now. It is especially bad because we now deploy applications from a Docker container running in a Jenkins pipeline. This Docker container has Ansible inside, but it is not supposed to also have kubectl.

Would be good to make it like this:

- name: Create app config
  k8s:
    kind: ConfigMap
    name: app-conf
    src: <path to conf dir>
ISSUE TYPE
  • Feature Idea
COMPONENT NAME

Ansible k8s module

ADDITIONAL INFORMATION

Ansible 2.10.3
Python 3.7.3

Currently we are using a workaround:

  1. A shell script reads all the files in the conf directory and generates the YAML content as expected by kubectl apply -f descriptor.yaml.
  2. Then we feed this auto-generated YAML descriptor as src argument to k8s module.

There is a similar issue ansible/ansible#55329, but it does not provide a solution.
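
For reference, a kubectl-free sketch of the same workaround in plain Ansible, assuming the files under conf are text files readable from the controller; the ConfigMap name and namespace below are illustrative:

- name: Build ConfigMap data from the conf directory
  set_fact:
    app_conf_data: "{{ app_conf_data | default({}) | combine({item | basename: lookup('file', item)}) }}"
  loop: "{{ lookup('fileglob', 'conf/*', wantlist=True) }}"

- name: Create app config
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-conf
        namespace: default
      data: "{{ app_conf_data }}"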

Support for kustomize in the k8s module.

SUMMARY

I would like to use kustomize with the k8s module. This will allow someone to be able to supply a directory or URI with a kustomize.yaml file in it.

This should emulate what kubectl -k /path/to/dir does (as close as possible).

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

Ideally, I would like to have this as part of the k8s module. But wouldn't mind having a separate kustomize module (maybe k8s_kustomize?)

ADDITIONAL INFORMATION

I assume the k8s api is being hit, so comparing it to the functionality of kubectl isn't apples to apples. But it would be great to "expand" the src option to something like this:

Equivalent to: kubectl create -k /path/to/some/kustomizedir

- name: Create objects from a local directory that has a kustomize.yaml file
  k8s:
    state: present
    src: /path/to/some/kustomizedir
    kustomize: true

Equivalent to: kubectl create -k https://github.com/username/some/kustomizedir

- name: Create objects from a remote dir that has a kustomize.yaml file
  k8s:
    state: present
    src: https://github.com/username/some/kustomizedir
    kustomize: true

Multi-patch and Remove don't seem to be available in k8s module.

SUMMARY

There does not seem to be a way in k8s to do either multi-patch of a resource or deletion of something in a resource. E.g.

- name: patch registry to use emptydir, scale to 0
   shell: >-
     oc patch configs.imageregistry.operator.openshift.io cluster --type json --patch
        '[{ "op": "remove",  "path": "/spec/storage/pvc" },
          { "op": "add",        "path": "/spec/storage/emptydir", "value": "{}" },
          { "op": "replace",  "path": "/spec/replicas", "value": 0}]'
ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s module

ANSIBLE VERSION
ansible 2.9.10
  config file = None
  configured module search path = ['/Users/wkulhane/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/wkulhane/ve_agnosticd/lib/python3.8/site-packages/ansible
  executable location = /Users/wkulhane/ve_agnosticd/bin/ansible
  python version = 3.8.4 (default, Jul 16 2020, 09:49:26) [Clang 11.0.3 (clang-1103.0.32.62)]
OS / ENVIRONMENT

All

As requested by @fabianvf
