
ansible_role_microk8s's People

Contributors

adamcstephens, badri, byjg, defilan, dyasny, ericpardee, eshikhov, iambryancs, istvano, jack1902, markmywords, obsh, phunyguy, projectinitiative, toopy, turiok, vondowntown


ansible_role_microk8s's Issues

Update apt cache when installing snapd

On Debian/Ubuntu systems where the apt package index has been stale for a while, running the role fails with an error like the following:

TASK [istvano.microk8s : Make sure snapd is installed] *****************************************************
task path: /home/bryancs/.ansible/roles/istvano.microk8s/tasks/install.yml:1
fatal: [172.16.67.5]: FAILED! => {"changed": false, "msg": "No package matching 'snapd' is available"}
fatal: [172.16.67.3]: FAILED! => {"changed": false, "msg": "No package matching 'snapd' is available"}
fatal: [172.16.67.7]: FAILED! => {"changed": false, "msg": "No package matching 'snapd' is available"}

To fix this, the `update_cache` parameter of Ansible's apt module should be added and set to `yes`, as in the diff below:

diff --git a/tasks/install.yml b/tasks/install.yml
index c48409b..e1177dd 100644
--- a/tasks/install.yml
+++ b/tasks/install.yml
@@ -2,6 +2,7 @@
   apt:
     name:
       - snapd
+    update_cache: yes
     state: present
   become: yes
   when: ansible_distribution == 'Ubuntu'
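
For reference, a sketch of the full patched task. The optional `cache_valid_time` is my addition, not part of the proposed diff; it skips the refresh when the index was updated recently:

```yaml
- name: Make sure snapd is installed
  apt:
    name:
      - snapd
    update_cache: yes
    cache_valid_time: 3600  # optional: reuse an index refreshed within the last hour
    state: present
  become: yes
  when: ansible_distribution == 'Ubuntu'
```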

Microk8s Version upgrade does not work

Expected behaviour: changing `microk8s_version` changes the version of the installed microk8s.

What happens: the version stays on v1.19.

My current manual fix was running `sudo snap refresh microk8s --channel=latest/stable` on all servers.
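
The manual workaround could be modelled as a role task. A minimal sketch, assuming the role's `microk8s_version` variable holds the desired snap channel (as written this task is not idempotent and will report changed on every run):

```yaml
# Hypothetical task: refresh the microk8s snap to the requested channel
- name: Refresh microk8s to the requested channel
  become: yes
  command: "snap refresh microk8s --channel={{ microk8s_version }}"
```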

Cannot re-enable snap auto-update after disabling it

Hello,

I have successfully installed microk8s using this role with snap auto-update disabled.
With this variable set to true, I can see an entry in the hosts file:

microk8s_disable_snap_autoupdate: true
# BEGIN ANSIBLE MANAGED: microk8s Disable snap autoupdate
127.0.0.1 api.snapcraft.io
# END ANSIBLE MANAGED: microk8s Disable snap autoupdate

If I later want to re-enable snap auto-update, there is no task to remove the hosts entry. Something like this should work:

microk8s_disable_snap_autoupdate: false
- name: Disable snap autoupdate
  become: true
  blockinfile:
    state: "{{ 'present' if microk8s_disable_snap_autoupdate == true else 'absent' }}"
    dest: /etc/hosts
    marker: "# {mark} ANSIBLE MANAGED: microk8s Disable snap autoupdate"
    content: |
      127.0.0.1 api.snapcraft.io

Task `reaffirm permission on files` not working?

I'm having an issue where this portion of configure-groups.yml doesn't do what I think it's supposed to do.

- name: reaffirm permission on files
  become: yes
  file:
    path: ~/.kube
    state: directory
    owner: '{{ user }}'
    group: '{{ user }}'
    recurse: yes
  with_items: '{{ users }}'
  loop_control:
    loop_var: user
    label: '{{ user }}'
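
A possible cause: with `become: yes`, `~` expands to the become user's home (root), so the task recurses over `/root/.kube` instead of each user's directory. A sketch of a fix, assuming home directories live under `/home`:

```yaml
# Use an explicit per-user path instead of ~, which expands to root's home under become
- name: reaffirm permission on files
  become: yes
  file:
    path: "/home/{{ user }}/.kube"
    state: directory
    owner: '{{ user }}'
    group: '{{ user }}'
    recurse: yes
  with_items: '{{ users }}'
  loop_control:
    loop_var: user
    label: '{{ user }}'
```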

The playbook to execute the role:

- hosts: cluster
  gather_subset:
    - all_ipv4_addresses

- hosts: cluster
  # serial: 1
  gather_facts: yes
  roles:
    - role: istvano.microk8s
      become: true
      vars:
        users:
          - ubuntu
        microk8s_version: stable
        microk8s_enable_HA: true
        microk8s_group_HA: cluster

Verifying ownership of ~/.kube/config

$ ansible cluster -i inventories/microk8s/hosts -m shell -a "ls -lh ~/.kube/"
pi | CHANGED | rc=0 >>
total 8.0K
drwxr-x--- 4 ubuntu ubuntu 4.0K Jan 18 14:24 cache
-rw-r--r-- 1 root   root   1.9K Jan 20 13:17 config
pi-2 | CHANGED | rc=0 >>
total 4.0K
-rw-r--r-- 1 root root 105 Jan 20 13:18 config
pi-1 | CHANGED | rc=0 >>
total 4.0K
-rw-r--r-- 1 root root 105 Jan 20 13:18 config

One would assume that config would be owned by ubuntu:ubuntu

Any suggestions?

Enabling `microk8s_enable_HA: true` doesn't add nodes to the master

Attempting to create a cluster with three Raspberry Pis.

The playbook to run the role:

---
- hosts: cluster
  gather_subset:
    - all_ipv4_addresses

- hosts: cluster
  # serial: 1
  gather_facts: yes
  roles:
    - role: istvano.microk8s
      become: true
      vars:
        users:
          - ubuntu
        microk8s_version: stable
        microk8s_enable_HA: true
        microk8s_group_HA: cluster

The logs don't indicate any issues when adding nodes to the master, but checking the master shows HA is not enabled:

$ ansible cluster -i inventories/microk8s/hosts -m shell -a "microk8s status" --limit pimaster
pi | CHANGED | rc=0 >>
microk8s is running
high-availability: no
  datastore master nodes: 192.168.1.226:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging

I ran the role multiple times to no avail. Output from the last run:

TASK [istvano.microk8s : configure High Availability] ***********************************************************************************************************************************
included: /home/<redacted>/roles/istvano.microk8s/tasks/configure-HA.yml for pi, pi-1, pi-2

TASK [istvano.microk8s : Enumerate all cluster HA hosts within the hosts file] **********************************************************************************************************
ok: [pi-2]
ok: [pi-1]
ok: [pi]

TASK [istvano.microk8s : Enumerate all cluster worker hosts within the hosts file] ******************************************************************************************************
skipping: [pi]
skipping: [pi-1]
skipping: [pi-2]

TASK [istvano.microk8s : Find the designated host] **************************************************************************************************************************************
ok: [pi]
ok: [pi-1]
ok: [pi-2]

TASK [istvano.microk8s : Waiting for microk8s to be ready on microk8s host master] ******************************************************************************************************
skipping: [pi]
ok: [pi-1 -> pi(192.168.1.226)]
ok: [pi-2 -> pi(192.168.1.226)]

TASK [istvano.microk8s : Get the microk8s join command from the microk8s master] ********************************************************************************************************
skipping: [pi]
ok: [pi-1 -> pi(192.168.1.226)]
ok: [pi-2 -> pi(192.168.1.226)]

TASK [istvano.microk8s : Get microk8s cluster nodes] ************************************************************************************************************************************
skipping: [pi]
ok: [pi-1 -> pi(192.168.1.226)]
ok: [pi-2 -> pi(192.168.1.226)]

TASK [istvano.microk8s : Waiting for microk8s to be ready on microk8s host node] ********************************************************************************************************
skipping: [pi]
ok: [pi-1]
ok: [pi-2]

TASK [istvano.microk8s : Set the microk8s join command on the microk8s node] ************************************************************************************************************
skipping: [pi]
skipping: [pi-1]
skipping: [pi-2]

Am I missing anything?

Install Hangs With metallb Enabled and No Args

If the metallb plugin is enabled by simply setting it to "true", the install process hangs with no diagnostic output.

The only way I found the issue is by changing tasks/addons.yml

cmd: microk8s.enable {{ item.name }}{% if microk8s_plugins[item.name] != True %}:{{ microk8s_plugins[item.name] }}{% endif %}

To...

cmd: microk8s.enable {{ item.name }} && exit{% if microk8s_plugins[item.name] != True %}:{{ microk8s_plugins[item.name] }}{% endif %}

The root cause is in defaults/main.yml, where the metallb line looks like...

metallb: false

So I just set it to "true". It should probably look something like the dns addon line, to make plain that arguments are required.
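
For example, defaults/main.yml could document that metallb takes a mandatory address range, the way the dns addon documents its argument. The values below are purely illustrative, not taken from the role:

```yaml
microk8s_plugins:
  dns: "8.8.8.8"                           # illustrative: DNS forwarder argument
  metallb: "192.168.1.240-192.168.1.250"   # required: an address pool, not just "true"
```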

Inventory example

Hi, can you please share an example of your inventory, to help me better understand the HA section?
Good job 👍

Helm3 Repositories are Not Added

Running a playbook which successfully installs microk8s and its addons does not add the helm3 repos.

The playbook is below. The result should show two repos when `helm3 repo list` is run; instead I get "Error: no repositories to show". I haven't figured this out yet.


- hosts: prodk8s
  gather_facts: yes
  become: yes
  roles:

enabling kube-ovn does not work

I get this error:

Infer repository core for addon kube-ovn", "", "Warning: this is a potentially destructive operation. Please enable kube-ovn", "with:", "", " microk8s enable kube-ovn --force"

I believe it should be applied to each node before it joins the network. It would also be nice if we could automatically set the replica count and mark all microk8s_HA nodes as kube-ovn/role=master (see: https://microk8s.io/docs/addon-kube-ovn)
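
One way around the prompt would be to special-case this addon and pass `--force`, as the warning itself suggests. A hypothetical task sketch:

```yaml
# kube-ovn replaces the default CNI, so microk8s requires explicit confirmation
- name: Enable kube-ovn
  become: yes
  command: microk8s enable kube-ovn --force
```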

update helm repos does not work

TASK [istvano.microk8s : update helm repos] ***************************************************************************************************************************************************************
failed: [cloud01] (item=ubuntu) => {"ansible_loop_var": "user", "changed": false, "msg": "missing required arguments: release_name, release_namespace", "user": "ubuntu"}
failed: [cloud04] (item=ubuntu) => {"ansible_loop_var": "user", "changed": false, "msg": "missing required arguments: release_name, release_namespace", "user": "ubuntu"}
failed: [cloud03] (item=ubuntu) => {"ansible_loop_var": "user", "changed": false, "msg": "missing required arguments: release_name, release_namespace", "user": "ubuntu"}
failed: [cloud02] (item=ubuntu) => {"ansible_loop_var": "user", "changed": false, "msg": "missing required arguments: release_name, release_namespace", "user": "ubuntu"}

A more detailed log:

failed: [cloud01] (item=ubuntu) => {
    "ansible_loop_var": "user",
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "atomic": false,
            "binary_path": null,
            "ca_cert": null,
            "chart_ref": null,
            "chart_repo_url": null,
            "chart_version": null,
            "context": null,
            "create_namespace": false,
            "disable_hook": false,
            "force": false,
            "host": null,
            "kubeconfig": null,
            "purge": true,
            "release_name": null,
            "release_namespace": null,
            "release_state": "present",
            "release_values": {},
            "replace": false,
            "skip_crds": false,
            "update_cache": true,
            "update_repo_cache": false,
            "validate_certs": true,
            "values_files": [],
            "wait": false,
            "wait_timeout": null
        }
    },
    "msg": "missing required arguments: release_name, release_namespace",
    "user": "ubuntu"
}

As stated in the message, release_name and release_namespace are required.
The current documentation at https://docs.ansible.com/ansible/latest/collections/community/kubernetes/helm_module.html
shows that they are now mandatory.

I will prepare a fix and create a merge request.

If you have any remarks, feel free to comment ^^
I'm pretty new to Kubernetes, so any feedback on whether I'm doing something wrong is appreciated.
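
One possible shape for the fix: manage repositories with the dedicated `helm_repository` module (in newer collection versions it lives in `kubernetes.core`) instead of the `helm` module, which deploys releases and therefore demands release_name and release_namespace. The repo name and URL below are placeholders:

```yaml
# Sketch: add a helm repo per user with the module built for that purpose
- name: Add helm repo
  kubernetes.core.helm_repository:
    name: example-repo                    # placeholder
    repo_url: https://example.com/charts  # placeholder
```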

Playbook should be idempotent

When running the playbook twice, the run fails for nodes that have already joined the cluster. This should not fail the overall run, since having the node joined is exactly the desired state.

A failed_when condition should be used so that a node already being a cluster member doesn't register as an overall failure.
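
A sketch of what that could look like. The variable name and the exact error text emitted by `microk8s join` for an already-joined node are assumptions here:

```yaml
# Tolerate the "already joined" case instead of failing the whole run
- name: Join node to the microk8s cluster
  command: "{{ microk8s_join_command }}"  # hypothetical variable holding the join command
  register: join_result
  failed_when:
    - join_result.rc != 0
    - "'already known' not in join_result.stdout"  # assumed error fragment
```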

Add support for other Linux distributions

Hi,

I like your role for installing microk8s.
I'd like to use it on Manjaro, which is based on Arch Linux.

Unable to add worker nodes

I'm able to install a single node cluster successfully. However, I'm unable to add additional worker nodes.

Here's what my inventory looks like:

master ansible_ssh_host=x.x.x.x

[microk8s_group_WORKERS]
node-01 ansible_ssh_host=y.y.y.y
node-02 ansible_ssh_host=z.z.z.z
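
For comparison, a hypothetical inventory sketch: the group names have to match whatever the role's `microk8s_group_HA` and worker-group variables are set to. The group names and addresses below are illustrative:

```ini
[microk8s_HA]
master ansible_ssh_host=x.x.x.x

[microk8s_WORKERS]
node-01 ansible_ssh_host=y.y.y.y
node-02 ansible_ssh_host=z.z.z.z
```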

Feature: ability to create worker-only nodes

MicroK8s 1.23+ makes it possible to create worker-only nodes which don't run the control plane. This is beneficial for various reasons and would be a great addition to this Ansible role.
