
ansible-role-pacemaker's Introduction

Pacemaker role for Ansible

This role configures a Pacemaker cluster by dumping the configuration (CIB), adjusting the XML, and reloading it. The role is idempotent and supports check mode.

It has been redesigned to configure individual elements (cluster defaults, resources, groups, constraints, etc.) rather than the whole state of the cluster and all its services. This lets you focus on specific resources without interfering with the rest.

Requirements

This role has been written for and tested on Scientific Linux 7. It might also work on other distros; please share your experience.

Tasks

Use the Ansible tasks_from directive to specify what you want to configure.

Boolean values in properties (parsed by Pacemaker itself) don't have to be quoted. However, resource agents may expect Boolean-like arguments as integers, strings, etc. Such values must be quoted.
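For example, drawing on the playbooks below (the resource shown is illustrative):

```yaml
pcmk_cluster_options:
  stonith-enabled: false          # cluster property, parsed by Pacemaker: unquoted Boolean is fine
pcmk_resource:
  id: postgres
  class: ocf
  provider: heartbeat
  type: pgsql
  options:
    restart_on_promote: "true"    # the resource agent expects a string, so quote it
```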

tasks_from: main

Set up nodes, configure cluster properties, and resource defaults.

pcmk_cluster_name

Name of the cluster (optional).

Default: hacluster.

pcmk_password

The plaintext password for the cluster user (optional). If omitted, it is derived from the ansible_machine_id fact of the first host in the play batch. This password is only used for the initial authentication of the nodes.

Default: ansible_machine_id | to_uuid

pcmk_user

The system user to authenticate PCS nodes with (optional). PCS will authenticate all nodes with each other.

Default: hacluster

pcmk_cluster_options

Dictionary with cluster-wide options (optional).

pcmk_votequorum

Dictionary with votequorum options (optional). See votequorum(5). Boolean values accepted.

pcmk_resource_defaults

Dictionary of resource defaults (optional).
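Putting these variables together, a minimal sketch of the default entry point might look like this (the cluster name, two_node, and the option values are illustrative; see votequorum(5) for the available quorum options):

```yaml
- name: Set up cluster
  include_role:
    name: devgateway.pacemaker
  vars:
    pcmk_cluster_name: example
    pcmk_password: hunter2
    pcmk_cluster_options:
      stonith-enabled: false
    pcmk_votequorum:
      two_node: true
    pcmk_resource_defaults:
      resource-stickiness: 100
```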

tasks_from: resource

Configure a simple resource.

pcmk_resource

Dictionary describing a simple (primitive) resource. Contains the following members:

  • id: resource identifier; mandatory for simple resources;
  • class, provider, and type: resource agent descriptors; provider may be omitted, e.g. when class is service;
  • options: optional dictionary of resource-specific attributes, e.g. address and netmask for IPaddr2;
  • op: optional list of operations; each operation is a dictionary with required name and interval members, and optional arbitrary members;
  • meta: optional dictionary of meta-attributes.
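Putting those members together, a minimal sketch (the address, interval, and meta-attribute values are illustrative):

```yaml
pcmk_resource:
  id: virtual-ip
  class: ocf
  provider: heartbeat
  type: IPaddr2
  options:
    ip: 192.0.2.10
    cidr_netmask: 24
  op:
    - name: monitor
      interval: 10s
  meta:
    resource-stickiness: 100
```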

tasks_from: group

Configure a resource group.

pcmk_group

Dictionary with two members:

  • id is the group identifier
  • resources is a dictionary where keys are resource IDs, and values have the same format as pcmk_resource (except for id of the resources being optional).
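For instance, a hypothetical group bundling a virtual IP with an nginx service might be described as:

```yaml
pcmk_group:
  id: web-stack
  resources:
    web-ip:
      class: ocf
      provider: heartbeat
      type: IPaddr2
      options:
        ip: 192.0.2.20
        cidr_netmask: 24
    web-server:
      class: service
      type: nginx
```

Resources start in the listed order and run on the same node, per normal Pacemaker group semantics.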

tasks_from: constraint

Configure a constraint.

pcmk_constraint

Dictionary defining a single constraint. The following members are required:

  • type: one of: location, colocation, or order;
  • score: constraint score (signed integer, INFINITY, or -INFINITY).

Depending on the value of type, the following members are also required:

  • location requires rsc and node;
  • colocation requires rsc and with-rsc;
  • order requires first and then.

The dictionary may contain other members, e.g. symmetrical.
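For example, a colocation constraint keeping a hypothetical web-server resource on the same node as its address resource:

```yaml
pcmk_constraint:
  type: colocation
  rsc: web-server
  with-rsc: web-ip
  score: INFINITY
```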

Example playbooks

Active-active chrooted BIND DNS server

---
- name: Configure DNS cluster
  hosts: dns-servers
  tasks:

    - name: Set up cluster
      include_role:
        name: devgateway.pacemaker
      vars:
        pcmk_password: hunter2
        pcmk_cluster_name: named
        pcmk_cluster_options:
          stonith-enabled: false

    - name: Configure IP address resource
      include_role:
        name: devgateway.pacemaker
        tasks_from: resource
      vars:
        pcmk_resource:
          id: dns-ip
          class: ocf
          provider: heartbeat
          type: IPaddr2
          options:
            ip: 10.0.0.1
            cidr_netmask: 8
          op:
            - name: monitor
              interval: 5s

    - name: Configure cloned BIND resource
      include_role:
        name: devgateway.pacemaker
        tasks_from: advanced-resource
      vars:
        pcmk_resource:
          type: clone
          id: dns-clone
          resources:
            named:
              class: service
              type: named-chroot
              op:
                - name: monitor
                  interval: 5s

    - name: Set up constraints
      include_role:
        name: devgateway.pacemaker
        tasks_from: constraint
      vars:
        pcmk_constraint:
          type: order
          first: dns-ip
          then: dns-clone

Active-active Squid proxy

---
- name: Configure Squid cluster
  hosts: proxy-servers
  tasks:

    - name: Set up cluster
      include_role:
        name: devgateway.pacemaker
      vars:
        pcmk_password: hunter2
        pcmk_cluster_name: squid
        pcmk_cluster_options:
          stonith-enabled: false

    - name: Configure IP address resource
      include_role:
        name: devgateway.pacemaker
        tasks_from: resource
      vars:
        pcmk_resource:
          id: squid-ip
          class: ocf
          provider: heartbeat
          type: IPaddr2
          options:
            ip: 192.168.0.200
            cidr_netmask: 24
          op:
            - name: monitor
              interval: 5s

    - name: Configure cloned Squid resource
      include_role:
        name: devgateway.pacemaker
        tasks_from: advanced-resource
      vars:
        pcmk_resource:
          id: squid
          type: clone
          resources:
            squid-service:
              class: service
              type: squid
              op:
                - name: monitor
                  interval: 5s

    - name: Set up constraints
      include_role:
        name: devgateway.pacemaker
        tasks_from: constraint
      vars:
        pcmk_constraint:
          type: order
          first: squid-ip
          then: squid

Nginx, web application, and master-slave Postgres

The cluster runs two Postgres nodes with synchronous replication. A virtual IP address runs wherever the master is, and NAT points at it. Nginx and the web app run on the same node, but not on the other, in order to save resources. Based on the example from the Clusterlabs wiki.

---
- hosts:
    - alpha
    - bravo
  tasks:

    - name: Set up Pacemaker with Postgres master/slave
      include_role:
        name: devgateway.pacemaker
      vars:
        pcmk_pretty_xml: true
        pcmk_cluster_name: example
        pcmk_password: hunter2
        pcmk_cluster_options:
          no-quorum-policy: ignore
          stonith-enabled: false
        pcmk_resource_defaults:
          resource-stickiness: INFINITY
          migration-threshold: 1

    - name: Configure simple resources
      include_role:
        name: devgateway.pacemaker
        tasks_from: resource
      loop_control:
        loop_var: pcmk_resource
      loop:
        - id: coolapp
          class: service
          type: coolapp
        - id: nginx
          class: service
          type: nginx
        - id: virtual-ip
          class: ocf
          provider: heartbeat
          type: IPaddr2
          options:
            ip: 10.0.0.23
          meta:
            migration-threshold: 0
          op:
            - name: start
              timeout: 60s
              interval: 0s
              on-fail: restart
            - name: monitor
              timeout: 60s
              interval: 10s
              on-fail: restart
            - name: stop
              timeout: 60s
              interval: 0s
              on-fail: restart

    - name: Configure master-slave Postgres
      include_role:
        name: devgateway.pacemaker
        tasks_from: advanced-resource
      vars:
        pcmk_resource:
          id: postgres
          type: master
          meta:
            master-max: 1
            master-node-max: 1
            clone-max: 2
            clone-node-max: 1
            notify: true
          resources:
            postgres-replica-set:
              class: ocf
              provider: heartbeat
              type: pgsql
              options:
                pgctl: /usr/pgsql-9.4/bin/pg_ctl
                psql: /usr/pgsql-9.4/bin/psql
                pgdata: /var/lib/pgsql/9.4/data
                rep_mode: sync
                node_list: "{{ ansible_play_batch | join(' ') }}"
                restore_command: cp /var/lib/pgsql/9.4/archive/%f %p
                master_ip: 10.0.0.23
                restart_on_promote: "true"
                repuser: replication
              op:
                - name: start
                  timeout: 60s
                  interval: 0s
                  on-fail: restart
                - name: monitor
                  timeout: 60s
                  interval: 4s
                  on-fail: restart
                - name: monitor
                  timeout: 60s
                  interval: 3s
                  on-fail: restart
                  role: Master
                - name: promote
                  timeout: 60s
                  interval: 0s
                  on-fail: restart
                - name: demote
                  timeout: 60s
                  interval: 0s
                  on-fail: stop
                - name: stop
                  timeout: 60s
                  interval: 0s
                  on-fail: block
                - name: notify
                  timeout: 60s
                  interval: 0s

    - name: Set up constraints
      include_role:
        name: devgateway.pacemaker
        tasks_from: constraint
      loop_control:
        loop_var: pcmk_constraint
      loop:
        - type: colocation
          rsc: virtual-ip
          with-rsc: postgres
          with-rsc-role: Master
          score: INFINITY
        - type: colocation
          rsc: nginx
          with-rsc: virtual-ip
          score: INFINITY
        - type: colocation
          rsc: coolapp
          with-rsc: virtual-ip
          score: INFINITY
        - type: order
          first: postgres
          first-action: promote
          then: virtual-ip
          then-action: start
          symmetrical: false
          score: INFINITY
        - type: order
          first: postgres
          first-action: demote
          then: virtual-ip
          then-action: stop
          symmetrical: false
          score: 0

See also

Copyright

Copyright 2015-2019, Development Gateway. Licensed under GPL v3+.


ansible-role-pacemaker's Issues

pacemaker resources

Hello
This is from your example playbook

pacemaker_resources:
  - id: dns-ip
    type: "ocf:heartbeat:IPaddr2"
    options:
      ip: 10.0.0.1
      cidr_netmask: 8

Do I have to enter a virtual/floating IP in ip:?

thank you

Error when using tasks/constraint.yml in ansible 2.9.x

The following error is observed in ansible 2.9.x:

fatal: [myhost]: FAILED! => {"changed": false, "msg": "The target XML source '/tmp/ansible.xwl9wmkg.xml' does not exist."}

This output is observed even if prior include_role calls had created the temp file already. Apparently ansible cleans up tempfiles faster than it used to.

I have submitted a patch in PR #16

running cluster

Hi,
I've created my cluster manually. Nothing special.
Then I wanted to test the Ansible playbook for it. But when I run it, I get this error:

TASK: [styopa.pacemaker | Ensure the password for the cluster user] ***********
failed: [20cent] => {"failed": true}
msg: this module requires key=value arguments (['name={#', 'pacemaker_user', '#}', 'password={#', 'pacemaker_password', '|', 'password_hash(sha512,', 'ansible_hostname)', '#}'])
failed: [10cent] => {"failed": true}
msg: this module requires key=value arguments (['name={#', 'pacemaker_user', '#}', 'password={#', 'pacemaker_password', '|', 'password_hash(sha512,', 'ansible_hostname)', '#}'])

FATAL: all hosts have already failed -- aborting

And my playbook looks like this

- hosts: mycluster
  roles:
    - role: styopa.pacemaker
      vars:
        pacemaker_ansible_group: mycluster
        pacemaker_password: Support1209
        pacemaker_cluster_name: 30cent
        pacemaker_properties:
          stonith_enabled: "true"
        pacemaker_resources:
          - id: dns-ip
            type: "ocf:heartbeat:IPaddr2"
            options:
              ip: 16.53.125.50
              cidr_netmask: 8
            op:
              - action: monitor
                options:
                  interval: 5s

Here are the variables

root@mybook:/etc/ansible/roles/styopa.pacemaker/vars# cat main.yml

pacemaker_package: pcs
pacemaker_user: hacluster

pacemaker resources

Can you please explain the - id: dns-ip and dns-server in your playbook?
Are these just hostnames reflecting your setup?

TASK [styopa.pacemaker : Authenticate all nodes] *

Hi,
I got it to go past the previous error, but now I am stuck here:
TASK [styopa.pacemaker : Authenticate all nodes] *******************************
task path: /home/dimtheo/my-roles/styopa.pacemaker/tasks/main.yml:18
fatal: [debian3]: FAILED! => {"failed": true, "msg": "'cloud' is undefined"}

in vars/main i have these:

pacemaker_package: pacemaker
pacemaker_user: hacluster
pacemaker_ansible_group: cloud

I have defined "cloud". Any ideas why it complains?
thank you

Constraint task doesn't have 'pre' and 'post' tasks

Hi! If, in a playbook, I first say tasks_from: group, it works perfectly. However, if I, after that, in another task, say tasks_from: constraint, it fails, e.g.: "The target XML source '/tmp/ansible.Vme64f.xml' does not exist." The 'post' task included in group.yml removes the temporary config file, whereas constraint.yml doesn't include the pre.yml (or post.yml) files. And if I run the 'constraint' task first, I get the error that 'pcmk_config' is undefined, which it is, since it never gets defined. Suggested solution: add include_tasks: pre.yml and include_tasks: post.yml to the constraint.yml file, as it is in group.yml, resource.yml and advanced_resource.yml.

prevent creation of pacemaker_resources multiple times

It looks to me that there is nothing in the code https://github.com/styopa/ansible-pacemaker/blob/1ed86d75da60828ef1758f0b44c08cc4839f9283/tasks/main.yml#L50 to prevent it from trying to create already-defined pacemaker_resources on the cluster.

Take the following 2-node scenario:

  1. node1 defines pacemaker_resources
  2. run ansible again on node1 - pacemaker_resources will get defined again?
  3. node1 fails, and node2 becomes the only node in the cluster
  4. node1 is reinstalled from scratch - pacemaker_resources are defined again unaware of node2 being present?

pacemaker is undefined

Hello,
I get this error and the play stops:

TASK [styopa.pacemaker : Install Pacemaker Configuration System package] *******
task path: /home/dimtheo/my-roles/styopa.pacemaker/tasks/main.yml:6
fatal: [debian3]: FAILED! => {"failed": true, "msg": "'pacemaker' is undefined"}
fatal: [debian4]: FAILED! => {"failed": true, "msg": "'pacemaker' is undefined"}

can you help?

ported to Debian

Hi,
I have finally ported your role to Debian 9. Everything works except the part where you have to run corosync-keygen; I did that manually. I still need to figure out how to do it with Ansible.

For Debian 8 the role fails miserably. You have to use jessie-backports, but even then the packages don't install for some reason.

I can share the edited role with you if you are interested

The other two issues still remain open.

Debian10 pacemaker group resources

Hi,

Thank you so much for the Anisble role. It's really helpful.

I'm trying to set up a Pacemaker cluster on Debian 10, and with a few little tweaks it's working fine.

We would like to add two virtual IPs and a service for Squid. I followed tasks_from: group, but the post task "Verify cluster configuration" fails with "CIB did not pass DTD/schema validation".

We are trying to set up something like http://wiki.stocksy.co.uk/wiki/High_Availability_IP_Addresses_in_Debian

Could you please advise how to configure this?

error

I am getting the following error when running the playbook with the following variables included:
_pacemaker_private_interface: |
{% for interface in ansible_interfaces %}
{% if 'docker' in interface or 'lo' in interface %}{% continue %}{% endif %}
{% set int = 'ansible%s' | format(interface) %}
{% if _int in hostvars[inventory_hostname] and 'ipv4' in hostvars[inventory_hostname][_int] and hostvars[inventory_hostname][_int]['ipv4']['address'] is defined %}
{% if hostvars[inventory_hostname][_int]['ipv4']['address'] | ipaddr('private') %}
{{ interface|trim}}
{% break %}{% endif %}
{% endif %}
{% endfor %}

pacemaker_private_interface: "{{ _pacemaker_private_interface | trim }}"
pacemaker_corosync_ring_interface: "{{ pacemaker_private_interface }}"

Here is the error:
TASK [weldpua2008.pacemaker : Creates corosync config] **************************************************
fatal: [intblade03]: FAILED! => {"changed": false, "msg": "AnsibleError: {{ pacemaker_private_interface }}: {{ _pacemaker_private_interface | trim }}: {% for interface in ansible_interfaces %}\n{% if 'docker' in interface or 'lo' in interface %}{% continue %}{% endif %}\n{% set int = 'ansible%s' | format(interface) %}\n{% if _int in hostvars[inventory_hostname] and 'ipv4' in hostvars[inventory_hostname][_int] and hostvars[inventory_hostname][_int]['ipv4']['address'] is defined %}\n {% if hostvars[inventory_hostname][_int]['ipv4']['address'] | ipaddr('private') %}\n {{ interface|trim}}\n {% break %}{% endif %}\n{% endif %}\n{% endfor %}\n: template error while templating string: Encountered unknown tag 'continue'. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'.. String: {% for interface in ansible_interfaces %}\n{% if 'docker' in interface or 'lo' in interface %}{% continue %}{% endif %}\n{% set int = 'ansible%s' | format(interface) %}\n{% if _int in hostvars[inventory_hostname] and 'ipv4' in hostvars[inventory_hostname][_int] and hostvars[inventory_hostname][_int]['ipv4']['address'] is defined %}\n {% if hostvars[inventory_hostname][_int]['ipv4']['address'] | ipaddr('private') %}\n {{ interface|trim}}\n {% break %}{% endif %}\n{% endif %}\n{% endfor %}\n"}
fatal: [intblade04]: FAILED! => {"changed": false, "msg": "AnsibleError: {{ pacemaker_private_interface }}: {{ _pacemaker_private_interface | trim }}: {% for interface in ansible_interfaces %}\n{% if 'docker' in interface or 'lo' in interface %}{% continue %}{% endif %}\n{% set int = 'ansible%s' | format(interface) %}\n{% if _int in hostvars[inventory_hostname] and 'ipv4' in hostvars[inventory_hostname][_int] and hostvars[inventory_hostname][_int]['ipv4']['address'] is defined %}\n {% if hostvars[inventory_hostname][_int]['ipv4']['address'] | ipaddr('private') %}\n {{ interface|trim}}\n {% break %}{% endif %}\n{% endif %}\n{% endfor %}\n: template error while templating string: Encountered unknown tag 'continue'. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'.. String: {% for interface in ansible_interfaces %}\n{% if 'docker' in interface or 'lo' in interface %}{% continue %}{% endif %}\n{% set int = 'ansible%s' | format(interface) %}\n{% if _int in hostvars[inventory_hostname] and 'ipv4' in hostvars[inventory_hostname][_int] and hostvars[inventory_hostname][_int]['ipv4']['address'] is defined %}\n {% if hostvars[inventory_hostname][_int]['ipv4']['address'] | ipaddr('private') %}\n {{ interface|trim}}\n {% break %}{% endif %}\n{% endif %}\n{% endfor %}\n"}

- name: Authenticate all nodes

hi
i get stuck here

- name: Authenticate all nodes
  command: >
    pcs cluster auth
    {% for host in groups[pacemaker_ansible_group] %}
    {{ hostvars[host]['ansible_hostname'] }}
    {% endfor %}
    -u {{ pacemaker_user }} -p {{ pacemaker_password }}
  run_once: true
  args:
    creates: /var/lib/pcsd/tokens

It craps out with a message saying it cannot create the above directory.
Can you give some help, please?

fatal: [debian3]: FAILED! => {"changed": false, "cmd": "pcs cluster auth debian3 debian4 -u hacluster -p secret", "failed": true, "invocation": {"module_args": {"_raw_params": "pcs cluster auth debian3 debian4 -u hacluster -p secret", "_uses_shell": false, "chdir": null, "creates": "/var/lib/pcsd/tokens", "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "msg": "[Errno 2] No such file or directory", "rc": 2}
