
ovirt-ansible-hosted-engine-setup

This role has been migrated to the oVirt Ansible Collection; please use the latest version from there. This repository is now read-only and no longer used for active development.

Ansible role for deploying oVirt Hosted-Engine

Requirements

  • Ansible version 2.9.11 or higher
  • Python SDK version 4.2 or higher
  • Python netaddr library on the Ansible controller node (see the sketch below)
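
For reference, here is a minimal sketch of a controller-side preparation playbook covering the last two requirements. It is only an illustration: the pip package names (ovirt-engine-sdk-python for the Python SDK, and netaddr) are assumptions, and you can equally install them through your system package manager.

---
- name: Prepare the Ansible controller for the hosted-engine role
  hosts: localhost
  connection: local
  tasks:
    # The oVirt Python SDK is used by the ovirt_* modules the role calls,
    # netaddr by the ipaddr/ipv4 Jinja2 filters it relies on.
    - name: Install the oVirt Python SDK and netaddr
      pip:
        name:
          - ovirt-engine-sdk-python
          - netaddr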

Dependencies

None.

Prerequisites

  • A fully qualified domain name prepared for your Engine and the host. Forward and reverse lookup records must both be set in the DNS.

  • /var/tmp has at least 5 GB of free space.

  • Unless you are deploying on Gluster, storage prepared for your Hosted-Engine environment (choose one: NFS, iSCSI, or Fibre Channel).

  • Install the additional oVirt Ansible role:

    $ ansible-galaxy install ovirt.engine-setup # case-sensitive

Role variables

General Variables

| Name | Default value | Description |
| --- | --- | --- |
| he_bridge_if | eth0 | The network interface the oVirt management bridge will be configured on |
| he_fqdn | null | The engine FQDN as configured in the DNS |
| he_mem_size_MB | max | The amount of memory used on the engine VM |
| he_reserved_memory_MB | 512 | The amount of memory reserved for the host |
| he_vcpus | max | The number of CPUs used on the engine VM |
| he_disk_size_GB | 61 | Disk size of the engine VM |
| he_vm_mac_addr | null | MAC address of the engine VM network interface |
| he_domain_type | null | Storage domain type; available options: nfs, iscsi, glusterfs, fc |
| he_storage_domain_addr | null | Storage domain IP/DNS address |
| he_ansible_host_name | localhost | Hostname in use on the first HE host (not necessarily the Ansible controller one) |
| he_restore_from_file | null | A backup file created with engine-backup to be restored on the fly |
| he_pki_renew_on_restore | false | Renew engine PKI on restore if needed |
| he_cluster | Default | Name of the cluster with hosted-engine hosts |
| he_cluster_cpu_type | null | Cluster CPU type to be used in the hosted-engine cluster (the same as the HE host or lower) |
| he_cluster_comp_version | null | Compatibility version of the hosted-engine cluster; defaults to the latest compatibility version |
| he_data_center | Default | Name of the data center with hosted-engine hosts |
| he_data_center_comp_version | null | Compatibility version of the hosted-engine data center; defaults to the latest compatibility version |
| he_host_name | $(hostname -f) | Name used by the engine for the first host |
| he_host_address | $(hostname -f) | Address used by the engine for the first host |
| he_bridge_if | null | Interface used for the management bridge |
| he_apply_openscap_profile | false | Apply a default OpenSCAP security profile on the HE VM |
| he_network_test | dns | Network connectivity check performed by ovirt-hosted-engine-ha and ovirt-hosted-engine-setup; available options: dns, ping, tcp or none |
| he_tcp_t_address | null | Hostname to connect to if he_network_test is tcp |
| he_tcp_t_port | null | Port to connect to if he_network_test is tcp |
| he_pause_host | false | Pause the execution to let the user interactively fix host configuration |
| he_offline_deployment | false | If true, updates for all packages will be disabled |
| he_additional_package_list | [] | List of additional packages to be installed on the engine VM apart from the ovirt-engine package |
| he_debug_mode | false | If true, HE deployment will execute additional tasks for debugging |
| he_db_password | UNDEF | Engine database password |
| he_dwh_db_password | UNDEF | DWH database password |

NFS / Gluster Variables

| Name | Default value | Description |
| --- | --- | --- |
| he_mount_options | '' | NFS mount options |
| he_storage_domain_path | null | Shared folder path on the NFS server |
| he_nfs_version | auto | NFS version; available options: auto, v4, v3, v4_0, v4_1, v4_2 |
| he_storage_if | null | The network interface name connected to the storage network, assumed to be pre-configured |

iSCSI Variables

| Name | Default value | Description |
| --- | --- | --- |
| he_iscsi_username | null | iSCSI username |
| he_iscsi_password | null | iSCSI password |
| he_iscsi_target | null | iSCSI target |
| he_lun_id | null | LUN ID |
| he_iscsi_portal_port | null | iSCSI portal port |
| he_iscsi_portal_addr | null | iSCSI portal address (just for interactive iSCSI discovery; use he_storage_domain_addr for the deployment) |
| he_iscsi_tpgt | null | iSCSI TPGT |
| he_discard | false | Discard the whole disk space when removed |

Static IP configuration Variables

DHCP configuration is used on the engine VM by default. However, if you would like to use a static IP instead, define the following variables:

| Name | Default value | Description |
| --- | --- | --- |
| he_vm_ip_addr | null | Engine VM IP address |
| he_vm_ip_prefix | null | Engine VM IP prefix |
| he_dns_addr | null | Engine VM DNS server |
| he_default_gateway | null | Engine VM default gateway |
| he_vm_etc_hosts | false | Add the engine VM IP and FQDN to /etc/hosts on the host |

Example Playbook

This is a simple example for deploying Hosted-Engine with an NFS storage domain.

This role can be used to deploy on localhost (the Ansible controller itself) or on a remote host (please set he_ansible_host_name correctly). All the playbooks can be found inside the examples/ folder.

hosted_engine_deploy_localhost.yml

---
- name: Deploy oVirt hosted engine
  hosts: localhost
  connection: local
  roles:
    - role: ovirt.hosted_engine_setup

hosted_engine_deploy_remotehost.yml

---
- name: Deploy oVirt hosted engine
  hosts: host123.localdomain
  roles:
    - role: ovirt.hosted_engine_setup

passwords.yml

---
# As an example this file is kept in plaintext. If you want to
# encrypt this file, please execute the following command:
#
# $ ansible-vault encrypt passwords.yml
#
# It will ask you for a password, which you must then pass to
# ansible interactively when executing the playbook.
#
# $ ansible-playbook myplaybook.yml --ask-vault-pass
#
he_appliance_password: 123456
he_admin_password: 123456

Example 1: extra vars for NFS deployment with DHCP - he_deployment.json

{
    "he_bridge_if": "eth0",
    "he_fqdn": "he-engine.example.com",
    "he_vm_mac_addr": "00:a5:3f:66:ba:12",
    "he_domain_type": "nfs",
    "he_storage_domain_addr": "192.168.100.50",
    "he_storage_domain_path": "/var/nfs_folder"
}

Example 2: extra vars for iSCSI deployment with static IP, remote host - he_deployment_remote.json

{
    "he_bridge_if": "eth0",
    "he_fqdn": "he-engine.example.com",
    "he_vm_ip_addr": "192.168.1.214",
    "he_vm_ip_prefix": "24",
    "he_gateway": "192.168.1.1",
    "he_dns_addr": "192.168.1.1",
    "he_vm_etc_hosts": true,
    "he_vm_mac_addr": "00:a5:3f:66:ba:12",
    "he_domain_type": "iscsi",
    "he_storage_domain_addr": "192.168.1.125",
    "he_iscsi_portal_port": "3260",
    "he_iscsi_tpgt": "1",
    "he_iscsi_target": "iqn.2017-10.com.redhat.stirabos:he",
    "he_lun_id": "36589cfc000000e8a909165bdfb47b3d9",
    "he_mem_size_MB": "4096",
    "he_ansible_host_name": "host123.localdomain"
}

Test iSCSI connectivity and get LUN WWID before deploying

[root@c75he20180820h1 ~]# iscsiadm -m node --targetname iqn.2017-10.com.redhat.stirabos:he -p 192.168.1.125:3260 -l
[root@c75he20180820h1 ~]# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 6.2.0.874-7
Target: iqn.2017-10.com.redhat.stirabos:data (non-flash)
	Current Portal: 192.168.1.125:3260,1
	Persistent Portal: 192.168.1.125:3260,1
		**********
		Interface:
		**********
		Iface Name: default
		Iface Transport: tcp
		Iface Initiatorname: iqn.1994-05.com.redhat:6a4517b3773a
		Iface IPaddress: 192.168.1.14
		Iface HWaddress: <empty>
		Iface Netdev: <empty>
		SID: 1
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE
		*********
		Timeouts:
		*********
		Recovery Timeout: 5
		Target Reset Timeout: 30
		LUN Reset Timeout: 30
		Abort Timeout: 15
		*****
		CHAP:
		*****
		username: <empty>
		password: ********
		username_in: <empty>
		password_in: ********
		************************
		Negotiated iSCSI params:
		************************
		HeaderDigest: None
		DataDigest: None
		MaxRecvDataSegmentLength: 262144
		MaxXmitDataSegmentLength: 131072
		FirstBurstLength: 131072
		MaxBurstLength: 16776192
		ImmediateData: Yes
		InitialR2T: Yes
		MaxOutstandingR2T: 1
		************************
		Attached SCSI devices:
		************************
		Host Number: 3	State: running
		scsi3 Channel 00 Id 0 Lun: 2
			Attached scsi disk sdb		State: running
		scsi3 Channel 00 Id 0 Lun: 3
			Attached scsi disk sdc		State: running
Target: iqn.2017-10.com.redhat.stirabos:he (non-flash)
	Current Portal: 192.168.1.125:3260,1
	Persistent Portal: 192.168.1.125:3260,1
		**********
		Interface:
		**********
		Iface Name: default
		Iface Transport: tcp
		Iface Initiatorname: iqn.1994-05.com.redhat:6a4517b3773a
		Iface IPaddress: 192.168.1.14
		Iface HWaddress: <empty>
		Iface Netdev: <empty>
		SID: 4
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE
		*********
		Timeouts:
		*********
		Recovery Timeout: 5
		Target Reset Timeout: 30
		LUN Reset Timeout: 30
		Abort Timeout: 15
		*****
		CHAP:
		*****
		username: <empty>
		password: ********
		username_in: <empty>
		password_in: ********
		************************
		Negotiated iSCSI params:
		************************
		HeaderDigest: None
		DataDigest: None
		MaxRecvDataSegmentLength: 262144
		MaxXmitDataSegmentLength: 131072
		FirstBurstLength: 131072
		MaxBurstLength: 16776192
		ImmediateData: Yes
		InitialR2T: Yes
		MaxOutstandingR2T: 1
		************************
		Attached SCSI devices:
		************************
		Host Number: 6	State: running
		scsi6 Channel 00 Id 0 Lun: 0
			Attached scsi disk sdd		State: running
		scsi6 Channel 00 Id 0 Lun: 1
			Attached scsi disk sde		State: running
[root@c75he20180820h1 ~]# lsblk /dev/sdd
NAME                                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdd                                   8:48   0  100G  0 disk
└─36589cfc000000e8a909165bdfb47b3d9 253:10   0  100G  0 mpath
[root@c75he20180820h1 ~]# lsblk /dev/sde
NAME                                MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sde                                   8:64   0  10G  0 disk
└─36589cfc000000ab67ee1427370d68436 253:0    0  10G  0 mpath
[root@c75he20180820h1 ~]# /lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/sdd
36589cfc000000e8a909165bdfb47b3d9
[root@c75he20180820h1 ~]# iscsiadm -m node --targetname iqn.2017-10.com.redhat.stirabos:he -p 192.168.1.125:3260 -u
Logging out of session [sid: 4, target: iqn.2017-10.com.redhat.stirabos:he, portal: 192.168.1.125,3260]
Logout of [sid: 4, target: iqn.2017-10.com.redhat.stirabos:he, portal: 192.168.1.125,3260] successful.

Usage

  1. Check that all the prerequisites and requirements are met.
  2. Encrypt passwords.yml:

$ ansible-vault encrypt passwords.yml

  3. Execute the playbook.

Local deployment:

$ ansible-playbook hosted_engine_deploy.yml --extra-vars='@he_deployment.json' --extra-vars='@passwords.yml' --ask-vault-pass

Deployment over a remote host:

$ ansible-playbook -i host123.localdomain, hosted_engine_deploy.yml --extra-vars='@he_deployment.json' --extra-vars='@passwords.yml' --ask-vault-pass

Deploy over a remote host from Ansible AWX/Tower

The flow creates a temporary VM with a running engine to use for configuring and bootstrapping the whole environment. The bootstrap engine VM runs on a libvirt NATed network, so at that stage it is not reachable from outside the host it is running on.

When the role dynamically adds the freshly created engine VM to the inventory, it also configures the host to be used as an SSH proxy; this works perfectly when running the playbook directly with ansible-playbook. Ansible AWX/Tower, on the other hand, by default relies on PRoot to isolate jobs, so the credentials supplied by AWX/Tower will not flow to the jump host configured with ProxyCommand.

This can be avoided by disabling job isolation in AWX/Tower.

Please note that job isolation can only be configured system-wide, not just for the HE deploy job, so disabling it is not a recommended practice in production environments.

Deployment time improvements

To significantly reduce the amount of time it takes to deploy a hosted engine over a remote host, add the following lines to /etc/ansible/ansible.cfg under the [ssh_connection] section:

ssh_args = -C -o ControlMaster=auto -o ControlPersist=30m
control_path_dir = /root/cp
control_path = %(directory)s/%%h-%%r
pipelining = True

Make changes in the engine VM during the deployment

In some cases, a user may want to make adjustments to the engine VM during the deployment process. There are two ways to do that:

Automatic:

Write ansible playbooks that will run on the engine VM before or after the engine VM installation.

Add the playbooks that will run before the engine setup to hooks/enginevm_before_engine_setup and the playbooks that will run after the engine setup to hooks/enginevm_after_engine_setup.

These playbooks will be consumed automatically by the role when you execute it.
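
For illustration, a minimal sketch of such a hook playbook, assuming it is placed under hooks/enginevm_before_engine_setup/ (the playbook content and the chrony package are only examples, not part of the role):

---
- name: Example customization of the engine VM before engine-setup
  hosts: all
  tasks:
    # Any task placed here runs on the engine VM before engine-setup is executed.
    - name: Ensure chrony is installed on the engine VM
      package:
        name: chrony
        state: present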

Manual:

To make manual adjustments you can set the variable he_pause_host to true. This will pause the deployment after the engine has been set up and create a lock file in /tmp that ends with _he_setup_lock on the machine the role was executed on. The deployment will continue after the lock file is deleted, or after 24 hours if it has not been removed.

In order to proceed with the deployment, before deleting the lock file, make sure that the host is in the 'up' state at the engine's URL.

Both the lock file path and the engine's URL will be presented during the role execution.

Demo

Here is a demo showing a deployment on NFS, configuring the engine VM with a static IP (asciicast).

License

Apache License 2.0


ovirt-ansible-hosted-engine-setup's Issues

[RFE] Enable update of appliance from channels after VM is deployed

It is common practice to consume an old appliance and update packages from the official repositories. However, this way we might be unable to deploy the engine itself, as this is only possible after this role ends.

My suggestion is to have a flag/variable controlling whether yum update will run on the appliance VM right after it has been created and before engine-setup is run on it.

delegate_to won't resolve on remote host

Currently the playbook hangs here when resolving the newly provisioned VM via the remote host:

TASK [ovirt.hosted_engine_setup : Wait for the local VM]

My understanding is that Ansible should use the details as defined in the dynamically provisioned host entry for the VM, running the command on the remote host so as to benefit from the /etc/hosts routing, but currently it won't ping, despite a manual connection from the hypervisor to the VM working as expected.

Get rid of stages variables for tags

The role has to be executed against the host, then the engine VM, then the host again, and so on.

This is now triggered with stage variables: https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/defaults/main.yml#L2

But this really pollutes the execution output with a lot of:
[ INFO ] TASK [oVirt.hosted-engine-setup : 01 Bootstrap local VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : 02 Bootstrap local VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : 03 Bootstrap local VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : 04 Bootstrap local VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : 05 Bootstrap local VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Create Storage Domain]
[ INFO ] skipping: [localhost]

We can probably have something more user-friendly using Ansible tags instead of custom variables.

Deployment errata if he_data_center defined but he_cluster not

How-to:

  • define he_data_center to something
  • leave he_cluster as is
  • bonus: use vlan for he_bridge_if 🙃

Expected:

  • host in Default cluster
  • Default cluster in he_data_center
  • network is ok

Actual

  • host in Default cluster 👍
  • Default cluster in Default data-center
  • bonus used -> fail after Add host

Reason
Upon engine-deploy, the Default cluster and data center are created. Then, in 05_add_host.yml:

  • ovirt_datacenter creates the custom he_data_center
  • ovirt_cluster is fine 🐶

The latter is a bug in the module, since the cluster is still in the Default data center.
As a bonus, the VLAN is propagated to he_data_center only, so the host fails upon network configuration for the VLAN-less Default data center.

Cannot use via include_role

Got an error when using the role via include_role:

$ ansible-playbook -i inventory/ --limit pointblank playbooks/ovirt-hosted-engine.yml

PLAY [Deploy oVirt hosted engine] *****************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************
ok: [pointblank]

TASK [include_role : ovirt.hosted-engine-setup] ***************************************************************************************************

TASK [ovirt.hosted-engine-setup : Install oVirt Hosted Engine packages] ***************************************************************************
ok: [pointblank]

TASK [ovirt.hosted-engine-setup : System configuration validations] *******************************************************************************
included: /home/andrew/automate/roles/ovirt.hosted-engine-setup/tasks/pre_checks/001_validate_network_interfaces.yml for pointblank => (item=/home/andrew/automate/roles/ovirt.hosted-engine-setup/tasks/pre_checks/001_validate_network_interfaces.yml)
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'rfind'
to see the full traceback, use -vvv

Reproduce playbook:

---
- name: Deploy oVirt hosted engine
  hosts: all
  tasks:
    - include_role:
        name: ovirt.hosted-engine-setup

Ansible version:

$ ansible-playbook --version
ansible-playbook 2.7.10
  config file = /home/andrew/automate/ansible.cfg
  configured module search path = [u'/home/andrew/automate/library']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Sep 12 2018, 05:31:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

Error with initial_clean and sudo

In tasks/initial_clean.yml there is, at the end, a task:

  - name: Remove eventually entries for the local VM from known_hosts file
    known_hosts:
      name: "{{ he_fqdn }}"
      state: absent
    delegate_to: localhost

This causes an error of:

fatal: [foo.example.com -> localhost]: FAILED! => changed=false 
  module_stderr: |-
    sudo: a password is required
  module_stdout: ''
  msg: |-
    MODULE FAILURE
    See stdout/stderr for the exact error
  rc: 1

because it is trying to sudo on the local machine running Ansible, since we have, in our playbook:

  become: True
  become_user: root

I think that the task could do with a become: no.

If you agree with this issue and would like, I can put a PR together?
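
For illustration, a minimal sketch of the suggested fix (the same task with privilege escalation disabled; whether upstream adopted exactly this form is an assumption):

  - name: Remove eventually entries for the local VM from known_hosts file
    known_hosts:
      name: "{{ he_fqdn }}"
      state: absent
    delegate_to: localhost
    # Do not sudo on the controller; the module does not need root there.
    become: false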

Support of hosted engine deploy from remote host

Currently, we have the option to deploy only from localhost.
I would like to ask if you could add support for deploying the hosted engine from a remote host.

In our case, we want to deploy the hosted engine environment using Ansible from a remote machine.
The workaround that we found for now is to run ansible-playbook as a shell command from our own Ansible playbook, like:

- name: "Run HE ansible deploy locally on the first host"
  shell: "ansible-playbook /tmp/hosted_engine_deploy.yml --extra-vars='@/tmp/he_deployment.json'"

but IMHO

  1. It does not look good.
  2. The task "stuck" for ~20 minutes without any output.
  3. In the end, when the task is complete the output is hard to read.

Add role variable for the host name instead of default he_host_name

Hi,
It would be great if we could set the name of the hosted-engine host as part of the role variables instead of taking he_host_name by default:
Actual Result:
{{ he_host_name }}

Expected to add it to the general variable in the JSON file:
"he_host_name": "host_number_1"

Thanks,
Kobi

installer fails with OVN configured host

My host is configured with OVN and I wanted to re-deploy the hosted engine. Currently this is not possible:

ovs-vsctl show

f11dac71-55ba-4878-8a69-1296fade4f7d
    Bridge vdsmbr_htdZMRuz
        Port vdsmbr_htdZMRuz
            Interface vdsmbr_htdZMRuz
                type: internal
        Port ovirtmgmt
            tag: 3
            Interface ovirtmgmt
                type: internal
        Port "bond0"
            Interface "bond0"
        Port VMs
            tag: 5
            Interface VMs
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "ovn-6e8d76-0"
            Interface "ovn-6e8d76-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.XX.XX"}
        Port br-int
            Interface br-int
                type: internal
        Port "ovn-f090b1-0"
            Interface "ovn-f090b1-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.XX.XX"}
    ovs_version: "2.11.0"

[ INFO ] TASK [ovirt.hosted_engine_setup : Detecting interface on existing management bridge]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'interfaces'\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/pre_checks/001_validate_network_interfaces.yml': line 4, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n block:\n - name: Detecting interface on existing management bridge\n ^ here\n"}
[ ERROR ] Failed to execute stage 'Environment customization': Failed executing ansible-playbook

Please add OVN support.

What's the correct way to deploy on multiple hosts

I get a he_host name error when I try to add multiple hosts. (Presently I use a separate playbook with the ovirt_hosts module after this role runs, and that works, but I'm wondering if there is a correct way to do this.)

I haven't been able to make this deploy a Gluster cluster during install (he_domain_type=gluster). Is that supposed to work presently?

Thanks

Re-run fails with "RTNETLINK answers: File exists" when setting routing rules

I'm trying to set up an oVirt 4.4-alpha hosted-engine with ovirt-ansible-hosted-engine-setup-1.0.35-1.el8.noarch, which is part of the official ovirt-4.4-pre repository.

If I re-run engine-setup after a failed installation attempt the Ansible role aborts with the following error:

2020-02-18 16:17:29,895+0100 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.hosted_engine_setup : Get routing rules, IPv4'}
2020-02-18 16:17:29,896+0100 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Get routing rules, IPv4 kwargs is_conditional:False 
2020-02-18 16:17:29,896+0100 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Get routing rules, IPv4 kwargs 
2020-02-18 16:17:30,441+0100 DEBUG var changed: host "localhost" var "route_rules_ipv4" type "<class 'dict'>" value: "{
    "changed": true,
    "cmd": [
        "ip",
        "rule"
    ],
    "delta": "0:00:00.003368",
    "end": "2020-02-18 16:17:30.106158",
    "failed": false,
    "rc": 0,
    "start": "2020-02-18 16:17:30.102790",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "0:\tfrom all lookup local \n100:\tfrom all to 192.168.222.1/24 lookup main \n100:\tfrom all to 192.168.1.1/24 lookup main \n101:\tfrom 192.168.222.1/24 lookup main \n101:\tfrom 192.168.1.1/24 lookup main \n32766:\tfrom all lookup main \n32767:\tfrom all lookup default ",
    "stdout_lines": [
        "0:\tfrom all lookup local ",
        "100:\tfrom all to 192.168.222.1/24 lookup main ",
        "100:\tfrom all to 192.168.1.1/24 lookup main ",
        "101:\tfrom 192.168.222.1/24 lookup main ",
        "101:\tfrom 192.168.1.1/24 lookup main ",
        "32766:\tfrom all lookup main ",
        "32767:\tfrom all lookup default "
    ]
}"
2020-02-18 16:17:30,441+0100 INFO ansible ok {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_host': 'localhost', 'ansible_task': 'Get routing rules, IPv4', 'task_duration': 0}
2020-02-18 16:17:30,442+0100 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fb2cdbf1c18> kwargs 
2020-02-18 16:17:30,779+0100 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.hosted_engine_setup : debug'}
[...]
2020-02-18 16:17:35,914+0100 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.hosted_engine_setup : Fetch IPv4 CIDR for virbr0'}
2020-02-18 16:17:35,914+0100 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Fetch IPv4 CIDR for virbr0 kwargs is_conditional:False 
2020-02-18 16:17:35,915+0100 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Fetch IPv4 CIDR for {{ virbr_default }} kwargs 
2020-02-18 16:17:36,305+0100 DEBUG var changed: host "localhost" var "virbr_cidr_ipv4" type "<class 'ansible.utils.unsafe_proxy.AnsibleUnsafeText'>" value: ""192.168.222.1/24""
2020-02-18 16:17:36,305+0100 INFO ansible ok {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_host': 'localhost', 'ansible_task': 'Fetch IPv4 CIDR for virbr0', 'task_duration': 0}
[...]
2020-02-18 16:17:38,664+0100 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.hosted_engine_setup : Add IPv4 outbound route rules'}
2020-02-18 16:17:38,664+0100 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Add IPv4 outbound route rules kwargs is_conditional:False 
2020-02-18 16:17:38,665+0100 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Add IPv4 outbound route rules kwargs 
2020-02-18 16:17:39,214+0100 DEBUG var changed: host "localhost" var "result" type "<class 'dict'>" value: "{
    "changed": true,
    "cmd": [
        "ip",
        "rule",
        "add",
        "from",
        "192.168.222.1/24",
        "priority",
        "101",
        "table",
        "main"
    ],
    "delta": "0:00:00.002805",
    "end": "2020-02-18 16:17:38.875350",
    "failed": true,
    "msg": "non-zero return code",
    "rc": 2,
    "start": "2020-02-18 16:17:38.872545",
    "stderr": "RTNETLINK answers: File exists",
    "stderr_lines": [
        "RTNETLINK answers: File exists"
    ],
    "stdout": "",
    "stdout_lines": []
}"
2020-02-18 16:17:39,214+0100 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<class 'list'>" value: "[]"
2020-02-18 16:17:39,214+0100 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<class 'list'>" value: "[]"
2020-02-18 16:17:39,214+0100 DEBUG var changed: host "localhost" var "play_hosts" type "<class 'list'>" value: "[]"
2020-02-18 16:17:39,215+0100 ERROR ansible failed {
    "ansible_host": "localhost",
    "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
    "ansible_result": {
        "_ansible_no_log": false,
        "changed": true,
        "cmd": [
            "ip",
            "rule",
            "add",
            "from",
            "192.168.222.1/24",
            "priority",
            "101",
            "table",
            "main"
        ],
        "delta": "0:00:00.002805",
        "end": "2020-02-18 16:17:38.875350",
        "invocation": {
            "module_args": {
                "_raw_params": "ip rule add from 192.168.222.1/24 priority 101 table main",
                "_uses_shell": false,
                "argv": null,
                "chdir": null,
                "creates": null,
                "executable": null,
                "removes": null,
                "stdin": null,
                "stdin_add_newline": true,
                "strip_empty_ends": true,
                "warn": true
            }
        },
        "msg": "non-zero return code",
        "rc": 2,
        "start": "2020-02-18 16:17:38.872545",
        "stderr": "RTNETLINK answers: File exists",
        "stderr_lines": [
            "RTNETLINK answers: File exists"
        ],
        "stdout": "",
        "stdout_lines": []
    },
    "ansible_task": "Add IPv4 outbound route rules",
    "ansible_type": "task",
    "status": "FAILED",
    "task_duration": 0
}

I quickly checked the involved tasks/bootstrap_local_vm/01_prepare_routing_rules.yml and it seems that there is actually an intended condition that should skip this task when the rule is already present. This, however, doesn't seem to work.

The role is run with:

# ansible --version
ansible 2.9.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
# rpm -q ansible
ansible-2.9.4-1.el8.noarch

Pass variables between role invocations

I fear that we have an issue on

- name: Add an entry for this host on /etc/hosts on the local VM
  lineinfile:
    dest: /etc/hosts
    line: "{{ he_host_ip }} {{ he_host_name }}"

in tasks/bootstrap_local_vm/03_engine_initial_tasks.yml
due to the fact that he_host_ip and he_host_name are empty there.

When deploying the hosted engine, step 3 (prepare VM) fails

Log info: fatal: [localhost]: FAILED! => msg: the ovirt_host_facts module has been renamed to ovirt_host_info, and the renamed one no longer returns ansible_facts

The deployment environment: engine RPM version: ovirt-engine-appliance-4.3; OS: oVirt Node 4.3.7.


Remote host install assumes root as connection username

File: tasks/bootstrap_local_vm/02_create_local_vm.yml

      ansible_ssh_extra_args: >-
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {% if he_ansible_host_name != "localhost" %}
        -o ProxyCommand="ssh -W %h:%p -q
        {% if not host_key_checking %} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {% endif %}
        root@{{ he_ansible_host_name }}" {% endif %}

Assumes that root is the connection username to the hypervisor. Ideally this would be pulled as the ansible_user from the inventory for the he_ansible_host.

Ansible inventory for he_host has to be FQDN

This probably just needs to be made clear at the README level, but if the he_host is not fully qualified, the task "Add an entry for this host on /etc/hosts on the local VM" will fail, as it resolves hostname[{{ he_host_fqdn }}] or a variable to that effect, rather than referring to the actual inventory name defined.
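
For illustration, a minimal inventory sketch that uses the FQDN as the inventory name (the hostname is only an example):

all:
  hosts:
    host123.localdomain: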

hosted-engine deploy failed with VLAN_ID

I'm trying to deploy the hosted-engine on a machine with a VLAN setup. From what I can tell, the VLAN ID is correctly extracted, but I can't figure out which variable shouldn't be None and why it is not set.
Any help appreciated.

Host: CentOS 7.7.1908
oVirt: 4.3.8
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Detect VLAN ID]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 changed: [localhost]
DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 TASK [ovirt.hosted_engine_setup : debug]
DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 vlan_id_out: {'stderr_lines': [], u'changed': True, u'end': u'2020-01-29 11:37:31.444252', u'stdout': u'97', u'cmd': u"ip -d link show enp2s0f0.97 | grep 'vlan ' | grep -Po 'id \\K[\\d]+' | cat", 'failed': False, u'delta': u'0:00:00.007897', u'stderr': u'', u'rc': 0, 'stdout_lines': [u'97'], u'start': u'2020-01-29 11:37:31.436355'}
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Set Engine public key as authorized key without validating the TLS/SSL certificates]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 changed: [localhost]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : include_tasks]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ok: [localhost]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ok: [localhost]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Ensure that the target datacenter is present]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ok: [localhost]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Ensure that the target cluster is present in the target datacenter]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ok: [localhost]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Check actual cluster location]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 skipping: [localhost]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Enable GlusterFS at cluster level]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 skipping: [localhost]
INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Set VLAN ID at datacenter level]
DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 {u'invocation': {u'module_args': {u'comment': None, u'external_provider': None, u'timeout': 180, u'description': None, u'name': u'ovirtmgmt', u'poll_interval': 3, u'state': u'present', u'nested_attributes': [], u'label': None, u'fetch_nested': False, u'vm_network': None, u'data_center': u'Default', u'clusters': None, u'vlan_tag': 97, u'mtu': None, u'id': None, u'wait': True}}, u'msg': u"Entity 'None' was not found.", u'exception': u'Traceback (most recent call last):\n File "/tmp/ansible_ovirt_network_payload_3F32JI/ansible_ovirt_network_payload.zip/ansible/modules/cloud/ovirt/ovirt_network.py", line 327, in main\n File "/tmp/ansible_ovirt_network_payload_3F32JI/ansible_ovirt_network_payload.zip/ansible/module_utils/ovirt.py", line 592, in create\n new_entity = self.build_entity()\n File "/tmp/ansible_ovirt_network_payload_3F32JI/ansible_ovirt_network_payload.zip/ansible/modules/cloud/ovirt/ovirt_network.py", line 175, in build_entity\n File "/tmp/ansible_ovirt_network_payload_3F32JI/ansible_ovirt_network_payload.zip/ansible/module_utils/ovirt.py", line 327, in get_id_by_name\n raise Exception("Entity \'%s\' was not found." % name)\nException: Entity \'None\' was not found.\n', u'changed': False, u'_ansible_no_log': False}
ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 Exception: Entity 'None' was not found.
ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Entity 'None' was not found."}

Module copy issue introduced in ansible 2.9.12

Hi,
there is an issue introduced in Ansible 2.9.12.
The issue is that when copying files, the destination does not have the same permissions as the original (it has 0600).
Please check your role for this issue.

Task "Install oVirt Hosted Engine packages" still use yum module in Node 4.4.1.

system: oVirt Node 4.4.1-2020071311 (https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/4.4.1-2020071311/el8/)

When I tried to Migrate from a standalone Engine to a self-hosted engine.

After I run this command:

hosted-engine --deploy --restore-from-file=file.backup

It tells me an error like this:

[ INFO ] TASK [ovirt.hosted_engine_setup : Install oVirt Hosted Engine packages]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the dnf Ansible module instead."}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

I've already done "ovirt-hosted-engine-cleanup" before deploy.

Full SSH log:

SSH log.txt

Full error log:

ovirt-hosted-engine-setup-20200728095418-e64xay.log

The ipv4 filter requires python's netaddr be installed on the ansible controller

Hi

I'm running into this issue:

TASK [ovirt.hosted_engine_setup : Wait for the bridge to appear on the host] ************************************************************************************************************************************
changed: [all-in-one.home.adrians.computer]

TASK [ovirt.hosted_engine_setup : Refresh network facts] ********************************************************************************************************************************************************
ok: [all-in-one.home.adrians.computer]

TASK [ovirt.hosted_engine_setup : Fetch IPv4 CIDR for virbr0] ***************************************************************************************************************************************************
fatal: [all-in-one.home.adrians.computer]: FAILED! => {"msg": "The ipv4 filter requires python's netaddr be installed on the ansible controller"}

PLAY RECAP ******************************************************************************************************************************************************************************************************
all-in-one.home.adrians.computer : ok=103  changed=27   unreachable=0    failed=1    skipped=41   rescued=0    ignored=0   

and I have netaddr installed

$ ll -d /usr/lib/python2.7/site-packages/netad*
drwxr-xr-x. 6 root root 4096 May 24 09:41 /usr/lib/python2.7/site-packages/netaddr
drwxr-xr-x. 2 root root 4096 May 24 09:41 /usr/lib/python2.7/site-packages/netaddr-0.7.19-py2.7.egg-info

Restore initial libvirt default network on failures

An IPv6 deployment attempt will set up the libvirt default network for IPv6 only, so a second IPv4 attempt will fail with something like:

[ INFO ] TASK [oVirt.hosted-engine-setup : Fetch IPv4 CIDR for "virbr0"]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ipv4'\n\nThe error appears to have been in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml': line 57, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tags: ['skip_ansible_lint']\n - name: Fetch IPv4 CIDR for \"{{ virbr_default }}\"\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"}

Always restoring /usr/share/libvirt/networks/default.xml over /etc/libvirt/qemu/networks/default.xml should prevent this

task "Rename previous HE storage domain to avoid name conflicts" fails due to unknown variable

The task "Rename previous HE storage domain to avoid name conflicts" fails with following error message

The task includes an option with an undefined variable. The error was: 'STORAGE_DOMAIN_NAME' is undefined

The error appears to have been in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/restore_backup.yml': line 71, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- debug: var=db_remove_he_host
- name: Rename previous HE storage domain to avoid name conflicts
  ^ here

OS: CentOS Linux release 7.6.1810 (Core)
version: ovirt-hosted-engine-setup-2.3.3-1.el7.noarch

I got around this issue by patching the file (but I'm not sure if this is the right way to fix it):
sed -i.bak 's/STORAGE_DOMAIN_NAME/he_storage_domain_name/' /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/restore_backup.yml

https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/tasks/restore_backup.yml#L75-L76

Need to handle the scenario of running this role on deployed environment

We need to handle the scenario of running this role on an already deployed environment.

IMHO we should skip with an error message, something like:
"This environment is already deployed; to be able to redeploy, please run the ovirt-hosted-engine-cleanup command on the "

In my case the environment was partially destroyed and I got the following error:
TASK [oVirt.hosted-engine-setup : Fail if user chose more memory then the available memory] ************************************************************************************************************************************************
task path: /tmp/ovirt-ansible-hosted-engine-setup/tasks/pre_checks/validate_memory_size.yml:35
Wednesday 05 December 2018 15:34:57 +0200 (0:00:00.045) 0:02:42.445 ****
fatal: [automation-xxx.yyy.com]: FAILED! => {
"changed": false,
"msg": "Not enough memory: 12288MB, while only 9740MB are available on the host"
}

Need to add netaddr as a requirement since you use ipv4 filter

Need to add netaddr as a requirement since you use ipv4 filter in:
ovirt-ansible-hosted-engine-setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml:44

as you can see:
https://docs.ansible.com/ansible/2.5/user_guide/playbooks_filters_ipaddr.html

To use this filter in Ansible, you need to install netaddr Python library on a computer on which you use Ansible (it is not required on remote hosts). It can usually be installed either via your system package manager, or using pip:

hosted_engine_deploy fails for remote host

I am trying to run a remote deployment using this Ansible role and it keeps failing for me on this. Not sure why. Probably a bug?

TASK [ovirt.hosted_engine_setup : Wait for the local VM]
fatal: [smicro-5037-02.cfme.lab.eng.rdu2.redhat.com -> rhv-ims-hosted-engine.cfme.lab.eng.rdu2.redhat.com]: FAILED! => {"changed": false, "elapsed": 195, "msg": "timed out waiting for ping module test success: Failed to connect to the host via ssh: ssh: connect to host rhv-ims-hosted-engine.cfme.lab.eng.rdu2.redhat.com port 22: Connection timed out"}

ip -j rule -> Option "-j" is unknown on RHEL 7.8

The 01_prepare_routing_rules.yml tasks fail because the ip command is issued with an unknown option:

[root@lynx09 ~]# cat /etc/*elease
NAME="Red Hat Enterprise Linux"
VERSION="7.8"
VERSION_ID="7.8"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Red Hat Virtualization Host"
VARIANT_ID="ovirt-node"
PRETTY_NAME="Red Hat Virtualization Host 4.3.9 (el7.8)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.8:GA:hypervisor"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

FIXME

REDHAT_BUGZILLA_PRODUCT="Red Hat Virtualization"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.8
REDHAT_SUPPORT_PRODUCT="Red Hat Virtualization"
REDHAT_SUPPORT_PRODUCT_VERSION=7.8
Red Hat Enterprise Linux release 7.8
Red Hat Enterprise Linux release 7.8

[root@lynx09 ~]# ip -j rule
Option "-j" is unknown, try "ip -help".

Add role variable for deploying hosted engine on specific datacenter and cluster instead of using the default objects

Hi,
It would be great if we could deploy the hosted engine host/vm/sd on a specific datacenter and cluster instead of using the default objects:
Actual Result:
hosted engine host/vm/sd deploying on:
Default - datacenter
Default - cluster

Expected to add it to the general variable in the JSON file:
"dc_name": "dc_number_1",
"cl_name": "cl_number_1",
to be able to deploy hosted engine host/vm/sd on specific datacenter and cluster.

Thanks,
Kobi

Failure while waiting for the host to be up

Hello contributors, first of all I would like to thank you for your hard work providing these Ansible roles.

To the issue at hand: I am trying to install the oVirt engine in HA mode using this playbook.
Below are the variables i am using:

he_bridge_if: 'p3p1'
he_fqdn: 'engine.example.com'
he_vm_ip_addr: '192.168.44.189'
he_vm_ip_prefix: '24'
he_gateway: '192.168.44.254'
he_dns_addr:
  - '8.8.8.8'
  - '192.168.195.1'
  - '192.168.163.1'
he_vm_etc_hosts: true
he_vm_mac_addr: '00:a5:3f:66:ba:12'
he_domain_type: 'nfs'
he_storage_domain_path: "/var/nfs"
he_storage_domain_addr: 'nfs.example.com'
he_mem_size_MB: '8192'
he_vcpus: '2'
he_ansible_host_name: 'node-01.example.com'
he_host_name: 'node-01.example.com'
he_network_test: 'ping'

So currently I am running the playbook from my workstation PC targeting 'node-01.example.com'.

The role works fine until the point after it has added the host to the engine. Then it goes through the retries, but it cannot get the right response back from the engine, with the Ansible output below:

TASK [ovirt.hosted_engine_setup : Wait for the host to be up] *******************************************************************************************************************************
task path: /Users/user/.ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml:111
fatal: [node-01.example.com]: FAILED! => {
    "ansible_facts": {
        "ovirt_hosts": []
    },
    "attempts": 120,
    "changed": false,
    "invocation": {
        "module_args": {
            "all_content": false,
            "fetch_nested": false,
            "nested_attributes": [],
            "pattern": "name={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False'}"
        }
    }
}

Log from the ovirt-engine:

2019-05-30 10:28:20,158+01 INFO [org.ovirt.engine.core.bll.SearchQuery] (default task-1) [6f91508e-4f5e-4a5d-b3a1-23f96af7e24b] ResourceManager::searchBusinessObjects - erroneous search text - ''Hosts : name={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False'}'' error - ''Hosts : name=$'c$hanged': False, 'skipped': True, 'skip_reason': 'Conditional result was False'}''

I am running the following software:

  • Ansible 2.7.8 on Mac OS
  • Latest Ansible roles (downloaded from ansible-galaxy)
  • oVirt node 4.4.0 for the host OS

Thank you

Use static imports

Currently the deploy playbook looks too complex:

- name: Hosted-Engine-Setup_Part_01
  hosts: localhost
  connection: local
  vars_files:
    - passwords.yml
  vars:
    he_install_packages: true
    he_pre_checks: true
    he_initial_clean: true
    he_bootstrap_local_vm: true
    ovirt_repositories_ovirt_release_rpm: "{{ ovirt_repo_release_rpm }}"
  roles:
    - role: oVirt.repositories
    - role: oVirt.hosted-engine-setup

- name: Hosted-Engine-Setup_Part_02
  hosts: engine
  vars_files:
    - passwords.yml
  vars:
    he_bootstrap_pre_install_local_engine_vm: true
  roles:
    - role: oVirt.hosted-engine-setup

- name: Hosted-Engine-Setup_Part_03
  hosts: engine
  vars_files:
    - passwords.yml
  vars:
    ovirt_engine_setup_hostname: "{{ he_fqdn.split('.')[0] }}"
    ovirt_engine_setup_organization: "{{ he_cloud_init_domain_name }}"
    ovirt_engine_setup_dwh_db_host: "{{ he_fqdn.split('.')[0] }}"
    ovirt_engine_setup_firewall_manager: null
    ovirt_engine_setup_answer_file_path: /root/ovirt-engine-answers
    ovirt_engine_setup_use_remote_answer_file: True
    ovirt_engine_setup_accept_defaults: True
    ovirt_engine_setup_update_all_packages: false
    ovirt_engine_setup_offline: true
    ovirt_engine_setup_admin_password: "{{ he_admin_password }}"
  roles:
    - role: oVirt.engine-setup

- name: Hosted-Engine-Setup_Part_04
  hosts: engine
  vars_files:
    - passwords.yml
  vars:
    he_bootstrap_post_install_local_engine_vm: true
  roles:
    - role: oVirt.hosted-engine-setup

- name: Hosted-Engine-Setup_Part_05
  hosts: localhost
  connection: local
  vars_files:
    - passwords.yml
  vars:
    he_bootstrap_local_vm_add_host: true
    he_create_storage_domain: true
    he_create_target_vm: true
  roles:
    - role: oVirt.hosted-engine-setup

- name: Hosted-Engine-Setup_Part_06
  hosts: engine
  vars_files:
    - passwords.yml
  vars:
    he_engine_vm_configuration: true
  roles:
    - role: oVirt.hosted-engine-setup

- name: Hosted-Engine-Setup_Part_07
  hosts: localhost
  connection: local
  vars_files:
    - passwords.yml
  vars:
    he_final_tasks: true
    he_final_clean: true
  roles:
    - role: oVirt.hosted-engine-setup

AFAIR this was because delegate_to doesn't work with include_tasks but does work with import_tasks; can we use that instead? It would be more user friendly, and maybe in the future, when delegate_to works with include_tasks, we won't have to change the interface through which the user executes the playbook.
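
For illustration, a minimal sketch of the difference between the two directives (the file name is taken from this role; whether it can simply be swapped is exactly what this issue asks):

# Dynamic include: evaluated at runtime, which is why the stage variables are
# currently needed to decide what runs on each pass.
- include_tasks: bootstrap_local_vm/01_prepare_routing_rules.yml

# Static import: resolved at parse time, as proposed in this issue.
- import_tasks: bootstrap_local_vm/01_prepare_routing_rules.yml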

Use singular form for oVirt ansible modules (ansible 2.8)

All the oVirt related modules are going to be renamed to the singular form in Ansible 2.8, and now we see a deprecation message:

2018-11-20 09:43:23,394-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 host_result_up_check: {u'deprecations': [{u'msg': u"The 'ovirt_hosts_facts' module is being renamed 'ovirt_host_facts'", u'version': 2.8}], 'attempts': 19, u'changed': False, u'ansible_facts': {u'ovirt_hosts': [{u'comment': u'', u'update_available': False, u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [], u'cluster': {u'href': u'/ovirt-engine/api/clusters/1402ac0c-ecd2-11e8-9ef1-5452c0a8c863', u'id': u'1402ac0c-ecd2-11e8-9ef1-5452c0a8c863'}, u'href': u'/ovirt-engine/api/hosts/a6ad8e09-26b8-4d6d-8205-4748919aa447', u'devices': [], u'id': u'a6ad8e09-26b8-4d6d-8205-4748919aa447', u'external_status': u'ok', u'statistics': [], u'certificate': {u'organization': u'lago.local', u'subject': u'O=lago.local,CN=lago-he-basic-ansible-suite-master-host-0'}, u'nics': [], u'iscsi': {u'initiator': u'iqn.2014-07.org.lago:lago-he-basic-ansible-suite-master-host-0:lago-he-basic-ansible-suite-master-host-0'}, u'cpu': {u'speed': 2397.0, u'name': u'Intel Xeon E312xx (Sandy Bridge)', u'topology': {u'cores': 1, u'threads': 1, u'sockets': 2}}, u'port': 54321, u'hardware_information': {u'version': u'RHEL 7.5.0 PC (i440FX + PIIX, 1996)', u'uuid': u'2DD014B0-3210-4653-AE5D-D8787B03AB4A', u'family': u'Red Hat Enterprise Linux', u'product_name': u'KVM', u'supported_rng_sources': [u'hwrng', u'random'], u'manufacturer': u'Red Hat'}, u'version': {u'full_version': u'vdsm-4.30.2-11.git6c62ff1.el7', u'revision': 0, u'major': 4, u'minor': 30, u'build': 2}, u'memory': 5673844736, u'ksm': {u'enabled': False}, u'se_linux': {u'mode': u'enforcing'}, u'type': u'rhel', u'status': u'up', u'tags': [], u'katello_errata': [], u'external_network_provider_configurations': [], u'ssh': {u'port': 22, u'fingerprint': u'SHA256:+jGoKFmwcsZaj+HagLa/NWmYzJ0LiozMZQqxomsxDg4'}, u'address': u'lago-he-basic-ansible-suite-master-host-0', u'numa_nodes': [], u'device_passthrough': {u'enabled': False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported': False, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8', u'revision': 0, u'major': 3, u'minor': 9, u'build': 0}, u'power_management': {u'kdump_detection': True, u'enabled': False, u'pm_proxies': [], u'automatic_pm_enabled': True}, u'name': u'lago-he-basic-ansible-suite-master-host-0', u'max_scheduling_memory': 5337251840, u'summary': {u'active': 1, u'migrating': 0, u'total': 1}, u'auto_numa_status': u'disable', u'transparent_huge_pages': {u'enabled': True}, u'network_attachments': [], u'os': {u'version': {u'full_version': u'7 - 5.1804.el7.centos', u'major': 7}, u'type': u'RHEL', u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline': u'BOOT_IMAGE=/vmlinuz-3.10.0-862.2.3.el7.x86_64 root=UUID=548a847d-dc36-462c-b6e0-3b3578098519 ro console=tty0 rd_NO_PLYMOUTH crashkernel=auto console=ttyS0,115200 LANG=en_US.UTF-8'}, u'storage_connection_extensions': [], u'kdump_status': u'disabled', u'spm': {u'priority': 5, u'status': u'none'}}]}, 'failed': False}

Let's consume them with the singular form name only, e.g.:
ovirt_hosts_facts -> ovirt_host_facts
ovirt_storage_domains_facts -> ovirt_storage_domain_facts
ovirt_storage_domains -> ovirt_storage_domain
ovirt_vms -> ovirt_vm
ovirt_vms_facts -> ovirt_vm_facts
...

It fails to install over a VLAN-tagged LACP bond

TASK [ovirt.hosted_engine_setup : Parse libvirt default network configuration] *********************************************************
fatal: [voyager]: FAILED! => {"changed": false, "msg": "network default not found"}

Does not properly setup he_bridge_if

I would expect the HostedEngineLocal VM to be configured with a bridge attached to he_bridge_if, but this is not done, resulting in the HostedEngineLocal VM being unreachable. This seems like a regression from the 4.2 behavior.
