redhatqe / teflo

Teflo is a standalone orchestration software that controls the flow of a set of testing scenarios, allowing users to provision machines, deploy software, execute tests against them and manage generated artifacts and report results.

Home Page: https://teflo.readthedocs.io/en/latest/

License: GNU General Public License v3.0

Makefile 0.11% Python 99.56% Shell 0.08% Jinja 0.24%

teflo's Introduction

Welcome to Teflo!

Warning

This project is in maintenance mode and will not have any new feature development.

What is Teflo?

TEFLO stands for Test Execution Framework Libraries and Objects.

Teflo is an orchestration software that controls the flow of a set of testing scenarios. It is a standalone tool written in Python that includes all aspects of the workflow. It allows users to provision machines, deploy software, execute tests against them and manage generated artifacts and report results.

Teflo provides structure, readability, extensibility, and flexibility by:

  • providing a YAML file to express a test workflow as a series of steps.
  • enabling integration of external tooling to execute the test workflow as defined by the steps.

Teflo can be used for an E2E (end to end) multi-product scenario. Teflo handles coordinating the E2E task workflow to drive the scenario execution.


What does an E2E workflow consist of?

At a high level, Teflo executes the following tasks when processing a scenario:

  • Provision system resources
  • Perform system configuration
  • Install products
  • Configure products
  • Install test frameworks
  • Configure test frameworks
  • Execute tests
  • Report results
  • Destroy system resources
  • Send Notifications

Teflo has the following stages:

Provision - Create resources to test against (physical resources, VMs etc)

Orchestrate - Configure the provisioned resources (e.g. install packages on them, run scripts, ansible playbooks etc)

Execute - Execute tests on the configured resources

Report - Send or collect logs from the tests run

Notification - Send email/gchat/slack notification during each stage of teflo run or at the end based on the set triggers

Cleanup - Cleanup all the deployed resources.

These stages can be run individually or together.

Teflo follows a pluggable architecture, where users can add different plugins to support external tools. Below is a diagram that gives you a quick overview of the Teflo workflow.

/docs/_static/teflo_workflow.png

teflo's People

Contributors

dannyb48, dno-github, greg-hellings, guyyaakov1, jbpratt, junqizhang0, mcornea, rujutashinde, ryankwilliams, shay6


teflo's Issues

bkr: support `--ks-append` for provisioner plugin

We often want to append just a small set of commands onto the existing kickstart file (if one exists). Rather than requiring the entire file, we can make use of the kickstart-append option on the beaker client.

bkr workflow-simple \
  --whiteboard="example" \
  --family="RedHatEnterpriseLinux8" \
  --tag="RELEASED" \
  --variant="BaseOS" \
  --arch="x86_64" \
  --ks-append="
reqpart
part /boot --recommended
part pv.rhel --grow
volgroup ${vg_name} pv.rhel
logvol /    --vgname=${vg_name} --name=root --size=1 --grow
logvol swap --vgname=${vg_name} --name=swap --recommended
" \
  --task="/distribution/check-install" \
  --machine="${PROVISIONING_HOST}" \
  --debug \
  --prettyxml \
  --wait

Recursive include of SDFs

In a complex scenario, or a mono-repo containing numerous scenarios, sharing SDFs is desirable. The include functionality works great for this; it would be awesome to expand it to handle an included file that itself includes others.

---
name: Provision child
description: child

provision:
    - name: child
      groups:
        - localhost
      ip_address: 127.0.0.1
      ansible_params:
        ansible_connection: local
---
name: Provision parent
description: includes another SDF

include:
  - provision_child.yml
---
name: Example
description: hello world

include:
  - provision_parent.yml
❯ teflo validate -s scenario.yml
2021-06-03 11:40:44,786 WARNING Scenario workspace was not set, therefore the workspace is automatically assigned to the current working directory. You may experience problems if files needed by teflo do not exists in the scenario workspace.
2021-06-03 11:40:44,790 INFO

2021-06-03 11:40:44,790 INFO                                TEFLO RUN (START)
2021-06-03 11:40:44,790 INFO -------------------------------------------------------------------------------
2021-06-03 11:40:44,791 INFO  * Data Folder           : .teflo/pqlidmh3j7
2021-06-03 11:40:44,791 INFO  * Workspace             : /home/bpratt/pit/git/teflo_recursive_include
2021-06-03 11:40:44,792 INFO  * Log Level             : info
2021-06-03 11:40:44,792 INFO  * Tasks                 : ['validate']
2021-06-03 11:40:44,793 INFO  * Scenario              : Example
2021-06-03 11:40:44,793 INFO  * Included Scenario(s)  : ['Provision parent']
2021-06-03 11:40:44,793 INFO -------------------------------------------------------------------------------

2021-06-03 11:40:44,794 INFO  * Task    : validate
2021-06-03 11:40:44,794 INFO Sending out any notifications that are registered.
2021-06-03 11:40:44,795 INFO ..................................................
2021-06-03 11:40:44,796 INFO Starting tasks on pipeline: notify
2021-06-03 11:40:44,797 WARNING ... no tasks to be executed ...
2021-06-03 11:40:44,798 INFO ..................................................
2021-06-03 11:40:44,799 INFO Starting tasks on pipeline: validate
2021-06-03 11:40:44,799 INFO --> Blaster v0.4.0 <--
2021-06-03 11:40:44,800 INFO Task Execution: Sequential
2021-06-03 11:40:44,801 INFO Tasks:
2021-06-03 11:40:44,802 INFO 1. Task     : Provision parent
                                Class    : <class 'teflo.tasks.validate.ValidateTask'>
                                Methods  : ['run']
2021-06-03 11:40:44,802 INFO 2. Task     : Example
                                Class    : <class 'teflo.tasks.validate.ValidateTask'>
                                Methods  : ['run']
2021-06-03 11:40:44,803 INFO ** BLASTER BEGIN **
2021-06-03 11:40:44,804 INFO Validating <class 'teflo.resources.scenario.Scenario'> (Provision parent)
2021-06-03 11:40:44,858 INFO Validating <class 'teflo.resources.scenario.Scenario'> (Example)
2021-06-03 11:40:44,908 INFO ** BLASTER COMPLETE **
2021-06-03 11:40:44,908 INFO     -> TOTAL DURATION: 0h:0m:0s
2021-06-03 11:40:44,909 INFO ..................................................
2021-06-03 11:40:44,909 INFO Sending out any notifications that are registered.
2021-06-03 11:40:44,911 INFO ..................................................
2021-06-03 11:40:44,912 INFO Starting tasks on pipeline: notify
2021-06-03 11:40:44,913 WARNING ... no tasks to be executed ...
2021-06-03 11:40:44,921 INFO

2021-06-03 11:40:44,922 INFO                                SCENARIO RUN (END)
2021-06-03 11:40:44,923 INFO -------------------------------------------------------------------------------
2021-06-03 11:40:44,924 INFO  * Duration                       : 0h:0m:0s
2021-06-03 11:40:44,924 INFO  * Passed Tasks                   : ['validate']
2021-06-03 11:40:44,925 INFO  * Results Folder                 : .teflo/.results
2021-06-03 11:40:44,926 INFO  * Included Scenario Definition   : ['.teflo/.results/Provision parent_results.yml']
2021-06-03 11:40:44,926 INFO  * Final Scenario Definition      : .teflo/.results/results.yml
2021-06-03 11:40:44,927 INFO -------------------------------------------------------------------------------
2021-06-03 11:40:44,927 INFO TEFLO RUN (RESULT=PASSED)

pit/git/teflo_recursive_include  v3.9.5(venv) 10s
❯ teflo run -t provision -s .teflo/.results/results.yml
--------------------------------------------------
Teflo Framework v1.2.0
Copyright (C) 2021, Red Hat, Inc.
--------------------------------------------------
2021-06-03 11:40:54,859 WARNING Scenario workspace was not set, therefore the workspace is automatically assigned to the current working directory. You may experience problems if files needed by teflo do not exists in the scenario workspace.
2021-06-03 11:40:54,863 INFO

2021-06-03 11:40:54,863 INFO                                TEFLO RUN (START)
2021-06-03 11:40:54,864 INFO -------------------------------------------------------------------------------
2021-06-03 11:40:54,864 INFO  * Data Folder           : .teflo/ox9ut0cb8e
2021-06-03 11:40:54,864 INFO  * Workspace             : /home/bpratt/pit/git/teflo_recursive_include
2021-06-03 11:40:54,865 INFO  * Log Level             : info
2021-06-03 11:40:54,865 INFO  * Tasks                 : ['provision']
2021-06-03 11:40:54,865 INFO  * Scenario              : Example
2021-06-03 11:40:54,866 INFO  * Included Scenario(s)  : ['Provision parent']
2021-06-03 11:40:54,866 INFO -------------------------------------------------------------------------------

2021-06-03 11:40:54,866 INFO  * Task    : provision
2021-06-03 11:40:54,867 INFO Sending out any notifications that are registered.
2021-06-03 11:40:54,867 INFO ..................................................
2021-06-03 11:40:54,867 INFO Starting tasks on pipeline: notify
2021-06-03 11:40:54,868 WARNING ... no tasks to be executed ...
2021-06-03 11:40:54,869 INFO ..................................................
2021-06-03 11:40:54,869 INFO Starting tasks on pipeline: provision
2021-06-03 11:40:54,869 WARNING ... no tasks to be executed ...
2021-06-03 11:40:54,872 INFO ..................................................
2021-06-03 11:40:54,872 INFO Sending out any notifications that are registered.
2021-06-03 11:40:54,873 INFO ..................................................
2021-06-03 11:40:54,873 INFO Starting tasks on pipeline: notify
2021-06-03 11:40:54,874 WARNING ... no tasks to be executed ...
2021-06-03 11:40:54,878 INFO

2021-06-03 11:40:54,879 INFO                                SCENARIO RUN (END)
2021-06-03 11:40:54,879 INFO -------------------------------------------------------------------------------
2021-06-03 11:40:54,880 INFO  * Duration                       : 0h:0m:0s
2021-06-03 11:40:54,880 INFO  * Passed Tasks                   : ['provision']
2021-06-03 11:40:54,880 INFO  * Results Folder                 : .teflo/.results
2021-06-03 11:40:54,881 INFO  * Included Scenario Definition   : ['.teflo/.results/Provision parent_results.yml']
2021-06-03 11:40:54,881 INFO  * Final Scenario Definition      : .teflo/.results/results.yml
2021-06-03 11:40:54,882 INFO -------------------------------------------------------------------------------
2021-06-03 11:40:54,882 INFO TEFLO RUN (RESULT=PASSED)

The scenario passes validation with no error thrown, but the child SDF is still not included.
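One way to support this would be depth-first include resolution with cycle protection. The sketch below is a hypothetical illustration, not Teflo's actual implementation: `load` stands in for whatever reads and parses an SDF by name (a local path or, per the later feature request, a URL), and each document is a plain dict with an optional `include` list.

```python
def resolve_includes(name, load, seen=None):
    """Resolve includes depth-first: children come before the document
    that includes them, and a name is never loaded twice (so cycles and
    diamond includes are handled). `load(name)` returns a dict that may
    contain an 'include' list of further names."""
    seen = set() if seen is None else seen
    if name in seen:
        return []
    seen.add(name)
    doc = load(name)
    docs = []
    for child in doc.get("include", []):
        docs.extend(resolve_includes(child, load, seen))
    docs.append(doc)
    return docs
```

With the three SDFs from this report (Example includes parent, parent includes child), this ordering would surface the child scenario instead of silently dropping it.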

Multiple layers of templating aren't resolved

If I have multiple layers of Jinja templating, they are not resolved properly. For example, if my SDF file has something like this:

name: {{ some_var }}

And my vars-data file has something like this:

some_var: value-{{ child_var }}
child_var: child_val

Then the SDF should resolve to name: value-child_val but, instead, it only resolves once to name: value-{{ child_var }}. Ansible deals with this by essentially passing the values through the template engine repeatedly until the value does not change between runs. It's actually slightly more complicated than that, because they can handle {% raw %} ... {% endraw %} values. But, for getting part of the way there, just iterating through the engine until the values don't change between consecutive runs would provide enough functionality.
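The fixed-point approach described above (render repeatedly until the output stops changing) can be sketched as follows. This is a toy regex-based substitution standing in for the real Jinja engine, just to show the termination condition; `render_until_stable` and its token pattern are illustrative names, not Teflo APIs.

```python
import re

# Matches simple {{ var }} expressions; real Jinja handles far more.
TOKEN = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def render_until_stable(text, variables, max_passes=10):
    """Re-render until the output no longer changes, so nested
    references like value-{{ child_var }} resolve fully. Unknown
    variables are left in place, which also makes the loop terminate."""
    for _ in range(max_passes):
        rendered = TOKEN.sub(
            lambda m: str(variables.get(m.group(1), m.group(0))), text)
        if rendered == text:
            return rendered
        text = rendered
    return text
```

Using the example above, `{{ some_var }}` resolves to `value-{{ child_var }}` on the first pass and `value-child_val` on the second; the third pass changes nothing and the loop exits.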

Feature: Allow including of SDFs from URL

---
name: ...
description: ...

include:
- orchestrate.yml
- https://github.com/RedHatQE/teflo/examples/execute.yml

This should resolve any includes the remote SDF may have as well.

Teflo fails to preserve whitespace when dumping ansible output

Ansible config

[defaults]
stdout_callback         = yaml
bin_ansible_callbacks   = True

Ansible playbook

- name: Example playbook
  hosts: localhost
  connection: local
  tasks:
    - command: oc get pods -o yaml
      register: pods
      no_log: true

    - debug:
        msg: "{{ pods.stdout }}"

Teflo SDF

---
name: example
description: example

provision:
  - name: driver
    groups:
      - localhost
    ip_address: 127.0.0.1

orchestrate:
  - name: example
    description: ...
    orchestrator: ansible
    hosts: localhost
    ansible_playbook:
      name: example.yml

Output from ansible directly:

PLAY [Example playbook] ******************************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************************************
ok: [localhost]

TASK [command] ***************************************************************************************************************************************************
changed: [localhost]

TASK [debug] *****************************************************************************************************************************************************
ok: [localhost] =>
  msg: |-
    apiVersion: v1
    items:
    - apiVersion: v1
      kind: Pod
      metadata:
        annotations:
...

Output from Teflo (latest version)

2022-01-17 09:31:52,270 INFO PLAYBOOK: example.yml **********************************************************
2022-01-17 09:31:52,270 INFO 1 plays in /tmp/xyz/example.yml
2022-01-17 09:31:52,271 INFO
2022-01-17 09:31:52,271 INFO PLAY [Example playbook] ********************************************************
2022-01-17 09:31:52,277 INFO
2022-01-17 09:31:52,277 INFO TASK [Gathering Facts] *********************************************************
2022-01-17 09:31:52,277 INFO task path: /tmp/xyz/example.yml:1
2022-01-17 09:31:53,452 INFO ok: [127.0.0.1]
2022-01-17 09:31:53,459 INFO META: ran handlers
2022-01-17 09:31:53,464 INFO
2022-01-17 09:31:53,464 INFO TASK [command] *****************************************************************
2022-01-17 09:31:53,464 INFO task path: /tmp/xyz/example.yml:5
2022-01-17 09:31:53,985 INFO changed: [127.0.0.1] => changed=true
2022-01-17 09:31:53,985 INFO censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result'
2022-01-17 09:31:53,998 INFO
2022-01-17 09:31:53,998 INFO TASK [debug] *******************************************************************
2022-01-17 09:31:53,998 INFO task path: /tmp/xyz/example.yml:9
2022-01-17 09:31:54,021 INFO ok: [127.0.0.1] =>
2022-01-17 09:31:54,022 INFO msg: |-
2022-01-17 09:31:54,022 INFO apiVersion: v1
2022-01-17 09:31:54,022 INFO items:
2022-01-17 09:31:54,022 INFO - apiVersion: v1
2022-01-17 09:31:54,022 INFO kind: Pod
2022-01-17 09:31:54,023 INFO metadata:
2022-01-17 09:31:54,023 INFO annotations:
...
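The root cause appears to be that each line of ansible output is passed through a logger after its leading whitespace is lost. A minimal fix, assuming the output arrives as multi-line text, is to log each line verbatim (the helper name here is hypothetical):

```python
import logging

def log_multiline(logger, text):
    """Log every line of captured ansible output untouched: no strip(),
    no reflow, so YAML-formatted callback output keeps its indentation."""
    for line in text.splitlines():
        logger.info(line)
```

With this, the `msg: |-` block above would keep its nested indentation in the Teflo log.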

Error presented is incorrect

The error presented upon failure is not relevant. The playbook fails due to a fatal Ansible task, not the output that is being shown by Teflo.

2022-01-14 05:37:01,292 INFO FAILED - RETRYING: [127.0.0.1]: Wait until RHOAM is installed (1 retries left).
2022-01-14 05:38:01,919 INFO fatal: [127.0.0.1]: FAILED! => changed=true
2022-01-14 05:38:01,919 INFO attempts: 30
2022-01-14 05:38:01,920 INFO failed_when_result: true
2022-01-14 05:38:01,920 INFO output:
2022-01-14 05:38:01,922 INFO state: installing
2022-01-14 05:38:01,923 INFO updated_timestamp: '2022-01-14T11:28:39.49225Z'
2022-01-14 05:38:01,923 INFO
2022-01-14 05:38:01,923 INFO PLAY RECAP *********************************************************************
2022-01-14 05:38:01,923 INFO 127.0.0.1                  : ok=17   changed=9    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
2022-01-14 05:38:01,923 INFO
2022-01-14 05:38:01,987 ERROR Orchestration failed : Playbook /home/bpratt/pit/git/mps/ms/solution/venv-test/solution/ansible/create_cluster.yml failed to run
The error is:
[WARNING]: Found variable using reserved name: hosts
/home/bpratt/pit/git/mps/ms/solution/venv-test/lib64/python3.10/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vault.corp.redhat.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(
/home/bpratt/pit/git/mps/ms/solution/venv-test/lib64/python3.10/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vault.corp.redhat.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(

2022-01-14 05:38:02,034 ERROR Failed to run orchestration Provision osd cluster + install ms addons
2022-01-14 05:38:02,035 ERROR Orchestration failed : Failed to perform  Provision osd cluster + install ms addons
2022-01-14 05:38:02,036 ERROR A exception was raised while processing task: Provision osd cluster + install ms addons method: run
Traceback (most recent call last):
  File "/home/bpratt/pit/git/mps/ms/solution/venv-test/lib64/python3.10/site-packages/blaster/blast.py", line 83, in run
    value = getattr(task_obj, method)()
  File "/home/bpratt/pit/git/mps/ms/solution/venv-test/lib64/python3.10/site-packages/teflo/tasks/orchestrate.py", line 59, in run
    self.orchestrator.run()
  File "/home/bpratt/pit/git/mps/ms/solution/venv-test/lib64/python3.10/site-packages/teflo/orchestrators/action_orchestrator.py", line 68, in run
    raise TefloOrchestratorError("Orchestration failed : Failed to perform  %s" % self.plugin.action_name)
teflo.exceptions.TefloOrchestratorError: Orchestration failed : Failed to perform  Provision osd cluster + install ms addons

Teflo failing to install rsync during reporting phase

Starting about 2 days ago, our teflo runs have been failing to install rsync (Log excerpt here)

2022-11-08 16:48:56,430 INFO TASK [check if rsync package is installed] *************************************
2022-11-08 16:48:56,430 INFO changed: [10.8.0.174]
2022-11-08 16:48:56,910 INFO
2022-11-08 16:48:56,911 INFO TASK [Add repository] **********************************************************
2022-11-08 16:48:56,911 INFO ok: [10.8.0.174]
2022-11-08 16:48:58,336 INFO
2022-11-08 16:48:58,337 INFO TASK [Install rsync] ***********************************************************
2022-11-08 16:48:58,337 INFO task path: /var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/openstack_lv2guest_rhel/cbn_execute_synchronize_i33dw.yml:89
2022-11-08 16:48:58,337 INFO fatal: [10.8.0.174]: FAILED! => changed=false
2022-11-08 16:48:58,337 INFO msg: 'Failed to download metadata for repo ''epel'': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried'
2022-11-08 16:48:58,337 INFO rc: 1
2022-11-08 16:48:58,337 INFO results: []
2022-11-08 16:48:58,405 INFO
2022-11-08 16:48:58,405 INFO TASK [copy failed artifacts results to file] ***********************************
2022-11-08 16:48:58,405 INFO task path: /var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/openstack_lv2guest_rhel/cbn_execute_synchronize_i33dw.yml:129
2022-11-08 16:48:58,405 INFO fatal: [10.8.0.174 -> localhost]: FAILED! =>
2022-11-08 16:48:58,405 INFO msg: '''sync_output'' is undefined'
2022-11-08 16:48:58,444 INFO
2022-11-08 16:48:58,445 INFO PLAY RECAP *********************************************************************
2022-11-08 16:48:58,445 INFO 10.8.0.174 : ok=4 changed=1 unreachable=0 failed=1 skipped=7 rescued=1 ignored=0
2022-11-08 16:48:58,445 INFO
2022-11-08 16:48:58,561 ERROR [WARNING]: Found variable using reserved name: hosts

2022-11-08 16:48:58,591 ERROR Failed to execute Update result xml name and save artifacts
2022-11-08 16:48:58,592 ERROR A failure occurred while trying to copy test artifacts.
2022-11-08 16:48:58,593 ERROR A exception was raised while processing task: Update result xml name and save artifacts method: run
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 359, in run
getattr(self, '%s' % attr)()
File "/var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 238, in artifacts
raise TefloExecuteError('A failure occurred while trying to copy '
teflo.exceptions.TefloExecuteError: A failure occurred while trying to copy test artifacts.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/lib64/python3.9/site-packages/blaster/blast.py", line 83, in run
value = getattr(task_obj, method)()
File "/var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/lib64/python3.9/site-packages/teflo/tasks/execute.py", line 57, in run
self.executor.run()
File "/var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/lib64/python3.9/site-packages/teflo/executors/execute_manager.py", line 72, in run
res = self.plugin.run()
File "/var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 367, in run
self.artifacts()
File "/var/lib/jenkins/workspace/RhelLayeredProducts/rhosp-16.1-lv2guest-rhel-8/envs/scenario/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 238, in artifacts
raise TefloExecuteError('A failure occurred while trying to copy '
teflo.exceptions.TefloExecuteError: A failure occurred while trying to copy test artifacts.

Looking at the code in playbooks.py:

- name: Add repository
  ansible.builtin.yum_repository:
    name: epel-release
    baseurl: https://dl.fedoraproject.org/pub/epel/{{ ansible_distribution_major_version }}/x86_64
    state: present
    description: EPEL YUM repo
    gpgcheck: no
  become: true
  when: (rsync_installed.rc != 0) and (ansible_facts['os_family'] == 'RedHat')

I then compared with the EPEL location here:
https://dl.fedoraproject.org/pub/epel/8/

It looks like there is an inconsistency in the way the repos are structured, and this change doesn't work for RHEL 8.
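If the mirror layout is indeed the issue, the baseurl needs to vary by release: under the current layout, EPEL 8 and later keep arch directories under `Everything/`, while EPEL 7 kept them at the top level (this layout is an observation about dl.fedoraproject.org, so verify before relying on it). A sketch of building the correct URL:

```python
def epel_baseurl(major_version):
    """Build the EPEL baseurl for a RHEL major version. EPEL >= 8 nests
    arch dirs under Everything/; EPEL 7 did not. Layout is an assumption
    based on the current dl.fedoraproject.org structure."""
    if int(major_version) >= 8:
        return f"https://dl.fedoraproject.org/pub/epel/{major_version}/Everything/x86_64"
    return f"https://dl.fedoraproject.org/pub/epel/{major_version}/x86_64"
```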

When is the next release?

The last release was back in July. When is the next release going to be available? Just wondering as there is a commit in the develop branch that would benefit my teams testing. Thanks!

RFE: Support teflo command aliases similar to git

The use case is to have a config section called aliases that would allow a user to specify aliases for their teflo run commands, similar to git aliases.

This would make teflo commands more intuitive and shorter, and would allow grouping tasks together into a single command without explicitly specifying them.

Right now we achieve this by writing lightweight wrapper shell/Python scripts around teflo, or by using tox testenvs to provide the same abstraction.

Some examples:

  1. Provisioning and Deploying a product with this command
    teflo run -t validate -t provision -t orchestrate -s <path/to/sdf> -w ./

Aliased to

[aliases]
deploy="run -t validate -t provision -t orchestrate -s <path/to/sdf> -w ./"


[user]$ teflo deploy
  2. If I want to deploy a particular product and run integration tests
    teflo run -t orchestrate -t execute -s <path/to/results.yml> -w ./ -l product-a -l product-a-integration

Aliased to

[aliases]
product-integration="run -t orchestrate -t execute -s <path/to/results.yml> -w ./ -l product-a -l product-a-integration"


[user]$ teflo product-integration
  3. Similar to the second example, but I want to run sanity tests and context-switch between running teflo automation against a dev environment vs a production environment
    teflo run -t validate -t provision -t orchestrate -t execute -s <path/to/sdf> -w ./ -l sanity --vars-data <path/to/dev/vars>
    teflo run -t validate -t provision -t orchestrate -t execute -s <path/to/sdf> -w ./ -l sanity --vars-data <path/to/prod/vars>

Aliased to

[aliases]
dev-sanity="run -t validate -t provision -t orchestrate -t execute -s <path/to/sdf> -w ./ -l sanity --vars-data <path/to/dev/vars>"
prod-sanity="run -t validate -t provision -t orchestrate -t execute -s <path/to/sdf> -w ./ -l sanity --vars-data <path/to/prod/vars>"


[user]$ teflo dev-sanity
[user]$ teflo prod-sanity
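A git-style alias expansion could be done entirely in front of the existing CLI dispatch: look the first argument up in an `[aliases]` config section and splice in the stored argument list. This is a hypothetical sketch of the requested feature, not existing Teflo code; the section name mirrors the examples above.

```python
import configparser
import shlex

def expand_alias(argv, config_text):
    """If argv[0] names an entry in the [aliases] section, replace it
    with the aliased argument list (split shell-style), keeping any
    extra arguments the user appended. Otherwise return argv unchanged."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    if argv and parser.has_option("aliases", argv[0]):
        return shlex.split(parser.get("aliases", argv[0])) + argv[1:]
    return argv
```

So `teflo deploy -s sdf.yml` would expand to the full `run -t validate -t provision ... -s sdf.yml` invocation before normal option parsing runs.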

Bad tests in test_cli.py

in tests/functional/test_cli.py the methods test_valid_run_var_file and test_valid_run_var_raw_json are invalid.

If you check the output of both tests, they complain that the file they're being passed (descriptor.yml) is invalid. Yet teflo is returning results.exit_code == 0, so the test passes. This means there are multiple problems:

  1. Teflo should not be returning 0 when it encounters the error
  2. This test isn't testing the templating at all
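The fix on the Teflo side is for the entry point to return a nonzero exit code when the vars file is invalid, so the functional tests can assert on it. A toy stand-in (the real CLI is click-based; `main` and `parse_vars` here are hypothetical names illustrating the contract, not Teflo functions):

```python
def parse_vars(text):
    """Hypothetical strict parser: accept only simple `key: value`
    lines, rejecting anything else as an invalid vars file."""
    pairs = {}
    for line in text.splitlines():
        if ":" not in line:
            raise ValueError(f"not a mapping line: {line!r}")
        key, _, value = line.partition(":")
        pairs[key.strip()] = value.strip()
    return pairs

def main(vars_file_text):
    """Entry point contract: invalid input must yield exit code 1,
    never 0, so tests asserting exit_code == 0 mean something."""
    try:
        parse_vars(vars_file_text)
    except ValueError:
        return 1
    return 0
```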

Allow multiple --vars-data arguments

Currently it's not possible to pass multiple instances of --vars-data to teflo. It would be very useful to allow this.

Additionally, the documentation makes no comment about the structure of the data that, as best I can tell, needs to be inside the vars-data file. It should be mentioned that a YAML-formatted file is expected.

Still having issues with --var-data. Now error is jinja2.exceptions.TemplateSyntaxError: expected token ',', got 'string'

Trying to test running my scenario under 1.2.5 and 2.1.0 with a modified version of the ansible vars file that I pass to --vars-data, I now get the following errors:

Teflo Framework v2.1.0
Copyright (C) 2021, Red Hat, Inc.
--------------------------------------------------
Traceback (most recent call last):
  File "/home/dbaez/.virtualenvs/psi_pipelines/bin/teflo", line 8, in <module>
    sys.exit(teflo())
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/teflo/cli.py", line 284, in run
    scenario_graph: ScenarioGraph = validate_cli_scenario_option(ctx, scenario, cbn.config, vars_data)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/teflo/helpers.py", line 1805, in validate_cli_scenario_option
    scenario_graph = validate_render_scenario(scenario, config, vars_data)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/teflo/helpers.py", line 1552, in validate_render_scenario
    temp_data = preprocyaml_jinja(temp_data)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/teflo/helpers.py", line 1496, in preprocyaml_jinja
    t = jinja2.Template(result, undefined=NullUndefined)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/jinja2/environment.py", line 1195, in __new__
    return env.from_string(source, template_class=cls)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/jinja2/environment.py", line 1092, in from_string
    return cls.from_code(self, self.compile(source), gs, None)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/jinja2/environment.py", line 757, in compile
    self.handle_exception(source=source_hint)
  File "/home/dbaez/.virtualenvs/psi_pipelines/lib/python3.7/site-packages/jinja2/environment.py", line 925, in handle_exception
    raise rewrite_traceback_stack(source=source)
  File "<unknown>", line 66, in template
jinja2.exceptions.TemplateSyntaxError: expected token ',', got 'string'

The most recent change I made to the var file was the following. I've confirmed that if I don't point --vars-data at the file containing the variables below, the run proceeds.

osp_director_input_dir: "{{ lookup('config', 'COLLECTIONS_PATH', 'COLLECTIONS_PATHS', wantlist=True, on_missing='skip')[0][0] }}/ansible_collections/css/psi/tests/environments/{{ os_cloud }}/cloud/undercloud"
osp_overcloud_input_dir: "{{ lookup('config', 'COLLECTIONS_PATH', 'COLLECTIONS_PATHS', wantlist=True, on_missing='skip')[0][0] }}/ansible_collections/css/psi/tests/environments/{{ os_cloud }}/cloud/overcloud"
undercloud_server_privkey: "{{ lookup('file', lookup('config', 'COLLECTIONS_PATH', 'COLLECTIONS_PATHS', wantlist=True, on_missing='skip')[0][0] | dirname + '/keystore/rhosp_certs/Server-Private-Key.key') }}"
undercloud_server_cert: "{{ lookup('file', lookup('config', 'COLLECTIONS_PATH', 'COLLECTIONS_PATHS', wantlist=True, on_missing='skip')[0][0] | dirname + '/keystore/rhosp_certs/c0-Wildcard-Server-Certificate.crt') }}"
undercloud_ssl_intermediate_certificate: "{{ lookup('file', lookup('config', 'COLLECTIONS_PATH', 'COLLECTIONS_PATHS', wantlist=True, on_missing='skip')[0][0] | dirname + '/keystore/rhosp_certs/DigiCert-Intermediate-CA.crt') }}"
undercloud_ssl_root_certificate: "{{ lookup('file', lookup('config', 'COLLECTIONS_PATH', 'COLLECTIONS_PATHS', wantlist=True, on_missing='skip')[0][0] | dirname + '/keystore/rhosp_certs/DigiCert-Root-CA.crt') }}"

I had to drop back down to teflo==1.2.0 again in order for this to work.
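The values above are Ansible lookup expressions that Teflo's own Jinja pass tries (and fails) to compile. One blunt workaround, sketched below as an assumption rather than a supported Teflo feature, is to shield every `{{ ... }}` expression behind `{% raw %}` before that pass, so ansible-only syntax survives untouched. Note the trade-off: this also protects variables you did want Teflo to resolve, so a real fix would need to be selective.

```python
import re

# Non-greedy match of whole {{ ... }} expressions, across newlines.
JINJA_EXPR = re.compile(r"\{\{.*?\}\}", re.S)

def protect_ansible_expressions(text):
    """Wrap each {{ ... }} expression in {% raw %}...{% endraw %} so a
    subsequent Jinja render leaves it verbatim for Ansible to evaluate."""
    return JINJA_EXPR.sub(
        lambda m: "{% raw %}" + m.group(0) + "{% endraw %}", text)
```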

Ansible stderr often gets swallowed

When runs of ansible-playbook error, they often result in none of their own stderr being output. Instead, I typically get a Python stack trace from inside of teflo. This then requires me to run the playbook again, manually, in order to see the output error.

Please don't swallow stderr

All teflo resource objects should implement __eq__ and __hash__ for equality testing

Currently all equality testing is based on a resource's name attribute, which works but must be done up front by each developer. The framework can keep using name (or other parameters/attributes), but the comparison should be abstracted away so developers can simply write if asset_obj == other_obj, or check membership with if asset_obj in res_list, rather than needing to access a particular attribute in the list.

This comes into play when reloading resources from tasks that were run in parallel by blaster.
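Defining identity once on a shared base class would give every resource type `==`, `in`, and set/dict semantics for free. A sketch under the assumption (stated above) that name remains the identity attribute; `Resource` here is illustrative, not Teflo's actual class hierarchy:

```python
class Resource:
    """Base resource whose identity is its type plus name, so equality
    and hashing work uniformly for all subclasses."""

    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Same concrete type and same name means the same resource.
        return isinstance(other, type(self)) and self.name == other.name

    def __hash__(self):
        # Must be consistent with __eq__ so sets and dicts behave.
        return hash((type(self), self.name))
```

With this in place, deduplicating resources reloaded from parallel blaster tasks is just `set(resources)`.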

Enable use of templates in teflo.cfg

Currently I cannot template values into teflo.cfg. This makes it difficult for me to template the value of things like my OpenStack cloud name. Please allow template resolution of values pulled from the teflo.cfg file.
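Mechanically this amounts to rendering the raw teflo.cfg text through a template pass before handing it to the config parser. The sketch below uses stdlib `string.Template` `$`-placeholders purely to keep the example self-contained; the request above presumably wants Teflo's existing Jinja machinery instead.

```python
import configparser
import string

def load_templated_cfg(cfg_text, variables):
    """Render placeholder values in the raw config text, then parse it.
    safe_substitute leaves unknown placeholders intact instead of raising."""
    rendered = string.Template(cfg_text).safe_substitute(variables)
    parser = configparser.ConfigParser()
    parser.read_string(rendered)
    return parser
```

For example, an OpenStack cloud name could then come from the environment rather than being hardcoded in the file.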

Support additional reporting methods

In my POC of the podman provisioner, I utilized the Ansible Podman connection plugin (https://blog.tomecek.net/post/ansible-and-podman-can-play-together-now/), which allows Ansible to run against a container the same way it would against a standard machine via SSH. The problem comes into play with the use of the synchronize module, which handles copying and reporting results. After discussion with @shay6 @rujutashinde and @JunqiZhang0, the best approach seems to be to add an additional block for when: ansible_connection == "podman" (or something similar) and execute podman cp ... (see man podman cp) to fetch the files from the container.

Regardless of the provisioner plugin used, the new podman provisioner or terraform, this will ultimately be a problem with a container and copying files, as the artifacts are neither on localhost nor on a remote system (the exception here would be remote podman, which podman cp ... could handle). A 'hack' that we discussed, and that I have attempted without much success, was to mount the results directory as a volume onto the container, then use localhost to copy it like normal. The main issue with this is that, while it works, the results are reported incorrectly under localhost and not the provisioned machine.
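The conditional discussed above could be sketched as an Ansible task like the following (illustrative only; results_dir and artifact_paths are hypothetical variable names, not real teflo fields):

```jinja
# Hypothetical sketch: fetch artifacts with 'podman cp' when the host is
# a container reached via the podman connection plugin.
- name: fetch artifacts from podman container
  command: "podman cp {{ inventory_hostname }}:{{ item }} {{ results_dir }}/"
  loop: "{{ artifact_paths }}"
  delegate_to: localhost
  when: ansible_connection == "podman"
```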

Running `teflo validate` fails with 'skip-fail' KeyError

ISSUE: When running teflo validate -s $TEFLO_SDF (where $TEFLO_SDF is the scenario.yml file), the task returns an error.

EXPECTED RESULT: The specified scenario is validated and the Teflo step passes.

ACTUAL RESULT: The task fails with error:
... File "/home/dfrazzet/gitlab/mock-rhel/venv/lib/python3.7/site-packages/teflo/teflo.py", line 597, in exit_on_status and self._teflo_options['skip_fail'] is not True and state is 'FAILED': KeyError: 'skip_fail'

(Ref repo RedHatQE/teflo/teflo/teflo.py). Output attached: teflo_validate_error.txt

WORKAROUND: Running the command teflo run -t validate -s $TEFLO_SDF is successful.
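The crash comes from an unconditional dictionary lookup. A minimal sketch of a defensive pattern (not teflo's actual code) that would avoid the KeyError, and that also replaces the fragile string identity check (is 'FAILED') with ==:

```python
def exit_on_status(teflo_options, state):
    # Use .get() so a missing 'skip_fail' key defaults to False instead
    # of raising KeyError when the option was never set by the CLI path.
    skip_fail = teflo_options.get('skip_fail', False)
    # Compare strings with ==; 'is' relies on interning and is unreliable.
    if not skip_fail and state == 'FAILED':
        return 1  # signal failure to the caller
    return 0
```

This would let teflo validate run even though the validate subcommand never populates skip_fail.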

Fail to render vars-data file with KeyError: 'ansible_facts.distribution_version'

We have a regression with using --vars-data. To be honest, a regression got introduced somewhere, and we haven't been able to use anything past teflo 1.2.0. Here is the latest error using 1.2.4:

teflo run -t validate -s css_psi_customerzero/carbon_sdf/psi_stack.yml -d ./carbon_data -w . --vars-data C0/tests/dev/inventory/group_vars/decepticon.yml --vars-data C0/tests/dev/inventory/group_vars/dev.yml --log-level debug


Teflo Framework v1.2.4
Copyright (C) 2021, Red Hat, Inc.
--------------------------------------------------
Traceback (most recent call last):
  File "/home/dbaez/.virtualenvs/test_pipeline_install/bin/teflo", line 8, in <module>
    sys.exit(teflo())
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/teflo/cli.py", line 246, in run
    scenario_stream = validate_cli_scenario_option(ctx, scenario, cbn.config, vars_data)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/teflo/helpers.py", line 1667, in validate_cli_scenario_option
    scenario_stream = validate_render_scenario(scenario, config, vars_data)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/teflo/helpers.py", line 1503, in validate_render_scenario
    temp_data.update({item[0]: preprocyaml(item[1], temp_data)})
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/teflo/helpers.py", line 1450, in preprocyaml
    return preprocyaml_str(input, temp_data)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/teflo/helpers.py", line 1437, in preprocyaml_str
    return replace_brackets(input, temp_data)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/teflo/helpers.py", line 1427, in replace_brackets
    return replace_brackets(ret, temp_data)
  File "/home/dbaez/.virtualenvs/test_pipeline_install/lib/python3.7/site-packages/teflo/helpers.py", line 1419, in replace_brackets
    if not isinstance(temp_data[key], str):
KeyError: 'ansible_facts.distribution_version'

Below are some example variables in the variable file

baseos_repo: "http://download.eng.{{ repo_geo_code }}.redhat.com/released/RHEL-8/{{ ansible_facts.distribution_version }}.0/BaseOS/x86_64/os/"
appstream_repo: "http://download.eng.{{ repo_geo_code }}.redhat.com/released/RHEL-8/{{ ansible_facts.distribution_version }}.0/AppStream/x86_64/os/"
hostname_inject_hosts_ip_address: "{{ ansible_facts.default_ipv4.address }}"
hostname_aliases:
  - "{{ ansible_facts.hostname }}"

@JunqiZhang0 this is blocking us from using newer versions of teflo to leverage more of the advanced variable features. Any chance we can get a hotfix 1.2.5?
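One possible fix (a sketch, not teflo's actual implementation) is for the bracket-replacement helper to leave unresolved placeholders intact, so Ansible-only runtime expressions such as {{ ansible_facts.distribution_version }} survive rendering instead of raising KeyError:

```python
import re

def replace_brackets(text, temp_data):
    # Substitute {{ key }} placeholders found in temp_data; leave any
    # unknown placeholder (e.g. Ansible runtime facts) untouched so the
    # rendered vars file is still valid input for ansible-playbook.
    pattern = re.compile(r"\{\{\s*([\w.]+)\s*\}\}")

    def substitute(match):
        key = match.group(1)
        if key in temp_data:
            return str(temp_data[key])
        return match.group(0)  # keep the original placeholder verbatim

    return pattern.sub(substitute, text)
```

For the variables shown above, repo_geo_code would be resolved at render time while the ansible_facts references pass through unchanged.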

openstack provisioner: using a network with IPv4 and IPv6, teflo writes the IPv6 to inventory

When using an OSP network that provides IPv4 and IPv6 addresses, we get the IPv6 address assigned to the host in the inventory file, where we would prefer the IPv4.

IP Addresses

provider_net_cci_1
    10.0.10.10,  2620:52:0:84:f816:3eff:fe97:6541
❯ cat .teflo/.results/inventory/inventory-aki48qstst  -p
[clients:children]
el8_client
el9_client

[test_machines:children]
el8_client
el9_client
test_driver

[el9_client]
2620:52:0:84:f816:3eff:fe97:6541
...
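A possible selection helper (a sketch, not teflo's code) that prefers the IPv4 address when a provider network hands back both families:

```python
import ipaddress

def prefer_ipv4(addresses):
    # Return the first IPv4 address if one exists; otherwise fall back
    # to the first address of any family (or None for an empty list).
    for addr in addresses:
        if ipaddress.ip_address(addr).version == 4:
            return addr
    return addresses[0] if addresses else None
```

Applied to the provider_net_cci_1 example above, the helper would pick 10.0.10.10 for the inventory entry.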

custom resource_check does not honor the ansible_galaxy_options

With teflo 1.2.x and the following in my resource check below, the required collections do not actually get downloaded.

---
name: Init and Validate C0 collection

resource_check:
  playbook:
    - name: css_psi_customerzero/ansible/playbooks/validate_c0_collection.yml
      ansible_galaxy_options:
        role_file: css_psi_customrezero/requirements.yml

You can see the value gets loaded into the ansible_service https://github.com/RedHatQE/teflo/blob/develop/teflo/utils/resource_checker.py#L82

but it never gets downloaded; it just calls the run_playbook function
https://github.com/RedHatQE/teflo/blob/develop/teflo/utils/resource_checker.py#L96

Teflo should add a conditional check to see if the field is populated and call the download_roles function before calling run_playbook.
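The proposed change could look roughly like this (illustrative only; download_roles and run_playbook are the function names cited above, but the surrounding class is a hypothetical stand-in for the real resource checker):

```python
class ResourceChecker:
    """Sketch of the proposed fix, not teflo's actual class."""

    def __init__(self, ans_service):
        self.ans_service = ans_service

    def check_playbook(self, playbook):
        # If ansible_galaxy_options carries a role_file, install the
        # required roles/collections before running the playbook.
        galaxy_options = playbook.get('ansible_galaxy_options')
        if galaxy_options and galaxy_options.get('role_file'):
            self.ans_service.download_roles()
        return self.ans_service.run_playbook(playbook['name'])
```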

Is there a plan for a v2.2.10 or v2.3.0 release?

#264 was resolved on 2022-12-05 by 5b96891. When will this commit be available in a new teflo release? The last release was back in November 2022. Just wondering, because a CI pipeline my team currently maintains has been locking down the version of ansible used since this issue came about.

RFE: Silence notification messages logged to the console output when no notifications are enabled

It would be great if teflo could silence some of the messages logged to the console output about notifications. When a teflo user does not use the built-in notification plugins for certain actions, teflo produces a good amount of output by default at the info logging level. Would it be possible to reduce this output, or move it to the debug logging level and only log at the info level when a notification is actually being sent?

2021-12-06 00:51:13,839 INFO                                TEFLO RUN (START)                               
2021-12-06 00:51:13,840 INFO -------------------------------------------------------------------------------
2021-12-06 00:51:13,840 INFO  * Data Folder           : /home/jenkins/agent/workspace/PipelineSimulator/sol-mock-openshift4.8/.teflo/wxr157zmlg
2021-12-06 00:51:13,840 INFO  * Workspace             : .
2021-12-06 00:51:13,841 INFO  * Log Level             : info
2021-12-06 00:51:13,841 INFO  * Tasks                 : ['orchestrate']
2021-12-06 00:51:13,841 INFO  * Iterate Method        : by_level
2021-12-06 00:51:13,842 INFO  * Scenario              : resource_check
2021-12-06 00:51:13,842 INFO  * Scenario              : provision_localhost
2021-12-06 00:51:13,842 INFO  * Scenario              : orchestrate_stack
2021-12-06 00:51:13,843 INFO  * Scenario              : bare_metal_stack
2021-12-06 00:51:13,843 INFO -------------------------------------------------------------------------------

2021-12-06 00:51:13,843 INFO ..................................................
2021-12-06 00:51:13,844 INFO 'resource_check' is running from the scenario file: resource_check_results.yml
2021-12-06 00:51:13,844 INFO ..................................................
2021-12-06 00:51:13,844 INFO Sending out any notifications that are registered.
2021-12-06 00:51:13,845 INFO ..................................................
2021-12-06 00:51:13,846 INFO Starting tasks on pipeline: notify
2021-12-06 00:51:13,846 WARNING ... no tasks to be executed ...
2021-12-06 00:51:13,846 INFO  * Task    : orchestrate
2021-12-06 00:51:13,847 INFO ..................................................
2021-12-06 00:51:13,897 INFO Starting tasks on pipeline: orchestrate
2021-12-06 00:51:13,897 WARNING ... no tasks to be executed ...
2021-12-06 00:51:13,898 INFO ..................................................
2021-12-06 00:51:13,898 INFO Sending out any notifications that are registered.
2021-12-06 00:51:13,899 INFO ..................................................
2021-12-06 00:51:13,899 INFO Starting tasks on pipeline: notify
2021-12-06 00:51:13,900 WARNING ... no tasks to be executed ...
2021-12-06 00:51:13,900 INFO ..................................................
2021-12-06 00:51:13,900 INFO 'provision_localhost' is running from the scenario file: provision_localhost_results.yml
2021-12-06 00:51:13,901 INFO ..................................................
2021-12-06 00:51:13,901 INFO Sending out any notifications that are registered.
2021-12-06 00:51:13,902 INFO ..................................................
2021-12-06 00:51:13,902 INFO Starting tasks on pipeline: notify
2021-12-06 00:51:13,903 WARNING ... no tasks to be executed ...
2021-12-06 00:51:13,903 INFO  * Task    : orchestrate
2021-12-06 00:51:13,904 INFO ..................................................
2021-12-06 00:51:13,904 INFO Starting tasks on pipeline: orchestrate
2021-12-06 00:51:13,905 WARNING ... no tasks to be executed ...
2021-12-06 00:51:13,905 INFO ..................................................
2021-12-06 00:51:13,905 INFO Sending out any notifications that are registered.
2021-12-06 00:51:13,906 INFO ..................................................
2021-12-06 00:51:13,906 INFO Starting tasks on pipeline: notify
2021-12-06 00:51:13,907 WARNING ... no tasks to be executed ...
2021-12-06 00:51:13,907 INFO ..................................................
2021-12-06 00:51:13,908 INFO 'orchestrate_stack' is running from the scenario file: orchestrate_results.yml
2021-12-06 00:51:13,908 INFO ..................................................
2021-12-06 00:51:13,908 INFO Sending out any notifications that are registered.
2021-12-06 00:51:13,909 INFO ..................................................
2021-12-06 00:51:13,909 INFO Starting tasks on pipeline: notify
2021-12-06 00:51:13,910 WARNING ... no tasks to be executed ...
2021-12-06 00:51:13,910 INFO  * Task    : orchestrate
2021-12-06 00:51:13,911 INFO ..................................................
2021-12-06 00:51:13,911 INFO Starting tasks on pipeline: orchestrate
2021-12-06 00:51:13,912 INFO --> Blaster v0.5.0 <--
2021-12-06 00:51:13,912 INFO Task Execution: Sequential
2021-12-06 00:51:13,913 INFO Tasks:
2021-12-06 00:51:13,913 INFO 1. Task     : Install OpenShift
                                Class    : <class 'teflo.tasks.orchestrate.OrchestrateTask'>
                                Methods  : ['run']
2021-12-06 00:51:13,914 INFO ** BLASTER BEGIN **
2021-12-06 00:51:13,915 INFO    running orchestration Install OpenShift for ['scenario_driver']
2021-12-06 00:51:13,916 INFO Executing playbook:
2021-12-06 00:51:13,916 INFO Ansible options used: {}
2021-12-06 00:51:13,916 INFO Executing playbook : ansible/install_ocp.yml
2021-12-06 00:51:13,929 INFO 127.0.0.1
2021-12-06 00:51:14,802 INFO 
2021-12-06 00:51:14,803 INFO PLAY [Install OpenShift] *******************************************************
2021-12-06 00:51:14,810 INFO 
2021-12-06 00:51:14,810 INFO TASK [Gathering Facts] *********************************************************
2021-12-06 00:51:16,397 INFO ok: [127.0.0.1]
2021-12-06 00:51:16,411 INFO 
2021-12-06 00:51:16,411 INFO TASK [Deploy OpenShift cluster] ************************************************
2021-12-06 00:51:16,420 INFO ok: [127.0.0.1] => {
2021-12-06 00:51:16,420 INFO "msg": "Deploying OpenShift using build: None"
2021-12-06 00:51:16,420 INFO }
2021-12-06 00:51:16,427 INFO 
2021-12-06 00:51:16,427 INFO TASK [Write out dummy kubeconfig file] *****************************************
2021-12-06 00:51:17,627 INFO changed: [127.0.0.1]
2021-12-06 00:51:17,634 INFO 
2021-12-06 00:51:17,634 INFO TASK [Ensure archive directory exists] *****************************************
2021-12-06 00:51:18,240 INFO changed: [127.0.0.1]
2021-12-06 00:51:18,302 INFO 
2021-12-06 00:51:18,303 INFO TASK [Archive generated assets] ************************************************
2021-12-06 00:51:18,942 INFO changed: [127.0.0.1]
2021-12-06 00:51:19,011 INFO 
2021-12-06 00:51:19,011 INFO PLAY RECAP *********************************************************************
2021-12-06 00:51:19,011 INFO 127.0.0.1                  : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
2021-12-06 00:51:19,011 INFO 
2021-12-06 00:51:19,126 INFO Successfully completed playbook : ansible/install_ocp.yml
2021-12-06 00:51:19,229 INFO Orchestration passed : Successfully completed orchestrate action: Install OpenShift.
2021-12-06 00:51:19,229 INFO ** BLASTER COMPLETE **
2021-12-06 00:51:19,229 INFO     -> TOTAL DURATION: 0h:0m:5s
2021-12-06 00:51:19,230 INFO ..................................................
2021-12-06 00:51:19,230 INFO Sending out any notifications that are registered.
2021-12-06 00:51:19,231 INFO ..................................................
2021-12-06 00:51:19,232 INFO Starting tasks on pipeline: notify
2021-12-06 00:51:19,232 WARNING ... no tasks to be executed ...
2021-12-06 00:51:19,232 INFO ..................................................
2021-12-06 00:51:19,233 INFO 'bare_metal_stack' is running from the scenario file: results.yml
2021-12-06 00:51:19,233 INFO ..................................................
2021-12-06 00:51:19,234 INFO Sending out any notifications that are registered.
2021-12-06 00:51:19,234 INFO ..................................................
2021-12-06 00:51:19,235 INFO Starting tasks on pipeline: notify
2021-12-06 00:51:19,235 WARNING ... no tasks to be executed ...
2021-12-06 00:51:19,236 INFO  * Task    : orchestrate
2021-12-06 00:51:19,236 INFO ..................................................
2021-12-06 00:51:19,237 INFO Starting tasks on pipeline: orchestrate
2021-12-06 00:51:19,237 WARNING ... no tasks to be executed ...
2021-12-06 00:51:19,238 INFO ..................................................
2021-12-06 00:51:19,238 INFO Sending out any notifications that are registered.
2021-12-06 00:51:19,239 INFO ..................................................
2021-12-06 00:51:19,239 INFO Starting tasks on pipeline: notify
2021-12-06 00:51:19,239 WARNING ... no tasks to be executed ...
2021-12-06 00:51:19,303 INFO 

2021-12-06 00:51:19,304 INFO                                SCENARIO RUN (END)                              
2021-12-06 00:51:19,304 INFO -------------------------------------------------------------------------------
2021-12-06 00:51:19,304 INFO  * Duration                       : 0h:0m:5s
2021-12-06 00:51:19,305 INFO  * Passed Tasks                   : ['orchestrate']
2021-12-06 00:51:19,305 INFO  * Results Folder                 : /home/jenkins/agent/workspace/PipelineSimulator/sol-mock-openshift4.8/.teflo/.results
2021-12-06 00:51:19,305 INFO  * Iterate Method                 : by_level
2021-12-06 00:51:19,306 INFO  * Scenario Definition            : resource_check
2021-12-06 00:51:19,306 INFO  * Scenario Definition            : provision_localhost
2021-12-06 00:51:19,306 INFO  * Scenario Definition            : orchestrate_stack
2021-12-06 00:51:19,307 INFO  * Scenario Definition            : bare_metal_stack
2021-12-06 00:51:19,307 INFO  * Final Scenario Definition      : /home/jenkins/agent/workspace/PipelineSimulator/sol-mock-openshift4.8/.teflo/.results/results.yml
2021-12-06 00:51:19,307 INFO -------------------------------------------------------------------------------
2021-12-06 00:51:19,308 INFO TEFLO RUN (RESULT=PASSED)

Teflo best practice

We should have a Teflo best practices document. It should cover the workflow end to end: how to use results.yml, how to store Ansible playbooks, and how to do inclusion.

Scenario not retaining orchestrate action resource order if you re-run using results.yml and it starts from a failed orchestrate task

I have a scenario where the first action task passed but the second action task failed.

teflo_log:

2021-06-29 20:23:09,864 INFO Starting tasks on pipeline: notify
2021-06-29 20:23:09,864 WARNING ... no tasks to be executed ...
2021-06-29 20:23:09,865 INFO ..................................................
2021-06-29 20:23:09,865 INFO Starting tasks on pipeline: orchestrate
2021-06-29 20:23:09,866 INFO --> Blaster v0.4.0 <--
2021-06-29 20:23:09,866 INFO Task Execution: Sequential
2021-06-29 20:23:09,867 INFO Tasks:
2021-06-29 20:23:09,867 INFO 1. Task     : Install OpenShift on libvirt
                                Class    : <class 'teflo.tasks.orchestrate.OrchestrateTask'>
                                Methods  : ['run']
2021-06-29 20:23:09,868 INFO 2. Task     : Install Openshift Container Storage on libvirt
                                Class    : <class 'teflo.tasks.orchestrate.OrchestrateTask'>
                                Methods  : ['run']

.
.
2021-06-29 21:56:49,713 INFO Successfully completed playbook : ansible/install_ocp_libvirt.yml
2021-06-29 21:56:49,753 INFO Orchestration passed : Successfully completed orchestrate action: Install OpenShift on libvirt.
2021-06-29 21:56:49,755 INFO    running orchestration Install Openshift Container Storage on libvirt for ['baremetal']
2021-06-29 21:57:02,421 ERROR Orchestration failed : Playbook ansible/install_ocs_libvirt.yml failed to run

teflo_results.yml:

orchestrate:
- name: Install OpenShift on libvirt
  description: null
  orchestrator: ansible
  hosts:
  - baremetal
  ansible_playbook:
    name: ansible/install_ocp_libvirt.yml
  ansible_options:
    extra_vars:
      artifact_directory: /home/dbaez/.virtualenvs/gs-workspace/cnv_odf_ocp/.teflo/.results/artifacts/sharedArtifacts
  ansible_galaxy_options:
    role_file: requirements.yml
  labels: []
  status: 0
- name: Install Openshift Container Storage on libvirt
  description: null
  orchestrator: ansible
  hosts:
  - baremetal
  ansible_playbook:
    name: ansible/install_ocs_libvirt.yml
  ansible_galaxy_options:
    role_file: requirements.yml
  labels: []
  status: 1


Then if I try to re-run from the failed task with teflo -s .teflo/.results/results.yml -t orchestrate, it starts where it is supposed to:

2021-06-29 22:02:18,601 INFO Starting tasks on pipeline: orchestrate
2021-06-29 22:02:18,602 INFO --> Blaster v0.4.0 <--
2021-06-29 22:02:18,602 INFO Task Execution: Sequential
2021-06-29 22:02:18,603 INFO Tasks:
2021-06-29 22:02:18,603 INFO 1. Task     : Install Openshift Container Storage on libvirt
                                Class    : <class 'teflo.tasks.orchestrate.OrchestrateTask'>
                                Methods  : ['run']
2021-06-29 22:02:18,604 INFO ** BLASTER BEGIN **
2021-06-29 22:02:18,606 INFO    running orchestration Install Openshift Container Storage on libvirt for ['baremetal']
.
.
2021-06-29 22:02:31,143 ERROR Orchestration failed : Playbook ansible/install_ocs_libvirt.yml failed to run

Granted, it failed again, but when I look at the results.yml I see this task listed first rather than second:

orchestrate:
- name: Install Openshift Container Storage on libvirt
  description: null
  orchestrator: ansible
  hosts:
  - baremetal
  ansible_playbook:
    name: ansible/install_ocs_libvirt.yml
  ansible_galaxy_options:
    role_file: requirements.yml
  labels: []
  status: 1
- name: Install OpenShift on libvirt
  description: null
  orchestrator: ansible
  hosts:
  - baremetal
  ansible_playbook:
    name: ansible/install_ocp_libvirt.yml
  ansible_options:
    extra_vars:
      artifact_directory: /home/dbaez/.virtualenvs/gs-workspace/cnv_odf_ocp/.teflo/.results/artifacts/sharedArtifacts
  ansible_galaxy_options:
    role_file: requirements.yml
  labels: []
  status: 0

This becomes a problem because after I fixed whatever was breaking my playbook in the second orchestrate task, the original first task ran again and overwrote what I had done with the second task.

I suspect there is some type of regression in the reload_resources code in the Scenario resource.
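A sketch of an order-preserving reload (illustrative only; I have not confirmed this matches the reload_resources internals) would re-sort reloaded resources by their position in the original scenario definition:

```python
def reload_in_original_order(original_names, reloaded):
    # 'reloaded' holds resource dicts coming back from parallel/partial
    # runs in arbitrary order; re-sort them by their index in the
    # original scenario file so results.yml keeps the declared order.
    order = {name: index for index, name in enumerate(original_names)}
    return sorted(reloaded, key=lambda res: order.get(res['name'], len(order)))
```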

Passing variables conditionally between an execute block and a report block

At the moment there is no way to conditionally pass variables from an execute block to a report block.

An example use case:
Let's say I want to execute some tests and then report to a different test run in Polarion per environment.
Ideally, I would want to conditionally export an environment variable inside a shell and then use it in the corresponding report block.

To work around this, I use one common scenario file which handles orchestration, provisioning, and execution. Then I create a separate scenario file per environment which consists only of a report block and includes the common file.

RFE: Allow Ansible playbooks stored within a collection to be defined within a orchestrate action

Today teflo allows users to create an orchestrate action that has the following:

  • Orchestrate = Ansible
  • Ansible playbook = my_playbook.yml (where the playbook is stored within the repository where the teflo SDF file is)

With the addition of Ansible collections, you can now store playbooks inside a collection, making it very easy to call the playbook (using dot notation) once the collection is installed.

# From Ansible documentation
ansible-playbook my_namespace.my_collection.playbook1 -i ./myinventory

https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#using-collections-in-playbooks

This feature request is for teflo to allow the user to define playbooks in their orchestrate action where the playbook is stored in a collection (which teflo would install). Right now this is not possible, as validation fails because the playbook needs to be on the file system for teflo to properly find it.

---
name: orchestrate
description: Install OpenShift

orchestrate:
  - name: Install OpenShift
    orchestrator: ansible
    hosts: hypervisor
    ansible_playbook:
      name: ansible.collection.install_openshift <<<<
    ansible_options:
      extra_vars:
        disconnected_install: "{{ DISCONNECTED }}"

Without this, the workaround is to create a playbook within the repository where the SDF file is and then have that playbook call the playbook stored in the collection using the import_playbook module.

---
name: orchestrate
description: Install OpenShift

orchestrate:
  - name: Install OpenShift
    orchestrator: ansible
    hosts: hypervisor
    ansible_playbook:
      name: ansible/install_openshift.yml <<<<
    ansible_options:
      extra_vars:
        disconnected_install: "{{ DISCONNECTED }}"
---
- import_playbook: ansible_collection.install_openshift

Templating for injecting IP Address is not templating

I'm trying to pass the IP address into a playbook according to the data pass-through document, but it doesn't seem to be working as expected.

https://github.com/RedHatQE/teflo/blob/408886ef91925b18895b946c8a750a31db15c6ac/docs/users/data_pass_through.rst#data-pass-through

name: execute
description: "Execute task resources which will run the interoperability test suite.\n"
resource_check: []
provision: []
orchestrate: []
execute:
- name: execute pytests tests playbook
  description: execute pytest tests on the RHEL 8 clients
  executor: runner
  hosts:
  - el8_client
  artifacts:
  - /tmp/tests/test-report/*
  playbook:
  - name: tests/run_pytest_tests.yml
  artifact_locations:
    artifacts/el8-client-7j6qq/:
    - suite1_results.xml
    - suite2_results.xml
- name: execute restraint tests playbook
  description: execute restraint tests on the RHEL 8 clients
  executor: runner
  hosts:
  - test_driver
  artifacts:
  - /tmp/tests/*.log
  - /tmp/tests/test_sample*
  - /tmp/tests/results.xml
  ansible_options:
    extra_vars:
      ipv4: '{ el8_client.ip_address }'
  playbook:
  - name: tests/run_restraint_tests.yml
  artifact_locations:
    artifacts/test-driver-z2mly/:
    - restraint-el8.log
    - test_sample.xml
report: []

The specific error (you can see that { el8_client.ip_address } is actually getting injected literally, untemplated):

2021-04-21 13:58:06,709 INFO PLAY [Run restraint tests] *****************************************************
2021-04-21 13:58:08,088 INFO 
2021-04-21 13:58:08,089 INFO TASK [Gathering Facts] *********************************************************
2021-04-21 13:58:08,089 INFO ok: [10.0.106.58]
2021-04-21 13:58:08,674 INFO 
2021-04-21 13:58:08,675 INFO TASK [execute restraint tests] *************************************************
2021-04-21 13:58:08,675 INFO fatal: [10.0.106.58]: FAILED! => {"changed": true, "cmd": "restraint -vv --host 1={ el8_client.ip_address } --job test_sample.xml > restraint-el8.log\n", "delta": "0:00:00.005702", "end": "2021-04-21 13:58:08.351876", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2021-04-21 13:58:08.346174", "stderr": "Malformed host: {, see help for reference\nFailed to add recipe host. [restraint-error-quark, 15]", "stderr_lines": ["Malformed host: {, see help for reference", "Failed to add recipe host. [restraint-error-quark, 15]"], "stdout": "", "stdout_lines": []}

And here is the run_restraint_tests.yml playbook:

---
- name: Run restraint tests
  hosts: "{{ hosts }}"
  become: true
  tasks:
    - name: execute restraint tests
      shell: >
        restraint -vv --host 1={{ ipv4 }}
        --job test_sample.xml > restraint-el8.log
      register: output
      failed_when: output.rc != 0 and output.rc != 10
      args:
        chdir: /tmp/tests

    - name: xsltproc on results
      shell: >
        xsltproc /usr/share/restraint/client/job2junit.xml
        test_sample.01/job.xml > results.xml
      args:
        chdir: /tmp/tests

Test result summary does not take into account error test case elements

Teflo does not take into account test cases that have been marked as error when gathering the test result summary. Test cases marked as error should be counted as part of the failure total. Right now, teflo will state the overall result was passed when there actually were errors within the XML files.

XML

<testsuites><testsuite name="pytest" errors="1" failures="0" skipped="1" tests="27" time="2944.001" timestamp="2021-10-08T18:06:33.335546">

Teflo output

2021-10-08 22:55:51,163 INFO -------------------------------------------------------------------------------
2021-10-08 22:55:51,164 INFO                             TESTRUN RESULTS SUMMARY                            
2021-10-08 22:55:51,164 INFO -------------------------------------------------------------------------------
2021-10-08 22:55:51,165 INFO                              * AGGREGATE RESULTS *                             
2021-10-08 22:55:51,166 INFO -------------------------------------------------------------------------------
2021-10-08 22:55:51,166 INFO  * Total Tests             : 27
2021-10-08 22:55:51,167 INFO  * Failed Tests            : 0
2021-10-08 22:55:51,168 INFO  * Skipped Tests           : 1
2021-10-08 22:55:51,169 INFO  * Passed Tests            : 26
2021-10-08 22:55:51,170 INFO -------------------------------------------------------------------------------

Jenkins xUnit Plugin

[Pipeline] // ansiColor
[Pipeline] xunit
INFO: Processing JUnit
INFO: [JUnit] - 1 test report file(s) were found with the pattern '.teflo/.results/**/*.xml' relative to '/home/jenkins/agent/workspace/gs-openshift4.8-cnv4.8-odf4.8-interop' for the testing framework 'JUnit'.
INFO: Check 'Failed Tests' threshold.
INFO: The total number of tests for the threshold 'Failed Tests' exceeds the specified "failure threshold" value.
[Pipeline] ansiColor

xsd schema used by the plugin ~ https://github.com/jenkinsci/xunit-plugin/blob/master/src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd

It would be great if the code could be updated to also include error elements as part of the failure count, to make it truly reflect the actual results.

def create_individual_testrun_results(artifact_locations, config):
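A sketch of the aggregation fix (illustrative only, not the actual create_individual_testrun_results implementation) that folds the errors attribute into the failure count when parsing JUnit XML:

```python
import xml.etree.ElementTree as ET

def summarize_junit(xml_text):
    # Count 'errors' together with 'failures' so an errored test case
    # cannot slip through as a pass in the aggregate summary.
    root = ET.fromstring(xml_text)
    suites = [root] if root.tag == 'testsuite' else root.iter('testsuite')
    total = failed = skipped = 0
    for suite in suites:
        total += int(suite.get('tests', 0))
        failed += int(suite.get('failures', 0)) + int(suite.get('errors', 0))
        skipped += int(suite.get('skipped', 0))
    return {'total': total, 'failed': failed, 'skipped': skipped,
            'passed': total - failed - skipped}
```

For the XML above, this would report 1 failed and 25 passed out of 27, instead of 0 failed and 26 passed.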

[DEPRECATION WARNING]: DISPLAY_SKIPPED_HOSTS option is deprecated

Seeing warnings from teflo settings

[DEPRECATION WARNING]: DISPLAY_SKIPPED_HOSTS option, environment variables 
without ``ANSIBLE_`` prefix are deprecated, use the 
``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable instead. This feature 
will be removed from ansible-core in version 2.12. Deprecation warnings can be 
disabled by setting deprecation_warnings=False in ansible.cfg.

https://github.com/RedHatQE/teflo/blame/60554294fcb641ed7216b1ae99b6ab3fa8f58abc/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py#L232

Unable to pass multiple files to the extra_vars module

When multiple files are passed as a list under the file key of extra_vars within ansible_options, the task fails. The file key only works for a single file.

For example, the following code snippet doesn't work:

orchestrate:
  - name: ansible/sys_setup.yml
    description: Clones layered product testsuite
    orchestrator: ansible
    hosts: "interconnect{{ INSTANCE_TAG }}"
    connection: local
    ansible_options:
      extra_vars:
        file:
          - ansible/vars/amq-interconnect_v1.10.yml
          - ansible/amq-interconnect_creds.yml

At present, extra_vars is looped over and file is a unique key, so it can only be set once:
https://github.com/RedHatQE/teflo/blob/master/teflo/ansible_helpers.py#L178
There needs to be a provision to pass a list to the file key.
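A sketch of how the helper could accept either a single file or a list (illustrative only; the real code lives in ansible_helpers.py), mapping each entry to an Ansible -e @file argument:

```python
def build_extra_vars_args(extra_vars):
    # Translate an extra_vars mapping into ansible-playbook CLI
    # arguments, allowing the 'file' key to hold a string or a list.
    args = []
    for key, value in extra_vars.items():
        if key == 'file':
            files = value if isinstance(value, list) else [value]
            for path in files:
                args += ['-e', '@%s' % path]
        else:
            args += ['-e', '%s=%s' % (key, value)]
    return args
```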

Issue when rendering SDF file with jinja conditionals

I found a bug where teflo validation can't run when I have Jinja conditionals mixed within my include statements. These conditionals are there to only include those SDF files when the required conditions are met.

I put together this simple Python script that pretty much takes the teflo function for Jinja templating and allows me to quickly run it to demonstrate this issue.

Script

import jinja2
import os
import yaml

def template_render(filepath, env_dict):
    """
    A function to do jinja templating given a file and a dictionary of key/vars
    :param filepath: path to a file
    :param env_dict: dictionary of key/values used for data substitution
    :return: stream of data with the templating complete
    :rtype: data stream
    """
    path, filename = os.path.split(filepath)
    return jinja2.Environment(loader=jinja2.FileSystemLoader(
        path), lstrip_blocks=True, trim_blocks=True).get_template(filename).render(env_dict)

data = template_render("libvirt.yml", {"SOME_ACTION": True})
print(data)
print("\n")
data_yaml = yaml.safe_load(data)
print(data_yaml)

Issue

I have the following SDF file:

---
name: libvirt_stack
description: >
  Teflo scenario descriptor file declaring the components required
  to deploy a development stack for development purposes. The
  development stack is a bare metal machine, with libvirt vms deployed
  on top and then OpenShift, CNV, ODF installed and configured.

include:
  - teflo/stack/provision_localhost.yml
  - teflo/stack/provision_libvirt.yml
{% if SOME_ACTION is defined and SOME_ACTION %}  - teflo/stack/orchestrate-123.yml{% endif %}
  - teflo/stack/orchestrate.yml

When jinja templating is performed and SOME_ACTION is True, the result is the following:

---
name: libvirt_stack
description: >
  Teflo scenario descriptor file declaring the components required
  to deploy a development stack for development purposes. The
  development stack is a bare metal machine, with libvirt vms deployed
  on top and then OpenShift, CNV, ODF installed and configured.

include:
  - teflo/stack/provision_localhost.yml
  - teflo/stack/provision_libvirt.yml
  - teflo/stack/orchestrate-123.yml  - teflo/stack/orchestrate.yml


{'name': 'libvirt_stack', 'description': 'Teflo scenario descriptor file declaring the components required to deploy a development stack for development purposes. The development stack is a bare metal machine, with libvirt vms deployed on top and then OpenShift, CNV, ODF installed and configured.\n', 'include': ['teflo/stack/provision_localhost.yml', 'teflo/stack/provision_libvirt.yml', 'teflo/stack/orchestrate-123.yml  - teflo/stack/orchestrate.yml']}

You can see that both SDF files end up on the same line (in the same string), and when run from teflo it fails right away, not even allowing validation.

2021-09-24 17:50:55,576 WARNING Scenario workspace was not set, therefore the workspace is automatically assigned to the current working directory. You may experience problems if files needed by teflo do not exists in the scenario workspace.
Included File is invalid or Include section is empty.You have to provide valid scenario files to be included.

The workaround for now is to add a new line between included SDF files in order for it to work with the jinja rendering.
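Since the script above builds the jinja2 environment with trim_blocks=True and lstrip_blocks=True, placing each conditional tag on its own line should also render clean YAML, because both tag lines are consumed entirely during rendering. A sketch of the rewritten include section:

```jinja
include:
  - teflo/stack/provision_localhost.yml
  - teflo/stack/provision_libvirt.yml
{% if SOME_ACTION is defined and SOME_ACTION %}
  - teflo/stack/orchestrate-123.yml
{% endif %}
  - teflo/stack/orchestrate.yml
```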

invalid inventory generated when groups contains the machine name

When the groups section of a provision resource contains a group whose name exactly matches the resource's name value, the generated inventory is invalid and parsing will fail.

---
name: provision_bug
description: ...

provision:
  - name: laptop
    groups: laptop
    ip_address: 127.0.0.1
    ansible_params:
      ansible_connection: local
execute:
  - name: Test command
    description: "Test running command"
    executor: runner
    hosts: localhost
    shell:
      - command: "/usr/bin/true"

Inventory:

[laptop:children]
laptop

[laptop]
127.0.0.1

[laptop:vars]
ansible_connection = local

Output:

❯ teflo run -t provision -t execute -s scenario.yml

--------------------------------------------------
Teflo Framework v1.1.0
Copyright (C) 2020, Red Hat, Inc.
--------------------------------------------------
2021-05-10 08:23:14,109 WARNING Scenario workspace was not set, therefore the workspace is automatically assigned to the current working directory. You may experience problems if files needed by teflo do not exists in the scenario workspace.
2021-05-10 08:23:14,189 INFO

2021-05-10 08:23:14,189 INFO                                TEFLO RUN (START)
2021-05-10 08:23:14,190 INFO -------------------------------------------------------------------------------
2021-05-10 08:23:14,190 INFO  * Data Folder           : .teflo/ud41ik0pfi
2021-05-10 08:23:14,190 INFO  * Workspace             : /home/bpratt/pit/teflo-inventory-gen-reproduction
2021-05-10 08:23:14,191 INFO  * Log Level             : info
2021-05-10 08:23:14,191 INFO  * Tasks                 : ['provision', 'execute']
2021-05-10 08:23:14,191 INFO  * Scenario              : provision_bug
2021-05-10 08:23:14,192 INFO -------------------------------------------------------------------------------

2021-05-10 08:23:14,192 INFO  * Task    : provision
2021-05-10 08:23:14,192 INFO Sending out any notifications that are registered.
2021-05-10 08:23:14,193 INFO ..................................................
2021-05-10 08:23:14,194 INFO Starting tasks on pipeline: notify
2021-05-10 08:23:14,194 WARNING ... no tasks to be executed ...
2021-05-10 08:23:14,195 INFO ..................................................
2021-05-10 08:23:14,195 INFO Starting tasks on pipeline: provision
2021-05-10 08:23:14,196 INFO --> Blaster v0.4.0 <--
2021-05-10 08:23:14,196 INFO Task Execution: Concurrent
2021-05-10 08:23:14,198 INFO Tasks:
2021-05-10 08:23:14,199 INFO 1. Task     : laptop
                                Class    : <class 'teflo.tasks.provision.ProvisionTask'>
                                Methods  : ['run']
2021-05-10 08:23:14,199 INFO ** BLASTER BEGIN **
2021-05-10 08:23:14,208 WARNING Asset laptop is static, provision will be skipped.
2021-05-10 08:23:14,211 INFO ** BLASTER COMPLETE **
2021-05-10 08:23:14,211 INFO     -> TOTAL DURATION: 0h:0m:0s
2021-05-10 08:23:14,212 INFO Populating master inventory file with host(s) laptop
2021-05-10 08:23:14,214 INFO ..................................................
2021-05-10 08:23:14,215 INFO  * Task    : execute
2021-05-10 08:23:14,215 INFO Sending out any notifications that are registered.
2021-05-10 08:23:14,216 INFO ..................................................
2021-05-10 08:23:14,216 INFO Starting tasks on pipeline: notify
2021-05-10 08:23:14,217 WARNING ... no tasks to be executed ...
2021-05-10 08:23:14,217 INFO ..................................................
2021-05-10 08:23:14,218 INFO Starting tasks on pipeline: execute
2021-05-10 08:23:14,218 INFO --> Blaster v0.4.0 <--
2021-05-10 08:23:14,219 INFO Task Execution: Sequential
2021-05-10 08:23:14,219 INFO Tasks:
2021-05-10 08:23:14,219 INFO 1. Task     : Test command
                                Class    : <class 'teflo.tasks.execute.ExecuteTask'>
                                Methods  : ['run']
2021-05-10 08:23:14,220 INFO ** BLASTER BEGIN **
2021-05-10 08:23:14,221 INFO    executing Test command
2021-05-10 08:23:14,224 INFO Executing shell commands:
2021-05-10 08:23:14,224 INFO Executing shell command /usr/bin/true
2021-05-10 08:23:14,224 INFO Ansible options used: {}
2021-05-10 08:23:14,235 INFO Executing playbook : cbn_execute_shell_25st0.yml
[WARNING]:  * Failed to parse /home/bpratt/pit/teflo-inventory-gen-reproduction/.teflo/.results/inventory/master-ud41ik0pfi with yaml plugin:
We were unable to read either as JSON nor YAML, these are the errors we got from each: JSON: Expecting value: line 1 column 2 (char 1)
Syntax Error while loading YAML.   did not find expected <document start>  The error appears to be in '/home/bpratt/pit/teflo-inventory-gen-
reproduction/.teflo/.results/inventory/master-ud41ik0pfi': line 2, column 1, but may be elsewhere in the file depending on the exact syntax
problem.  The offending line appears to be:  [laptop:children] laptop ^ here
[WARNING]:  * Failed to parse /home/bpratt/pit/teflo-inventory-gen-reproduction/.teflo/.results/inventory/master-ud41ik0pfi with ini plugin:
can't add group to itself
[WARNING]: Unable to parse /home/bpratt/pit/teflo-inventory-gen-reproduction/.teflo/.results/inventory/master-ud41ik0pfi as an inventory
source
[WARNING]: Unable to parse /home/bpratt/pit/teflo-inventory-gen-reproduction/.teflo/.results/inventory as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
2021-05-10 08:23:14,559 INFO ansible-playbook 2.10.9
2021-05-10 08:23:14,560 INFO config file = None
2021-05-10 08:23:14,560 INFO configured module search path = ['/home/bpratt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
2021-05-10 08:23:14,560 INFO ansible python module location = /home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/ansible
2021-05-10 08:23:14,560 INFO executable location = /home/bpratt/pit/teflo-inventory-gen-reproduction/venv/bin/ansible-playbook
2021-05-10 08:23:14,560 INFO python version = 3.9.4 (default, Apr  6 2021, 00:00:00) [GCC 11.0.1 20210324 (Red Hat 11.0.1-0)]
2021-05-10 08:23:14,560 INFO No config file found; using defaults
2021-05-10 08:23:14,637 INFO Skipping callback 'default', as we already have a stdout callback.
2021-05-10 08:23:14,638 INFO Skipping callback 'minimal', as we already have a stdout callback.
2021-05-10 08:23:14,638 INFO Skipping callback 'oneline', as we already have a stdout callback.
2021-05-10 08:23:14,638 INFO
2021-05-10 08:23:14,638 INFO PLAYBOOK: cbn_execute_shell_25st0.yml ******************************************
2021-05-10 08:23:14,638 INFO 1 plays in cbn_execute_shell_25st0.yml
2021-05-10 08:23:14,650 INFO
2021-05-10 08:23:14,650 INFO PLAY [run shell and fetch results] *********************************************
2021-05-10 08:23:14,656 INFO
2021-05-10 08:23:14,656 INFO TASK [Gathering Facts] *********************************************************
2021-05-10 08:23:14,656 INFO task path: /home/bpratt/pit/teflo-inventory-gen-reproduction/cbn_execute_shell_25st0.yml:1
2021-05-10 08:23:15,457 INFO ok: [localhost]
2021-05-10 08:23:15,465 INFO META: ran handlers
2021-05-10 08:23:15,471 INFO
2021-05-10 08:23:15,471 INFO TASK [shell command] ***********************************************************
2021-05-10 08:23:15,471 INFO task path: /home/bpratt/pit/teflo-inventory-gen-reproduction/cbn_execute_shell_25st0.yml:5
2021-05-10 08:23:15,734 INFO changed: [localhost] => {"changed": true, "cmd": "/usr/bin/true", "delta": "0:00:00.002992", "end": "2021-05-10 08:23:15.716966", "rc": 0, "start": "2021-05-10 08:23:15.713974", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
2021-05-10 08:23:15,740 INFO
2021-05-10 08:23:15,740 INFO TASK [to get correct (stderr/stdout/msg) output msg] ***************************
2021-05-10 08:23:15,740 INFO task path: /home/bpratt/pit/teflo-inventory-gen-reproduction/cbn_execute_shell_25st0.yml:11
2021-05-10 08:23:15,752 INFO ok: [localhost] => {"ansible_facts": {"sh_out": ""}, "changed": false}
2021-05-10 08:23:15,758 INFO
2021-05-10 08:23:15,758 INFO TASK [setting json str] ********************************************************
2021-05-10 08:23:15,758 INFO task path: /home/bpratt/pit/teflo-inventory-gen-reproduction/cbn_execute_shell_25st0.yml:15
2021-05-10 08:23:15,772 INFO ok: [localhost] => {"ansible_facts": {"json_str": {"err": "", "host_name": "workstation", "rc": "0"}}, "changed": false}
2021-05-10 08:23:15,785 INFO
2021-05-10 08:23:15,785 INFO TASK [copy to shell results to a json file] ************************************
2021-05-10 08:23:15,785 INFO task path: /home/bpratt/pit/teflo-inventory-gen-reproduction/cbn_execute_shell_25st0.yml:20
2021-05-10 08:23:16,256 INFO fatal: [localhost]: FAILED! => {"changed": false, "checksum": "5334ada093057555f4a8602c96d94dff571ec3a7", "msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"}
2021-05-10 08:23:16,256 INFO
2021-05-10 08:23:16,256 INFO NO MORE HOSTS LEFT *************************************************************
2021-05-10 08:23:16,257 INFO
2021-05-10 08:23:16,257 INFO PLAY RECAP *********************************************************************
2021-05-10 08:23:16,257 INFO localhost                  : ok=4    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
2021-05-10 08:23:16,257 INFO
2021-05-10 08:23:16,303 ERROR [Errno 2] No such file or directory: 'shell-results-25st0.json'
2021-05-10 08:23:16,304 ERROR Failed to find the shell-results.json file which means there was an uncaught failure running the dynamic playbook. Please enable verbose Ansible logging in the teflo.cfg file and try again.
2021-05-10 08:23:16,304 INFO Test Execution has failed but still fetching any test generated artifacts
2021-05-10 08:23:16,305 INFO Fetching test artifacts @ .teflo/.results/artifacts
2021-05-10 08:23:16,305 INFO Ansible options used: {}
2021-05-10 08:23:16,346 INFO Executing playbook : cbn_execute_synchronize_25st0.yml
2021-05-10 08:23:16,664 INFO ansible-playbook 2.10.9
2021-05-10 08:23:16,665 INFO config file = None
2021-05-10 08:23:16,665 INFO configured module search path = ['/home/bpratt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
2021-05-10 08:23:16,665 INFO ansible python module location = /home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/ansible
2021-05-10 08:23:16,665 INFO executable location = /home/bpratt/pit/teflo-inventory-gen-reproduction/venv/bin/ansible-playbook
2021-05-10 08:23:16,665 INFO python version = 3.9.4 (default, Apr  6 2021, 00:00:00) [GCC 11.0.1 20210324 (Red Hat 11.0.1-0)]
2021-05-10 08:23:16,665 INFO No config file found; using defaults
2021-05-10 08:23:16,740 INFO redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
2021-05-10 08:23:16,746 INFO Skipping callback 'default', as we already have a stdout callback.
2021-05-10 08:23:16,746 INFO Skipping callback 'minimal', as we already have a stdout callback.
2021-05-10 08:23:16,746 INFO Skipping callback 'oneline', as we already have a stdout callback.
2021-05-10 08:23:16,746 INFO
2021-05-10 08:23:16,746 INFO PLAYBOOK: cbn_execute_synchronize_25st0.yml ************************************
2021-05-10 08:23:16,747 INFO 1 plays in cbn_execute_synchronize_25st0.yml
2021-05-10 08:23:16,759 INFO
2021-05-10 08:23:16,759 INFO PLAY [fetch artifacts] *********************************************************
2021-05-10 08:23:17,438 INFO
2021-05-10 08:23:17,438 INFO TASK [Gathering Facts] *********************************************************
2021-05-10 08:23:17,438 INFO task path: /home/bpratt/pit/teflo-inventory-gen-reproduction/cbn_execute_synchronize_25st0.yml:1
2021-05-10 08:23:17,438 INFO ok: [localhost]
2021-05-10 08:23:17,445 INFO META: ran handlers
2021-05-10 08:23:17,482 INFO
2021-05-10 08:23:17,482 INFO TASK [setup artifacts_found list] **********************************************
2021-05-10 08:23:17,482 INFO task path: /home/bpratt/pit/teflo-inventory-gen-reproduction/cbn_execute_synchronize_25st0.yml:16
2021-05-10 08:23:17,482 INFO ok: [localhost] => {"ansible_facts": {"artifacts_found": []}, "changed": false}
2021-05-10 08:23:17,517 INFO
2021-05-10 08:23:17,517 INFO TASK [debug] *******************************************************************
2021-05-10 08:23:17,518 INFO task path: /home/bpratt/pit/teflo-inventory-gen-reproduction/cbn_execute_synchronize_25st0.yml:26
2021-05-10 08:23:17,518 INFO ok: [localhost] => {
2021-05-10 08:23:17,518 INFO "msg": "0"
2021-05-10 08:23:17,518 INFO }
2021-05-10 08:23:17,631 INFO redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
2021-05-10 08:23:17,688 INFO META: ran handlers
2021-05-10 08:23:17,694 INFO META: ran handlers
2021-05-10 08:23:17,695 INFO
2021-05-10 08:23:17,695 INFO PLAY RECAP *********************************************************************
2021-05-10 08:23:17,695 INFO localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0
2021-05-10 08:23:17,695 INFO
2021-05-10 08:23:17,747 ERROR [Errno 2] No such file or directory: 'sync-results-25st0.txt'
2021-05-10 08:23:17,780 ERROR Failed to execute Test command
2021-05-10 08:23:17,781 ERROR Failed to find the sync-results.txt file which means there was an uncaught failure running the synchronization playbook. Please enable verbose Ansible logging in the teflo.cfg file and try again.
2021-05-10 08:23:17,783 ERROR A exception was raised while processing task: Test command method: run
Traceback (most recent call last):
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/ansible_helpers.py", line 677, in run_shell_playbook
    with open('shell-results-' + self.uid + '.json') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'shell-results-25st0.json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 338, in run
    getattr(self, '__%s__' % attr)()
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 100, in __shell__
    result = self.ans_service.run_shell_playbook(shell)
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/ansible_helpers.py", line 686, in run_shell_playbook
    raise AnsibleServiceError('Failed to find the shell-results.json file '
teflo.exceptions.AnsibleServiceError: Failed to find the shell-results.json file which means there was an uncaught failure running the dynamic playbook. Please enable verbose Ansible logging in the teflo.cfg file and try again.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 225, in __artifacts__
    with open('sync-results-' + self.ans_service.uid + '.txt') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'sync-results-25st0.txt'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/blaster/blast.py", line 83, in run
    value = getattr(task_obj, method)()
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/tasks/execute.py", line 57, in run
    self.executor.run()
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/executors/execute_manager.py", line 72, in run
    res = self.plugin.run()
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 346, in run
    self.__artifacts__()
  File "/home/bpratt/pit/teflo-inventory-gen-reproduction/venv/lib64/python3.9/site-packages/teflo/executors/ext/ansible_executor_plugin/ansible_executor_plugin.py", line 229, in __artifacts__
    raise TefloExecuteError('Failed to find the sync-results.txt file '
teflo.exceptions.TefloExecuteError: Failed to find the sync-results.txt file which means there was an uncaught failure running the synchronization playbook. Please enable verbose Ansible logging in the teflo.cfg file and try again.
2021-05-10 08:23:18,787 INFO ** BLASTER COMPLETE **
2021-05-10 08:23:18,789 INFO     -> TOTAL DURATION: 0h:0m:4s
2021-05-10 08:23:18,791 ERROR One or more tasks got a status of non zero.
2021-05-10 08:23:18,794 INFO Sending out any notifications that are registered.
2021-05-10 08:23:18,799 INFO ..................................................
2021-05-10 08:23:18,801 INFO Starting tasks on pipeline: notify
2021-05-10 08:23:18,803 WARNING ... no tasks to be executed ...
2021-05-10 08:23:18,833 INFO

2021-05-10 08:23:18,834 INFO                                SCENARIO RUN (END)
2021-05-10 08:23:18,834 INFO -------------------------------------------------------------------------------
2021-05-10 08:23:18,835 INFO  * Duration                       : 0h:0m:4s
2021-05-10 08:23:18,835 INFO  * Passed Tasks                   : ['provision']
2021-05-10 08:23:18,835 INFO  * Failed Tasks                   : ['execute']
2021-05-10 08:23:18,836 INFO  * Results Folder                 : .teflo/.results
2021-05-10 08:23:18,836 INFO  * Included Scenario Definition   : []
2021-05-10 08:23:18,837 INFO  * Final Scenario Definition      : .teflo/.results/results.yml
2021-05-10 08:23:18,837 INFO -------------------------------------------------------------------------------
2021-05-10 08:23:18,838 INFO TEFLO RUN (RESULT=FAILED)
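One way the inventory generator could avoid this is to skip self-referencing children entries. This is a hypothetical sketch, not teflo's actual inventory code; the function name and the groups mapping shape are assumptions:

```python
def render_children_sections(groups):
    """Render the [group:children] sections of an INI inventory.

    Hypothetical sketch of a fix: when an asset name is identical to
    one of its group names, emitting '[laptop:children]' with 'laptop'
    under it makes ansible refuse the file ("can't add group to
    itself"), so self-referencing entries are skipped.
    """
    lines = []
    for group, children in groups.items():
        kids = [child for child in children if child != group]
        if not kids:
            continue  # nothing left to emit for this group
        lines.append("[%s:children]" % group)
        lines.extend(kids)
        lines.append("")  # blank line between sections
    return "\n".join(lines)
```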

Ansible orchestrate fails with ansible 7.0.0

Bug

Teflo uses the ConfigManager class directly from ansible. With the recent ansible 7.0.0 release bringing ansible-core 2.14.0, objects instantiated from this class no longer have the data attribute.

Version 2.14.0 ~ https://github.com/ansible/ansible/blob/devel/lib/ansible/config/manager.py

Version 2.13.6 (ansible < 7.0.0) ~ https://github.com/ansible/ansible/blob/b14bbf1c11358da8c1f3fad00f095333102ede1c/lib/ansible/config/manager.py#L290

2022-11-23 13:27:33,560 ERROR 'ConfigManager' object has no attribute 'data'
2022-11-23 13:27:33,561 ERROR A exception was raised while processing task: Install OpenShift method: run
Traceback (most recent call last):
  File "/home/jenkins/agent/workspace/PipelineSimulator/sol-mock-1.0-openshift4.12-stage/envs/solution/lib64/python3.9/site-packages/blaster/blast.py", line 83, in run
    value = getattr(task_obj, method)()
  File "/home/jenkins/agent/workspace/PipelineSimulator/sol-mock-1.0-openshift4.12-stage/envs/solution/lib64/python3.9/site-packages/teflo/tasks/orchestrate.py", line 59, in run
    self.orchestrator.run()
  File "/home/jenkins/agent/workspace/PipelineSimulator/sol-mock-1.0-openshift4.12-stage/envs/solution/lib64/python3.9/site-packages/teflo/orchestrators/action_orchestrator.py", line 61, in run
    res = self.plugin.run()
  File "/home/jenkins/agent/workspace/PipelineSimulator/sol-mock-1.0-openshift4.12-stage/envs/solution/lib64/python3.9/site-packages/teflo/orchestrators/ext/ansible_orchestrator_plugin/ansible_orchestrator_plugin.py", line 216, in run
    self.ans_service.alog_update(folder_name='ansible_orchestrator')
  File "/home/jenkins/agent/workspace/PipelineSimulator/sol-mock-1.0-openshift4.12-stage/envs/solution/lib64/python3.9/site-packages/teflo/ansible_helpers.py", line 576, in alog_update
    ans_logfile = self.ans_log_path if self.ans_log_path else self.get_default_config(key="DEFAULT_LOG_PATH")
  File "/home/jenkins/agent/workspace/PipelineSimulator/sol-mock-1.0-openshift4.12-stage/envs/solution/lib64/python3.9/site-packages/teflo/ansible_helpers.py", line 532, in get_default_config
    a_settings = acm.data.get_settings()

Should be fixed together with #263
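A version-tolerant lookup could bridge both ansible-core releases in the meantime. This is a hypothetical sketch: the helper name is made up, and get_config_value is assumed to be the public accessor on newer ConfigManager objects:

```python
def get_default_config(acm, key):
    """Fetch an ansible default setting across ansible-core versions.

    Hypothetical sketch: 'acm' is an ansible ConfigManager instance.
    ansible-core < 2.14 exposed parsed settings via acm.data, while
    newer releases dropped that attribute but still resolve settings
    through get_config_value().
    """
    if hasattr(acm, "data"):  # ansible-core < 2.14
        settings = {s.name: s.value for s in acm.data.get_settings()}
        return settings[key]
    return acm.get_config_value(key)  # ansible-core >= 2.14
```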

Do not take into account provision resources that do not match the supplied teflo label

When using teflo labels, teflo should only take into account resources matching that label when generating the hosts key for any orchestrate/execute task resources. The current behavior includes all provision resources that are a part of the defined group, even if they do not have the matching label.

Example:

Provision SDF

---
name: provision
description: Provision

provision:
  - name: scenario_driver
    groups:
      - localhost
      - scenario_drivers
    ip_address: 127.0.0.1
    ansible_params:
      ansible_connection: local
    labels:
      - stack-beaker-libvirt-only
      - stack-beaker-bare_metal-only

  - name: baremetal
    groups:
      - hypervisor <<<<
    provisioner: beaker-client
    <truncated>
    labels:
      - stack-beaker-libvirt-only

  - name: prod_test_driver
    groups:
      - prod_test_drivers
      - hypervisor <<<<
    provisioner: beaker-client
    <truncated>
    labels:
      - stack-beaker-bare_metal-only

When I issue the teflo command below, the following orchestrate section is generated in the results SDF file.

teflo run -t validate -s SDF.yml -l stack-beaker-libvirt-only

name: orchestrate
description: Orchestrate
remote_workspace: []
resource_check: {}
provision: []
orchestrate:
- name: Install OpenShift on libvirt
  description: null
  orchestrator: ansible
  hosts: <<<<
  - baremetal
  - prod_test_driver
  ansible_playbook:
    name: ansible/install_ocp_libvirt.yml
  ansible_options:
    extra_vars:
      artifact_directory: /home/rywillia/Projects/gitlab/mpqe/mps/solutions/cnv_odf_ocp/venv/cnv_odf_ocp/.teflo/.results/artifacts/sharedArtifacts
  ansible_galaxy_options:
    role_file: requirements.yml
  cleanup:
    name: Uninstall OpenShift on libvirt
    orchestrator: ansible
    hosts: hypervisor
    ansible_playbook:
      name: ansible/uninstall_ocp_libvirt.yml
    ansible_options:
      extra_vars:
        foo: bar
    ansible_galaxy_options:
      role_file: requirements.yml
    labels:
    - stack-beaker-libvirt-only
  labels:
  - stack-beaker-libvirt-only

This causes issues when ansible goes to run the playbook. The inventory file has a group named hypervisor where only one of the hosts in the group is valid and the other does not exist.

When using labels, teflo should not take into account the second resource, which has a non-matching label. This should be the expected resulting orchestrate SDF file:

name: orchestrate
description: Orchestrate
remote_workspace: []
resource_check: {}
provision: []
orchestrate:
- name: Install OpenShift on libvirt
  description: null
  orchestrator: ansible
  hosts: <<<<
  - baremetal
  ansible_playbook:
    name: ansible/install_ocp_libvirt.yml
  ansible_options:
    extra_vars:
      artifact_directory: /home/rywillia/Projects/gitlab/mpqe/mps/solutions/cnv_odf_ocp/venv/cnv_odf_ocp/.teflo/.results/artifacts/sharedArtifacts
  ansible_galaxy_options:
    role_file: requirements.yml
  cleanup:
    name: Uninstall OpenShift on libvirt
    orchestrator: ansible
    hosts: hypervisor
    ansible_playbook:
      name: ansible/uninstall_ocp_libvirt.yml
    ansible_options:
      extra_vars:
        foo: bar
    ansible_galaxy_options:
      role_file: requirements.yml
    labels:
    - stack-beaker-libvirt-only
  labels:
  - stack-beaker-libvirt-only
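The expected behavior could be sketched as a label-aware hosts expansion. This is a hypothetical illustration, not teflo's actual resolution code; the function name and the assets mapping shape are assumptions:

```python
def resolve_hosts(assets, hosts, labels):
    """Expand an orchestrate hosts entry while honoring teflo labels.

    Hypothetical sketch: 'assets' maps asset name -> {'groups': [...],
    'labels': [...]}. A group name listed in 'hosts' only expands to
    assets that both belong to that group and carry one of the labels
    supplied on the command line (all matching assets are kept when
    no label is given).
    """
    resolved = []
    for name, asset in assets.items():
        in_hosts = name in hosts or any(g in hosts for g in asset["groups"])
        label_ok = not labels or set(asset["labels"]) & set(labels)
        if in_hosts and label_ok:
            resolved.append(name)
    return resolved
```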

Failing with ERROR Orchestration failed : 'ansible_user'

2021-11-03 20:59:35,547 INFO Executing playbook : .tox/teflo/interop_qe_openshift/ansible/playbooks/setup_test_driver.yml
2021-11-03 20:59:35,626 INFO 10.0.162.107
2021-11-03 20:59:35,627 ERROR Orchestration failed : 'ansible_user'
2021-11-03 20:59:35,698 ERROR Failed to run orchestration Configure OpenShift test driver 
2021-11-03 20:59:35,699 ERROR Orchestration failed : Failed to perform  Configure OpenShift test driver

We have a use case where we don't use ansible_params when provisioning our resources; instead, we've specified ansible_user in a group_vars file and ansible_ssh_private_key_file in ansible.cfg. Ansible itself is able to run, but teflo fails with the above error.

Reproduce with the following scenario

  • Provision an asset without using ansible_params
  • Use the asset in an orchestrate/execute task in the hosts: key
  • Ansible playbook that templates the hosts key hosts: {{ hosts }}

When provision completes your master inventory file should look like

[dummy-group:children]
dummy-asset

[dummy-asset]
10.0.114.99

[dummy-asset:vars]

What I believe is happening is that the ssh_retry helper function depends on that group's vars, but since the section is empty we get a Python KeyError. https://github.com/RedHatQE/teflo/blob/develop/teflo/helpers.py#L861

Teflo should be flexible enough to support this use case and offer other methods for validating connectivity that do not rely on that specific group var.
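One such method is a raw TCP reachability check. This is a hypothetical sketch of an alternative, not teflo's existing helper; the function name and defaults are assumptions:

```python
import socket

def can_connect(host, port=22, timeout=5):
    """Report whether a plain TCP connection to host:port succeeds.

    Hypothetical alternative to the ssh_retry helper's reliance on
    the ansible_user group var: a raw socket check validates
    reachability without reading anything from the generated
    inventory.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```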

Unable to install Ansible collections

After installing teflo, the following ansible packages/versions are installed:

(teflo) [rywillia@t14s ~]$ pip freeze | grep teflo
teflo==1.2.0
(teflo) [rywillia@t14s ~]$ pip freeze | grep ansible
ansible==4.0.0
ansible-base==2.10.9
ansible-core==2.11.0

When trying to use teflo to install Ansible collections or even invoke the ansible-galaxy command directly, I am getting the following exception:

(teflo) [rywillia@t14s ~]$ ansible-galaxy --help
ERROR! Unexpected Exception, this is probably a bug: cannot import name 'CollectionRequirement' from 'ansible.galaxy.collection' (/home/rywillia/teflo/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py)
the full traceback was:

Traceback (most recent call last):
  File "/home/rywillia/teflo/bin/ansible-galaxy", line 92, in <module>
    mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass)
  File "/home/rywillia/teflo/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 24, in <module>
    from ansible.galaxy.collection import (
ImportError: cannot import name 'CollectionRequirement' from 'ansible.galaxy.collection' (/home/rywillia/teflo/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py)

The problem looks to be with the recent release of Ansible 4.0. Before Ansible 4.0, teflo would install only the ansible and ansible-base packages. Now that ansible 4.0 is out, ansible, ansible-base and ansible-core all get installed, resulting in the exception above. After removing the ansible-base package, everything works as expected.

(t1) [rywillia@t14s ~]$ pip list | grep teflo
teflo               1.2.0
(t1) [rywillia@t14s ~]$ pip list | grep ansible
ansible             4.0.0
ansible-core        2.11.0
(t1) [rywillia@t14s ~]$ ansible-galaxy --help
usage: ansible-galaxy [-h] [--version] [-v] TYPE ...

Perform various Role and Collection related operations.

positional arguments:
  TYPE
    collection   Manage an Ansible Galaxy collection.
    role         Manage an Ansible Galaxy role.

optional arguments:
  --version      show program's version number, config file location, configured module search path, module location, executable location and
                 exit
  -h, --help     show this help message and exit
  -v, --verbose  verbose mode (-vvv for more, -vvvv to enable connection debugging)

Ansible appears to have documented that if you are using Ansible 4.0, you will need to uninstall ansible-base. [1] Ansible 3 was based on the ansible-base package.

[1] https://groups.google.com/g/ansible-devel/c/AeF2En1RGI8
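A setup script could guard against this broken combination up front. This is a hypothetical sketch; the function name and the returned recommendation string are made up for illustration:

```python
def ansible_package_conflicts(installed):
    """Flag pip package combinations known to break ansible-galaxy.

    Hypothetical sketch: 'installed' is the set of distribution names
    reported by pip. A leftover ansible-base (from ansible 3.x) next
    to ansible-core (from ansible 4.x) shadows the new collection
    code, producing the ImportError above.
    """
    if {"ansible-base", "ansible-core"} <= set(installed):
        return ["pip uninstall ansible-base"]
    return []
```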

Resources with same names will cause an error

In teflo version 2.2.0, if the scenario has resources with the same name, e.g. below,

---

name: data_folder
description:

provision:
  - name: laptop
    groups: localhost
    ip_address: 127.0.0.1
    ansible_params:
      ansible_connection: local

  - name: laptop
    groups: localhost
    ip_address: 127.0.0.10
    ansible_params:
      ansible_connection: local

it will throw an error like the one below:


21:33:28  Traceback (most recent call last):
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/teflo/teflo.py", line 665, in notify
21:33:28      self.scenario_graph.remove_resources_from_scenario(scenario)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/teflo/utils/scenario_graph.py", line 387, in remove_resources_from_scenario
21:33:28      self._reports.remove(report)
21:33:28  ValueError: list.remove(x): x not in list
21:33:28  
21:33:28  During handling of the above exception, another exception occurred:
21:33:28  
21:33:28  Traceback (most recent call last):
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/bin/teflo", line 8, in <module>
21:33:28      sys.exit(teflo())
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/click/core.py", line 829, in __call__
21:33:28      return self.main(*args, **kwargs)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/click/core.py", line 782, in main
21:33:28      rv = self.invoke(ctx)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
21:33:28      return _process_result(sub_ctx.command.invoke(sub_ctx))
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
21:33:28      return ctx.invoke(self.callback, **ctx.params)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/click/core.py", line 610, in invoke
21:33:28      return callback(*args, **kwargs)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/click/decorators.py", line 21, in new_func
21:33:28      return f(get_current_context(), *args, **kwargs)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/teflo/cli.py", line 297, in run
21:33:28      cbn.run(tasklist=task)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/teflo/teflo.py", line 426, in run
21:33:28      self.run_all_helper(tasklist, final_passed_tasks, final_failed_tasks, status)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/teflo/teflo.py", line 393, in run_all_helper
21:33:28      self.run_helper(sc=sc, tasklist=tasklist_run)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/teflo/teflo.py", line 517, in run_helper
21:33:28      self.notify('on_start', status, passed_tasks, failed_tasks, scenario=sc)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/teflo/teflo.py", line 672, in notify
21:33:28      self.scenario_graph.remove_resources_from_scenario(scenario)
21:33:28    File "/var/lib/jenkins/workspace/ocp-edge-auto-tests-teflo/ocp-edge-auto/ocp-edge-venv/lib/python3.6/site-packages/teflo/utils/scenario_graph.py", line 379, in remove_resources_from_scenario
21:33:28      self._assets.remove(asset)
21:33:28  ValueError: list.remove(x): x not in list`

This is caused by the fix for #173.

The workaround is to avoid resources with the same name, OR to use the teflo 2.1 release.

The fix for this will be available in the next release, in January 2022.
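Until the fix lands, duplicate names can be caught before running teflo. The sketch below is a hypothetical pre-flight check (the function name and the flattened provision structure are assumptions, not part of teflo's API); it reports any resource name that appears more than once in a scenario's provision list.

```python
# Hypothetical pre-flight check: flag duplicate resource names in a
# parsed provision list before handing the scenario to teflo 2.2.0.
from collections import Counter

def find_duplicate_names(provision):
    """Return resource names that appear more than once, sorted."""
    counts = Counter(asset.get("name") for asset in provision)
    return sorted(name for name, n in counts.items() if n > 1)

# Mirrors the failing scenario above: two assets both named "laptop".
provision = [
    {"name": "laptop", "groups": "localhost", "ip_address": "127.0.0.1"},
    {"name": "laptop", "groups": "localhost", "ip_address": "127.0.0.10"},
]
print(find_duplicate_names(provision))  # -> ['laptop']
```

Renaming one of the flagged assets (e.g. `laptop2`) makes the check return an empty list and avoids the `list.remove(x)` failure.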

Automate release process

  1. Move the release workflow to the master branch.
  2. Trigger the workflow to bump the version (trigger it once develop is merged into master).
  3. Use the teflo notification plugin / notify service to send out the release email; this should also live in master.

Using teflo > 1.2.0 causes an exception when using vars-data

As of 1.2.1, whenever we run the following command:

teflo run -s tests/teflo_sdf/osp_dev_stack_deploy.yml --workspace ./ --vars-data playbooks/group_vars/autobot.yml --vars-data playbooks/group_vars/dev.yml --task provision --labels ceph --labels ceph_vols

We see the following error:

Teflo Framework v1.2.1
Copyright (C) 2021, Red Hat, Inc.
--------------------------------------------------
Traceback (most recent call last):
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/bin/teflo", line 8, in <module>
    sys.exit(teflo())
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/click/core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/click/core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/click/core.py", line 1668, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/click/core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/teflo/cli.py", line 240, in run
    scenario_stream = validate_cli_scenario_option(ctx, scenario, cbn.config, vars_data)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/teflo/helpers.py", line 1643, in validate_cli_scenario_option
    scenario_stream = validate_render_scenario(scenario, config, vars_data)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/teflo/helpers.py", line 1479, in validate_render_scenario
    temp_data.update({item[0]: preprocyaml(item[1], temp_data)})
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/teflo/helpers.py", line 1428, in preprocyaml
    return preprocyaml_str(input, temp_data)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/teflo/helpers.py", line 1415, in preprocyaml_str
    return replace_brackets(input, temp_data)
  File "/var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/lib/python3.6/site-packages/teflo/helpers.py", line 1403, in replace_brackets
    ret = input.replace(input[replace_start:replace_end], temp_data[key], 1)
TypeError: replace() argument 2 must be str, not list
ERROR: InvocationError for command /var/lib/jenkins/workspace/PSI-CZero-OSPQE-Test-Job/scenario/C0/.tox/teflo/bin/teflo run -s tests/teflo_sdf/osp_dev_stack_deploy.yml --workspace ./ --vars-data playbooks/group_vars/autobot.yml --vars-data playbooks/group_vars/dev.yml --task provision --labels ceph --labels ceph_vols (exited with code 1)

To work around this, we need to roll back and pin teflo to 1.2.0. This likely relates to the recent changes from PR #76.
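The traceback ends in `str.replace()` receiving a list as its replacement argument, which suggests a variable in one of the `--vars-data` files has a list value. As a hedged sketch (the function name and the idea of a pre-flight check are ours, not teflo's), the snippet below walks a parsed vars-data mapping and flags list-valued variables that would trigger this `TypeError` in 1.2.1's bracket replacement:

```python
# Hypothetical pre-flight check: report dotted paths of list-valued
# variables in a parsed --vars-data mapping, since teflo 1.2.1's
# replace_brackets() expects a string replacement value.
def find_list_valued_vars(data, path=""):
    problems = []
    if isinstance(data, dict):
        for key, value in data.items():
            child = f"{path}.{key}" if path else key
            problems += find_list_valued_vars(value, child)
    elif isinstance(data, (list, tuple)):
        problems.append(path)  # would break str.replace() in 1.2.1
    return problems

vars_data = {"flavor": "m1.small", "ceph_vols": ["vol1", "vol2"]}
print(find_list_valued_vars(vars_data))  # -> ['ceph_vols']
```

Any flagged variable can then be inlined or converted to a scalar until the regression is fixed, instead of pinning to 1.2.0.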
