dawidd6 / action-ansible-playbook
:gear: A GitHub Action for running Ansible playbooks
License: MIT License
I'm trying to use this GitHub Action on the windows-latest runner, and I get this error:
Run dawidd6/[email protected]
C:\hostedtoolcache\windows\Python\3.10.8\x64\Scripts\ansible-galaxy.exe collection install -r divona/requirements-windows.yml
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.10.8\x64\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\hostedtoolcache\windows\Python\3.10.8\x64\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\hostedtoolcache\windows\Python\3.10.8\x64\Scripts\ansible-galaxy.exe\__main__.py", line 4, in <module>
File "C:\hostedtoolcache\windows\Python\3.10.8\x64\lib\site-packages\ansible\cli\__init__.py", line 42, in <module>
check_blocking_io()
File "C:\hostedtoolcache\windows\Python\3.10.8\x64\lib\site-packages\ansible\cli\__init__.py", line 34, in check_blocking_io
if not os.get_blocking(fd):
AttributeError: module 'os' has no attribute 'get_blocking'
Error: The process 'C:\hostedtoolcache\windows\Python\3.10.8\x64\Scripts\ansible-galaxy.exe' failed with exit code 1
Do you have any idea about this error?
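For what it's worth, `os.get_blocking` is Unix-only on Python 3.10, and Ansible does not support Windows as a control node, so the usual workaround is to run the action from a Linux runner. A hedged sketch of the job layout (the playbook name is illustrative; the requirements path is taken from the log above):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest   # Ansible needs a POSIX control node
    steps:
      - uses: actions/checkout@v4
      - uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: divona_windows.yml          # illustrative playbook name
          requirements: divona/requirements-windows.yml
```

The playbook can still target Windows hosts over WinRM; only the machine running ansible-playbook itself must be POSIX.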
Hi! Nice project. The action just can't find my playbook. Do you know why? I tried manually adjusting paths and loosening file permissions, but my efforts came to nothing. Please help!
Run dawidd6/action-ansible-playbook@v2
with:
  playbook: playbook.yaml
  key: ***
  inventory: |
    [all]
    example.com
    [group1]
    example.com
  known_hosts: .known_hosts
  options: |
    --inventory inventory
    --limit all
/opt/pipx_bin/ansible-playbook playbook.yaml --inventory inventory --limit all --key-file .ansible_key --inventory-file .ansible_inventory --ssh-common-args=-o UserKnownHostsFile=.ansible_known_hosts
ERROR! the playbook: playbook.yaml could not be found
Error: The process '/opt/pipx_bin/ansible-playbook' failed with exit code 1
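A frequent cause of "playbook could not be found" is that the repository was never checked out, or the playbook lives in a subdirectory the action isn't pointed at. A hedged sketch (the `./ansible` path is illustrative):

```yaml
steps:
  - uses: actions/checkout@v4        # without this, playbook.yaml never reaches the runner
  - uses: dawidd6/action-ansible-playbook@v2
    with:
      playbook: playbook.yaml
      directory: ./ansible           # illustrative: point at the folder holding the playbook
      key: ${{ secrets.SSH_PRIVATE_KEY }}
```

The `playbook` path is resolved relative to `directory`, so the two must agree with the repository layout.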
Could you add an option to specify an inventory file instead of typing its contents inline?
I'm trying to pass multiple variables to --extra-vars without using a JSON file, but I'm getting the following error:
ansible-playbook: error: unrecognized arguments:
Separating the arguments with spaces doesn't seem to work.
options: |
--verbose
--inventory ${{ vars.ANSIBLE_HOSTS_FILE_PATH }}
--extra-vars a=something b=somethingelse
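Since `--extra-vars` takes a single argument, space-separated key=value pairs need to be one quoted string (or the flag can be repeated per variable); a sketch of both forms:

```yaml
options: |
  --verbose
  --inventory ${{ vars.ANSIBLE_HOSTS_FILE_PATH }}
  --extra-vars "a=something b=somethingelse"
```

Equivalently, `--extra-vars a=something` and `--extra-vars b=somethingelse` on separate lines should work, since ansible-playbook accepts the flag multiple times.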
After the first action invocation finishes, .ansible_key has already been cleaned up, so when the second invocation's cleanup runs, it fails with this error:
github-actions / build .github#L1: ENOENT: no such file or directory, unlink '.ansible_key'
I need to install https://galaxy.ansible.com/azure/azcollection
To install Azure dependencies:
pip install -r requirements-azure.txt
To install the Azure collection hosted on Galaxy:
ansible-galaxy collection install azure.azcollection
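With this action, the collection half can be expressed declaratively: the collection goes in a Galaxy requirements file passed to the `requirements` input (the pip dependencies still need a separate step). A sketch, with an illustrative file name:

```yaml
# galaxy-requirements.yml (illustrative name, passed via the action's `requirements` input)
collections:
  - name: azure.azcollection
```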
/bin/ansible-galaxy role install -r galaxy-requirements.yml
monolithprojects.github_actions_runner (1.16.0) is already installed, skipping.
/bin/ansible-galaxy collection install -r galaxy-requirements.yml
Process install dependency map
ERROR! Cannot meet requirement community.general:5.8.0 as it is already installed at version '5.6.0'. Use --force to overwrite
Error: The process '/bin/ansible-galaxy' failed with exit code 1
When trying to install an updated version of an Ansible Galaxy package, there is an error message. The error suggests setting the --force flag to overwrite the previous version of the package. It's not currently possible to pass this flag through the action.
Hi,
I'm trying your GitHub Action to execute an Ansible playbook:
name: Test / Linux
on:
  # - push
  - pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/[email protected]
      - name: Run Ansible playbook
        uses: dawidd6/[email protected]
        with:
          playbook: divona_linux.yml
          directory: ./
          requirements: divona/requirements-linux.yml
          # the ssh private key for ansible to use to connect to the servers, stored as "ansible_ssh_private_key" in the GitHub secrets
          # key: ${{ secrets.ansible_ssh_private_key }}
          # the ansible inventory to use, stored as "ansible_inventory" in the GitHub secrets
          inventory: inventories/linux.ini
          options: |
            --verbose
and the output:
Run dawidd6/[email protected]
with:
playbook: divona_linux.yml
directory: ./
requirements: divona/requirements-linux.yml
inventory: inventories/linux.ini
options: --verbose
Error: Unable to locate executable file: ansible-galaxy. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Do you have any idea?
Hello, please check the container: is sshpass installed in it?
[webservers:vars]
ansible_user=***
ansible_sudo_pass=***
ansible_ssh_pass=***
/usr/bin/ansible-playbook ansible-begin.yml --inventory-file .ansible_inventory
PLAY [webservers] **************************************************************
TASK [Set permission to ] *****************************************
fatal: []: FAILED! => {"msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"}
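On an ubuntu-latest runner, a step like the following before invoking the action usually resolves this (a plain apt install, nothing action-specific):

```yaml
- name: Install sshpass            # required for SSH password authentication
  run: sudo apt-get update && sudo apt-get install -y sshpass
```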
First of all, great idea! I wanted to use your action, but my project is laid out like this:
root
└── config
└── ansible
and GitHub Actions does not allow uses and working-directory on the same step:
- name: deploy
  uses: dawidd6/action-ansible-playbook
  # not possible!
  working-directory: ./config/ansible
  with:
    playbook: app.yml
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    options: |
      --inventory inventory.ini
      --user ubuntu
      --tags "deploy"
      --verbose
Is it possible to add the working directory as a with: parameter?
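For reference, the action's `directory` input (used in other examples in this thread) covers this use case; a sketch:

```yaml
- name: deploy
  uses: dawidd6/action-ansible-playbook@v2
  with:
    directory: ./config/ansible    # replaces the disallowed working-directory
    playbook: app.yml
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    options: |
      --inventory inventory.ini
```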
When using this GitHub action, I encountered an error saying Load key ".../.ansible_key": error in libcrypto.
After trying to reproduce the bug, I found that it's because my SSH key ends with CRLF (created on Windows) instead of LF (as on Linux).
This doesn't work:
echo "${{ secrets.SSH_PRIVATE_KEY }}" > .ansible_key
This works:
echo "${{ secrets.SSH_PRIVATE_KEY }}" | tr -d '\r' > .ansible_key
I tried different ways to paste the SSH key into a secret, but it always resulted in a key with the wrong line endings.
I suggest removing the \r characters in the code that copies the key to a file.
Perhaps this is not the right place to ask, but I'm using your action in my pipeline and I need to install a dependency before Ansible even runs on the GitHub runner. That does not seem to work for some reason. Any idea what this could be?
deploy:
  name: Deploy
  runs-on: ubuntu-latest
  needs: build-docker
  steps:
    - name: Check out the codebase.
      uses: actions/checkout@v2
    - name: Uses Python 3.11
      uses: actions/setup-python@v3
      with:
        python-version: '3.11.0-alpha.1'
    - name: Install Jmespath Dependency
      run: |
        pip3 install jmespath
        pip3 freeze # just to see what is installed
        sudo apt install -y python3-jmespath
    - name: Run playbook
      uses: dawidd6/action-ansible-playbook@v2
      with:
        playbook: provision_vps.yml
        directory: ./ansible
        key: ${{secrets.ANSIBLE_PRIVATE_KEY}}
        vault_password: ${{secrets.ANSIBLE_VAULT_PASS}}
        options: |
          --inventory hosts.inventory
          --verbose
The error message I get is:
fatal: [cache.hugin.chat]: FAILED! => {"msg": "You need to install \"jmespath\" prior to running json_query filter"}
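On hosted runners, Ansible lives in its own pipx virtualenv (/opt/pipx/venvs/ansible-core, per other reports in this thread), so packages installed with the system pip aren't visible to it. One possible workaround is injecting the package into that venv before the playbook step:

```yaml
- name: Install jmespath into Ansible's virtualenv
  run: pipx inject ansible-core jmespath   # targets the pipx venv Ansible runs from
```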
I have a client that uses Vault-IDs not just Vault Passwords.
Example:
ansible-playbook tests/test.yml --vault-id dev@~/vault-passwords/dev
Options could be to either
I know there are several other considerations here, such as what happens when someone passes both vault_password and vault_password_file, or vault_id without vault_password_file. However, I think these could be handled fairly easily.
If I have some time, I may try to tackle this myself and open a PR.
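For concreteness, a hypothetical shape of such an input (the `vault_id` name is my invention, not part of the action's current API):

```yaml
- uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: tests/test.yml
    vault_password: ${{ secrets.DEV_VAULT_PASSWORD }}   # hypothetical pairing
    vault_id: dev                                       # hypothetical input name
```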
I'm getting warnings about an outdated Node.js version when I run this action. The warning recommends updating to Node.js version 16.
Node.js 12 actions are deprecated. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/. Please update the following actions to use Node.js 16: dawidd6/action-ansible-playbook, dawidd6/action-ansible-playbook
Has anyone seen a custom callback plugin for Ansible which utilizes the various workflow-commands in GitHub to improve the overall readability?
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions
Best I've gotten so far is using debug.
Here is an example of using env vars to set the callback.
uses: 'dawidd6/[email protected]'
env:
  ANSIBLE_STDOUT_CALLBACK: debug
  ANSIBLE_DISPLAY_FAILED_STDERR: yes
  ANSIBLE_CALLBACK_FORMAT_PRETTY: yes
Hey team,
Do you know how to access an instance through a tunnel via a bastion host? Or can I pass Ansible config in the arguments?
Thank you
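One common pattern (host and user names are illustrative) is to pass SSH's ProxyJump option through ansible-playbook's --ssh-common-args in the options input:

```yaml
- uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: playbook.yml
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    options: |
      --inventory inventory.ini
      --ssh-common-args "-o ProxyJump=jumpuser@bastion.example.com"
```

Alternatively, the same can live in an ansible.cfg or inventory via `ansible_ssh_common_args`.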
Hi there 👋,
Thank you for creating this action! It's very helpful. I'm using it to run ansible-playbook with --check --diff whenever a pull request is created.
I'd love the ability to take the Ansible output and post it as an issue comment in GitHub, but I believe the only way to do that across Actions is using the outputs metadata syntax.
If you are open to such a change, I believe we can do so while running the exec in action-ansible-playbook/main.js (line 84 in fbcc2c2), with something like core.setOutput, which is documented as:
"Outputs can be set with setOutput, which makes them available to be mapped into inputs of other actions to ensure they are decoupled."
Is this something you would be open to?
Without sudo, the command resolves to /opt/pipx_bin/ansible-playbook .... With sudo, we get /usr/bin/sudo ansible-playbook ..., which results in the error.
I'm guessing that it's losing the PATH. Perhaps sudo --preserve-env would fix this?
Running in a GitHub Actions workflow directly, this is also the case; I currently have to run sudo $(which ansible-playbook) to resolve the command path.
Can someone please tell me how to pass the ansible user name through the action?
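As other snippets in this thread show, the remote user can be passed through the `options` input using ansible-playbook's own flag:

```yaml
- uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: playbook.yml
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    options: |
      --user ubuntu        # illustrative user name
```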
I want to use your action, but I need to install my playbook requirements first in order to make it work. I think that is currently not possible (I don't see any reference to it in action.yml).
An approach can be:
- name: Execute playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: playbook.yml
    requirements: requirements.yml
If the requirements field is present, the action would first download all required roles/collections and then execute the playbook. What do you think?
Nice work!
I'm getting warnings in my GitHub Actions runs:
Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: dawidd6/action-ansible-playbook@v2. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
Seems it is time for another version bump.
Hi! I use hashi_vault.vault_read in one of my playbooks, and I see that the GitHub Actions ubuntu-latest host uses "ansible_playbook_python": "/opt/pipx/venvs/ansible-core/bin/python".
When I run pip install hvac or pipx install --include-deps hvac on the Actions runner, hvac (which I need to make vault_read work) does get installed, but apparently not where Ansible can use it.
So, how should I install it on the GitHub Actions runner to be able to use the hashi_vault.vault_read module? What do you think?
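Given the interpreter path above, one likely fix is installing hvac with that venv's own pip, so the hashi_vault plugin can import it (the venv path is taken from the ansible_playbook_python value quoted above):

```yaml
- name: Install hvac where Ansible can import it
  run: /opt/pipx/venvs/ansible-core/bin/python -m pip install hvac
```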
I am seeing this error on my build on GitHub:
Post job cleanup.
==> Deleting ".ansible_key" file
##[error]ENOENT: no such file or directory, unlink '.ansible_key'
The Ansible playbook executes successfully, but every run ends up in a failed state because of the problem in the post phase.
I have a configuration that specifies a directory; perhaps that is relevant.
In order to make sure Ansible has access to the boxes, I needed to remove the following line from my inventory file. I also had to comment out the known_hosts setting.
I had the line in all:vars section
[all:vars]
ansible_ssh_private_key_file = ~/.ssh/id_rsa
After removing it, the playbook action could connect.
Warning: The save-state command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
Hi there,
What's the rationale behind forcing colors in the terminal in action-ansible-playbook/main.js (line 82 in fbcc2c2)?
Would you be open to a change that optionally allows forcing the color (and defaults to true for backwards compatibility)? Alternatively, could this line be removed altogether?
My use case is this: I'm running this Action and taking the output of the run and adding it as an issue comment. Unfortunately, right now, no matter what is set in ansible.cfg for nocolor, the ANSI color escape sequences are always present. Ideally these could be conditionally turned off.
My Ansible setup requires multiple inventory files and multiple extra vars. We also use a vault file.
Please help me with how to provide multiple inventories and extra vars.
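ansible-playbook itself accepts repeated --inventory and --extra-vars flags (including @file to load a vars file), so everything can be stacked in the options input. File names here are illustrative:

```yaml
- uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: site.yml                       # illustrative
    vault_password: ${{ secrets.VAULT_PASSWORD }}
    options: |
      --inventory inventory/prod.ini
      --inventory inventory/shared.ini
      --extra-vars "region=eu-west-1"
      --extra-vars "@vars/extra.yml"
```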
Here's the invocation:
- name: Upload ISOs to vSphere Datastore
  if: |
    (contains(github.ref_name, 'release') || contains(github.ref_name, 'develop')) &&
    inputs.FORCE_UPLOAD == true
  uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: copy_iso.yml
    directory: playbooks
    vault_password: ${{ secrets.IT_VAULT_PASSWORD }}
    requirements: requirements.yml
    options:
      --extra-vars workspace=${{ GITHUB.WORKSPACE }}
Here's the resulting run. It doesn't look like vault_password is actually getting set?
Run dawidd6/action-ansible-playbook@v2
with:
playbook: copy_iso.yml
directory: playbooks
requirements: requirements.yml
options: --extra-vars workspace=/opt/actions-runner/_work/workflow-testing/workflow-testing
env:
BRANCH_REF: develop
ARTIFACTORY_TOKEN: ***
PARTNERS_ARTIFACTORY_TOKEN: ***
PARTNERS_ARTIFACTORY_USER: svc-bv-jenkins
PARTNERS_ARTIFACTORY_URL: partners.artifactory.<company>.com
PATH: /home/admin/pyenv/versions/3.8.16/envs/devops/bin:/home/admin/.local/bin:/home/admin/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
VIRTUAL_ENV: /home/admin/pyenv/versions/3.8.16/envs/devops
/home/admin/pyenv/versions/3.8.16/envs/devops/bin/ansible-galaxy collection install -r requirements.yml
Process install dependency map
Starting collection install process
Skipping 'community.vmware' as it is already installed
/home/admin/pyenv/versions/3.8.16/envs/devops/bin/ansible-playbook copy_iso.yml --extra-vars workspace=/opt/actions-runner/_work/workflow-testing/workflow-testing
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [Copy ISOs to GREEN-VSRN01-vSAN/isos and VSRN01/ISOs] *********************
TASK [Prepare list of ISOs to copy] ********************************************
ok: [localhost]
TASK [include_tasks] ***********************************************************
included: /opt/actions-runner/_work/workflow-testing/workflow-testing/playbooks/include_iso_upload.yml for localhost
included: /opt/actions-runner/_work/workflow-testing/workflow-testing/playbooks/include_iso_upload.yml for localhost
included: /opt/actions-runner/_work/workflow-testing/workflow-testing/playbooks/include_iso_upload.yml for localhost
included: /opt/actions-runner/_work/workflow-testing/workflow-testing/playbooks/include_iso_upload.yml for localhost
TASK [Copy iso files to datastores] ********************************************
fatal: [localhost]: FAILED! => {"msg": "Attempting to decrypt but no vault secrets found"}
Am I expecting incorrect behavior for this parameter?
I was trying to use an aws_ec2 dynamic inventory:
[WARNING]: * Failed to parse /home/runner/work/something/someone/ansible/aws_ec2.yml with
ansible_collections.amazon.aws.plugins.inventory.aws_ec2 plugin: The ec2 dynamic inventory plugin requires boto3 and botocore.
Trying to run a playbook on a private runner to update an internal cluster
build:
  name: Run free_disk_space
  runs-on: self-hosted
  steps:
    - name: checkout repo content
      uses: actions/checkout@v3 # checkout the repository content
    - name: Run playbook
      uses: dawidd6/action-ansible-playbook@v2
      with:
        playbook: all_vms/free_disk_space.yml
        key: ${{secrets.ANSIBLE_KEY}}
    - name: ls
      if: always()
      run: ls -la /opt/pipx_bin
But the self-hosted runner gets an error that ansible-playbook can't be found or is not executable.
runs-on: ubuntu-latest works, but then the servers can't be accessed. There ansible-playbook is found in /opt/pipx_bin, but that path does not exist on the self-hosted runner.
So where does ansible-playbook normally live, and how do I get it running on a private runner?
Edit: I just realized this action does not actually install Ansible itself like the kubectl GitHub action does, so the solution is to first install Ansible.
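A minimal pre-step for a self-hosted runner, under the assumption that Python 3 and pip are already present:

```yaml
- name: Install Ansible                        # self-hosted runners don't ship it by default
  run: |
    pip3 install --user ansible
    echo "$HOME/.local/bin" >> "$GITHUB_PATH"  # make ansible-playbook resolvable in later steps
```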
Currently I need to add the vault file to the tree where my playbooks are, but it would be awesome to have something in the with: section to refer to a default vault file.
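In the meantime, ansible-playbook's own flag can point at a vault password file anywhere in the checkout via the options input (the path is illustrative):

```yaml
- uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: playbook.yml
    options: |
      --vault-password-file .secrets/vault_pass.txt   # illustrative path
```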
I have created a playbook that works fine locally.
Yesterday, suddenly out of nowhere, when I ran the playbook from GitHub Actions I got the following error:
If I remove the vars from the first group, the error changes to:
Can you point me in the right direction as to what I'm doing wrong? The private key of my server has been added to the GitHub repo, so that should not be the issue.
I'm trying to get the use of this action to fail instead of pass when no hosts were updated. This can happen for various reasons but I don't want my CI to think the changes were actually deployed when no hosts were found from the inventory file.
This could also just be a configuration option, to avoid introducing a breaking change.
Alternatively, I've tried to store the output of this action and grep it for "no hosts matched", but the issue is that I'd need this action to write a GitHub output variable; I can't just read the stdout of a step.
Something seems to be wrong with fs.writeFileSync; I'm getting this error:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/home/runner/work/provisioning/provisioning/deployment/.ansible_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/runner/work/provisioning/provisioning/deployment/.ansible_key": bad permissions
Checking the key file after the action, I get the following output:
-rw-r--r-- 1 runner docker 411 Apr 30 19:57 /home/runner/work/provisioning/provisioning/deployment/.ansible_key
I've forked the action and added an exec of chmod 600 after creating the file, and it worked. I'm not sure how to solve it properly yet.
I have the following use case:
- name: Run ansible
  uses: dawidd6/action-ansible-playbook@671974ed60e946e11964cb0c26e69caaa4b1f559
  with:
    directory: ansible/
    playbook: ${{ inputs.playbook }}.yaml
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    requirements: galaxy-requirements.yml
    options: |
      --inventory "inventory/${{ inputs.environment }}.yaml"
      --limit "${{ inputs.limit }}"
      --extra-vars "${{ inputs.extra_vars }}"
      --tags "${{ inputs.tags }}"
      --skip-tags "${{ inputs.skip_tags }}"
if 'extra_vars' is using the json-string method, it can contain a value like so:
{
  "attr-a": true,
  "attr-b": "true"
}
My expectation is that 'attr-a' has the boolean true whilst 'attr-b' has the string 'true'. However, the reality is that both attributes have boolean true.
I am getting this message when I attempt to do an ansible run against any hosts for a hosted runner:
fatal: [chubb-dev-web]: UNREACHABLE! => changed=false
msg: 'Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host'
unreachable: true
As far as I can see, there is no output from sshd at DEBUG level.
Here is the SSH key setup, using a jumpbox (which may be causing the issue: not sure): https://gist.github.com/ticktockhouse/81629e9c97817a2a0cd741dc4293abda
As you can see, the "known_hosts" values are stored in secrets, gleaned using ssh-keyscan. Of course, it's possible that this may have been stored incorrectly; it would be good to know if I have got this correct.
Many thanks in advance!
Hello,
This is not necessarily an issue, but I would like to get your attention on using Python 3 instead of Python 2.7 as the interpreter, because Python 2.7 will be deprecated very soon. It would be nice to use Python 3 as the default and make Python 2 optional.
This will also help avoid those DEPRECATION WARNINGS when running playbooks.
Just a thought to consider. Thanks for the awesome Action you created; nice and simple.
My dynamic inventory requires the requests Python package to load the hosts from an API; however, I'm not able to install the package via pip. I guess because the playbook action uses its own virtual env?
- name: Checkout
  uses: actions/checkout@v4
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: '3.11'
    cache: 'pip'
- name: Install Ansible dependencies
  run: pip install -r requirements.txt
- name: Run playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    directory: ./ansible
    playbook: swarm-apply.yaml
    key: ${{secrets.SSH_PRIVATE_KEY}}
    requirements: galaxy-requirements.yaml
I receive this error message telling me that the requests package is still missing, even though I installed it via pip one step before.
[WARNING]: * Failed to parse /home/runner/work/myproject/deployment/ansible/inventory/prod.hcloud.yaml with
ansible_collections.hetzner.hcloud.plugins.inventory.hcloud plugin: Failed to import the required Python library (requests)
on fv-az189-444's Python /opt/pipx/venvs/ansible-core/bin/python. Please read the module documentation
and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter,
please consult the documentation on ansible_python_interpreter
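The interpreter shown in the warning (/opt/pipx/venvs/ansible-core/bin/python) is not the one setup-python's pip installed into; one possible fix is injecting requests into that pipx venv before the playbook step:

```yaml
- name: Install requests where the inventory plugin can import it
  run: pipx inject ansible-core requests
```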
Is more documentation available? Is it possible to just provide a path to an inventory file, as you would normally do, instead of the inline one in the example? Is key required? Thanks!
Hi there,
Thanks for writing this action! We use a lot of internally developed Ansible roles that are stored in GitHub Enterprise and need to be able to grab them with ansible-galaxy; however, we are getting "Host key verification failed." during the Galaxy role install process. This is for private repos on GitHub Enterprise; we expect the same SSH key provided for the playbook run to be used as authentication for the git URLs. I've added the known_hosts content for our GitHub server and we're still getting the same error.
My questions are: is the known_hosts content used for SSH-based Galaxy installs?
Example requirements.yaml with a git URL:
roles:
  - name: some-private-ansible-role
    scm: git
    src: "[email protected]:OCC/ansible-role-private-repo.git"
    version: 0.0.1rc1