southalc / podman
Puppet module for podman
License: Apache License 2.0
podman 4.4 has quadlet support
Is it realistic to just have Puppet create quadlet files and then let the generator create the units before Puppet starts them?
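A minimal sketch of that approach, assuming podman >= 4.4 and placeholder unit/image names: Puppet deploys a plain quadlet file, a daemon-reload lets the systemd generator build the unit, and Puppet then manages the resulting service.

```puppet
# Hypothetical sketch: 'myapp' and the image are placeholders.
file { '/etc/containers/systemd/myapp.container':
  ensure  => file,
  content => @(UNIT),
    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target
    | UNIT
  notify  => Exec['quadlet_daemon_reload'],
}

exec { 'quadlet_daemon_reload':
  command     => 'systemctl daemon-reload',
  path        => '/bin:/usr/bin',
  refreshonly => true,
}

# The generator names the unit after the file: myapp.service
service { 'myapp.service':
  ensure  => running,
  require => Exec['quadlet_daemon_reload'],
}
```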
Hi,
Are there any plans for southalc/podman to add Puppet 8.x and puppetlabs/stdlib 9.x support? We're planning to migrate from Puppet 6 to 8, and we don't currently use this module but would like to if it supported the above.
Thanks
Ian
While it's a slightly strange situation to be in, steps to reproduce:
systemctl start podman.socket
systemctl start --user podman.socket
both as root.
Results in:
Error: Facter: Error while resolving custom fact fact='podman', resolution='<anonymous>': can't modify frozen String: "Cannot merge \"/run/podman/podman.sock\":String and \"/run/user/0/podman/podman.sock\":String"
The unit file generation can use the --files --name --container-prefix podman
options to simplify creating the files.
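For example (hypothetical container name), a sketch of generating the unit file in place rather than templating it:

```puppet
# Sketch only: 'mycontainer' is a placeholder. With --container-prefix podman
# and --name, the generated file is podman-mycontainer.service.
exec { 'podman_generate_service_mycontainer':
  command => 'podman generate systemd --files --name --container-prefix podman mycontainer',
  cwd     => '/etc/systemd/system',
  path    => '/sbin:/usr/sbin:/bin:/usr/bin',
  creates => '/etc/systemd/system/podman-mycontainer.service',
}
```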
The following Hiera configuration on CentOS 8 with puppet-agent-5.5.22-1.el8.x86_64 and the latest Git master versions of the types and podman modules works fine (in the sense that the container gets deployed and starts up fine):
types::file:
  '/etc/sfacctd_tee':
    ensure: 'directory'
  '/etc/sfacctd_tee/sfacctd.conf':
    content: |
      plugins: tee
      tee_receivers: /etc/pmacct/tee_receivers.lst
  '/etc/sfacctd_tee/tee_receivers.lst':
    content: |
      id=1 ip=[::1]:6344
podman::containers:
  sfacctd_tee:
    image: 'docker.io/pmacct/sfacctd:bleeding-edge'
    flags:
      net: host
      volume: '/etc/sfacctd_tee:/etc/pmacct'
    require:
      - 'File[/etc/sfacctd_tee/sfacctd.conf]'
      - 'File[/etc/sfacctd_tee/tee_receivers.lst]'
    subscribe:
      - 'File[/etc/sfacctd_tee/sfacctd.conf]'
      - 'File[/etc/sfacctd_tee/tee_receivers.lst]'
However, if the container needs to be restarted because one of the subscribed files changed, it fails halfway through: it manages to stop and remove the container, but then encounters a failure that prevents it from starting back up again:
[tore@sflow-osl2 ~]$ sudo rm /etc/sfacctd_tee/sfacctd.conf
[tore@sflow-osl2 ~]$ sudo systemctl start puppet-run; sudo journalctl -fu puppet-run -n0 -o cat
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for sflow-osl2.n.bitbit.net
Info: Applying configuration version 'production, bitbit-net-feature-sflow-11-fjv6b, commit=20925515, 2021-02-11 11:52:12 +0100'
Notice: /Stage[main]/Types/Types::Type[file]/File[/etc/sfacctd_tee/sfacctd.conf]/ensure: defined content as '{md5}a6b9d0bf81a8af1787809c04fe65145d' (corrective)
Info: /Stage[main]/Types/Types::Type[file]/File[/etc/sfacctd_tee/sfacctd.conf]: Scheduling refresh of Podman::Container[sfacctd_tee]
Info: Podman::Container[sfacctd_tee]: Scheduling refresh of Exec[podman_systemd_reload]
Info: Podman::Container[sfacctd_tee]: Scheduling refresh of Exec[verify_container_flags_sfacctd_tee]
Info: Podman::Container[sfacctd_tee]: Scheduling refresh of Exec[verify_container_image_sfacctd_tee]
Info: Podman::Container[sfacctd_tee]: Scheduling refresh of Exec[podman_remove_container_and_image_sfacctd_tee]
Info: Podman::Container[sfacctd_tee]: Scheduling refresh of Exec[podman_remove_container_sfacctd_tee]
Info: Podman::Container[sfacctd_tee]: Scheduling refresh of Exec[podman_create_sfacctd_tee]
Info: Podman::Container[sfacctd_tee]: Scheduling refresh of Exec[podman_generate_service_sfacctd_tee]
Info: Podman::Container[sfacctd_tee]: Scheduling refresh of Service[podman-sfacctd_tee]
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_systemd_reload]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[verify_container_flags_sfacctd_tee]: Triggered 'refresh' from 1 event
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[verify_container_flags_sfacctd_tee]: Scheduling refresh of Exec[podman_remove_container_sfacctd_tee]
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[verify_container_image_sfacctd_tee]: Triggered 'refresh' from 1 event
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[verify_container_image_sfacctd_tee]: Scheduling refresh of Exec[podman_remove_container_and_image_sfacctd_tee]
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_remove_container_and_image_sfacctd_tee]: Triggered 'refresh' from 2 events
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_remove_container_and_image_sfacctd_tee]: Scheduling refresh of Exec[podman_create_sfacctd_tee]
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_remove_container_sfacctd_tee]/returns: Error: error inspecting object: no such container sfacctd_tee
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_remove_container_sfacctd_tee]/returns: Error: Failed to evict container: "": Failed to find container "sfacctd_tee" in state: no container with name or ID sfacctd_tee found: no such container
Error: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_remove_container_sfacctd_tee]: Failed to call refresh: 'image=$(podman container inspect sfacctd_tee --format '{{.ImageName}}')
systemctl stop podman-sfacctd_tee || podman container stop sfacctd_tee
podman container rm --force sfacctd_tee
' returned 1 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_remove_container_sfacctd_tee]: 'image=$(podman container inspect sfacctd_tee --format '{{.ImageName}}')
systemctl stop podman-sfacctd_tee || podman container stop sfacctd_tee
podman container rm --force sfacctd_tee
' returned 1 instead of one of [0]
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_create_sfacctd_tee]: Dependency Exec[podman_remove_container_sfacctd_tee] has failures: true
Warning: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_create_sfacctd_tee]: Skipping because of failed dependencies
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_create_sfacctd_tee]: Unscheduling all events on Exec[podman_create_sfacctd_tee]
Warning: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_generate_service_sfacctd_tee]: Skipping because of failed dependencies
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_generate_service_sfacctd_tee]: Unscheduling all events on Exec[podman_generate_service_sfacctd_tee]
Warning: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Service[podman-sfacctd_tee]: Skipping because of failed dependencies
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Service[podman-sfacctd_tee]: Unscheduling all events on Service[podman-sfacctd_tee]
Info: Podman::Container[sfacctd_tee]: Unscheduling all events on Podman::Container[sfacctd_tee]
Info: Stage[main]: Unscheduling all events on Stage[main]
Notice: Applied catalog in 26.64 seconds
puppet-run.service: Main process exited, code=exited, status=6/NOTCONFIGURED
puppet-run.service: Failed with result 'exit-code'.
Running Puppet once more corrects the problem:
[tore@sflow-osl2 ~]$ sudo systemctl start puppet-run; sudo journalctl -fu puppet-run -n0 -o cat
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for sflow-osl2.n.bitbit.net
Info: Applying configuration version 'production, bitbit-net-feature-sflow-11-fjv6b, commit=20925515, 2021-02-11 11:52:12 +0100'
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_create_sfacctd_tee]/returns: executed successfully (corrective)
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_create_sfacctd_tee]: Scheduling refresh of Exec[podman_generate_service_sfacctd_tee]
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_generate_service_sfacctd_tee]: Triggered 'refresh' from 1 event
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Exec[podman_generate_service_sfacctd_tee]: Scheduling refresh of Service[podman-sfacctd_tee]
Notice: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Service[podman-sfacctd_tee]/ensure: ensure changed 'stopped' to 'running' (corrective)
Info: /Stage[main]/Podman/Podman::Container[sfacctd_tee]/Service[podman-sfacctd_tee]: Unscheduling refresh on Service[podman-sfacctd_tee]
Notice: Applied catalog in 26.71 seconds
^C
[tore@sflow-osl2 ~]$ sudo podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3aab63fb4d7 docker.io/pmacct/sfacctd:bleeding-edge -f /etc/pmacct/sf... 5 minutes ago Up 5 minutes ago sfacctd_tee
I have the issue where firewalld is being restarted by puppet and the podman rules vanish, does this module get around that somehow (such as redeploying the container), or is that a known limitation?
First of all, thank you very much for spending the time to create and maintain a puppet module for Podman. I really appreciate the effort you put into this.
I am already using it for rootless-container deployment and I found something I wanted to share with you.
When I create a container from some image with a pinned version like myrandomcontainer:1.3
and do a Puppet run, it gets set up correctly. But when I change the image to myrandomcontainer:1.4
in my Puppet code, the container does not get updated.
If I had used :latest
and the latest tag were updated, the container would be upgraded automatically as intended, so that case works. But I prefer version pinning in production environments for reproducibility.
I searched for the relevant part in the code, which is this one:
Lines 168 to 191 in 380bebb
The update mechanism only compares the digest of the local image with the digest of the same image tag in the container registry. This means that if I change the image in my Puppet config, it still compares against the old image tag stored in the container config on the machine. So even if I change the Puppet code to myrandomcontainer:1.4,
it still checks whether the local myrandomcontainer:1.3
is the same as myrandomcontainer:1.3
in the container registry.
My question is whether this is the intended behavior.
If not, I would be willing to provide a pull request with a fix for this.
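A sketch of the kind of additional check being described here, with hypothetical names throughout; it compares the image the running container was created from against the image currently declared in Puppet, so a tag change alone would trigger a redeploy:

```puppet
# Sketch only: container and image names are placeholders, modeled on the
# module's existing verify_container_* exec resources.
exec { 'verify_declared_image_mycontainer':
  command  => 'true',
  provider => 'shell',
  unless   => @("END"/$),
    running="\$(podman container inspect mycontainer --format '{{.ImageName}}')"
    test "\${running}" = "registry.example.com/myrandomcontainer:1.4"
    | END
}
```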
To make podman auto-update work, the "--new" option must be used:
See: https://docs.podman.io/en/latest/markdown/podman-auto-update.1.html
But when I try it with:
service_flags => {
'new' => ''
}
It fails with:
returns: Error: accepts 1 arg(s), received 2
Failed to call refresh: 'podman generate systemd --new '' foo
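The container.pp documentation says that flags which take no argument should use an undef value (~ or null in YAML). Assuming service_flags are rendered the same way as flags, this untested sketch might avoid passing the empty string as an argument:

```puppet
# Untested sketch: 'foo' and the image are placeholders.
podman::container { 'foo':
  image         => 'docker.io/library/foo',
  service_flags => {
    'new' => undef,  # valueless flag; intent is '--new' with no argument
  },
}
```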
My spec tests for a class including podman
fail:
1) role::sflow on debian-8-x86_64 Debian is expected to compile into a catalogue without dependency cycles
Failure/Error: it { should compile.with_all_deps }
error during compilation: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Unknown variable: 'requires'. (file: /home/test/env/modules/podman/manifests/container.pp, line: 162, column: 21) (file: /home/test/env/design/profile/manifests/sfacctd_tee.pp, line: 4) on node 8b31bfdca048.home
# ./spec/classes/role__sflow_spec.rb:9:in `block (4 levels) in <top (required)>'
The variable in question is, as far as I can tell, only declared here:
Lines 107 to 110 in 380bebb
This declaration is enclosed in the following block:
Lines 89 to 90 in 380bebb
The variable is being referenced in several places outside of that block, however. I believe this is what trips up the tests.
Hi,
I'm using several versions of puppetserver 7 and 8, several versions of agents, and
mod 'southalc-podman', '0.6.7'
When running the agent, or just facter --puppet, I get this error message on some of my machines:
[2023-12-23 21:33:35.625281 ] ERROR Facter - Error while resolving custom fact fact='podman', resolution='': Could not deep merge all chunks (Original error: Cannot merge "/run/podman/podman.sock":String and "/run/user/0/podman/podman.sock":String at root["socket"]["root"]), ensure that chunks return either an Array or Hash or override the aggregate block
when both /run/podman/podman.sock and /run/user/0/podman/podman.sock do exist.
Actually, this should not really happen, because running a user-specific podman socket for the root user does not make much sense. I'm still looking for what enabled it; some other script must have erroneously enabled a user-specific podman socket for root. Even though it does not make much sense, it is still a valid system configuration, and a Puppet module must be able to deal with it.
However, this condition triggered the error message and showed that this module's facter chunk functions return the wrong data type: a String where an Array or Hash is expected.
regards
When I use this module I currently get a warning:
Warning: This function is deprecated, please use stdlib::merge instead. at ["/opt/user/modules/podman/manifests/container.pp", 97]:
(location: /opt/user/modules/stdlib/lib/puppet/functions/deprecation.rb:35:in `deprecation')
From the podman v0.6.7 module on forge.puppet.com:
enable
Data type: Boolean
Status of the automatically generated systemd service for the container. Valid values are 'running' or 'stopped'.
Default value: true
There is an issue with the code in container.pp. The variable $service_unit_file is defined as:
${User[$user]['home']}/.config.systemd/user/podman-${container_name}.service
In this code block, you don't specify the full path of the service:
$command_sp @("END"/L)
${systemctl} ${startup} podman-${container_name}.service
${systemctl} ${action} podman-${container_name}.service
| END
This code should be changed to:
${systemctl} ${startup} ${service_unit_file}
There is no user
parameter to the podman::network
type, so it is impossible to create a network for a non-root user.
Hi,
is the tag 'push' there by intention, or is it just a mistake?
If it's a mistake, can you please delete it, as it makes tag sorting a bit difficult.
Thanks
Thomas
To use a proxy server for downloading images, the http_proxy environment variable must be set.
But I can't see any option for it.
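One possible workaround, sketched with a hypothetical proxy address and image, is to pre-pull the image from an exec that sets the proxy variables itself:

```puppet
# Sketch only: proxy URL and image are placeholders.
exec { 'podman_pull_behind_proxy':
  command     => 'podman pull docker.io/library/nginx',
  environment => [
    'http_proxy=http://proxy.example.com:3128',
    'https_proxy=http://proxy.example.com:3128',
  ],
  path        => '/sbin:/usr/sbin:/bin:/usr/bin',
  unless      => 'podman image exists docker.io/library/nginx',
}
```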
I need to be able to do:
podman run -e MYSQL_HOST=127.0.0.1 -e SECOND_ENV=foo
I don't see how to do that. I've found no "-e" equivalent in the code or documentation.
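The module's flags hash uses the long form of each option, so -e maps to env, and repeatable flags are written as arrays (as in the module's own publish examples). A sketch with a placeholder image:

```puppet
podman::container { 'myapp':
  image => 'docker.io/library/mysql',  # placeholder image
  flags => {
    env => [
      'MYSQL_HOST=127.0.0.1',
      'SECOND_ENV=foo',
    ],
  },
}
```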
It would be nice if the numbered releases of this module (and https://github.com/southalc/types for that matter) were tagged with git-tag(1)
, so that specific versions of the module can be installed by setting :ref
or :tag
in Puppetfile
to refer to the desired version.
This is helpful when corporate policy do not allow the build infrastructure to have dependencies on third-party services (such as Puppet Forge and GitHub). To that end we mirror the upstream Git repos of all the Puppet modules we use to an on-prem GitLab instance, and point Puppetfile
at those mirrors using :git
.
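With tags in place, a Puppetfile entry pointing at an on-prem mirror could look like this (hypothetical mirror URL; tag name assumes the release is tagged as its version):

```ruby
mod 'podman',
  :git => 'https://gitlab.example.com/mirrors/southalc-podman.git',
  :tag => 'v0.6.7'
```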
Exec["verify_container_image_${handle}"] unconditionally calls skopeo inspect docker://${image}, which hits the Docker pull rate limit. If the currently running image and the desired image are the same, there is no reason to inspect the remote image.
Hi,
It would be great if the module could support podman login
on a per-user/per-registry basis.
I am doing this right now for root
with a simple exec,
but looking closer at the module code there is some complexity in supporting it rootless, and this would be better done within the module (keeping XDG_RUNTIME_DIR in sync is the main problem, I think).
I may come back to this later when I have more time, but I'm leaving this here in case someone else has the bandwidth to look at it.
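For the root case mentioned above, a simple exec sketch (hypothetical registry, account, and credential source) might look like:

```puppet
# Sketch only: registry, user, and password file are placeholders.
exec { 'podman_login_registry':
  command  => 'podman login -u svc_deploy -p "$(cat /root/.registry_pass)" registry.example.com',
  unless   => 'podman login --get-login registry.example.com',
  path     => '/sbin:/usr/sbin:/bin:/usr/bin',
  provider => 'shell',
}
```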
Podman exposes a varlink socket that can be enabled as a service.
https://podman.io/blogs/2019/01/16/podman-varlink.html
I'm thinking it could be its own class, invoked like this:
podman::varlink:
  enabled: true
  socket: /run/podman/io.podman
Also, thanks for this module.
Line 143 in 41d5506
Nothing big. Looks like a copy/paste error.
Hello,
I'm currently testing this module to provision a CentOS 8 VM with some podman containers.
The installation works correctly: the image is pulled and launched.
However, at one point a refresh is launched and fails, which leads to the removal of the image.
This is the failed check
Exec[verify_container_image_element](provider=shell): Executing check '["/bin/sh", "-c", "if podman container exists element\n then\n image_name=$(podman container inspect element --format '{{.ImageName}}')\n running_digest=$(podman image inspect ${image_name} --format '{{.Digest}}')\n latest_digest=$(skopeo inspect docker://docker.io/vectorim/element-web | /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))[\"Digest\"]')\n [[ $? -ne 0 ]] && latest_digest=$(skopeo inspect --no-creds docker://docker.io/vectorim/element-web | /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))[\"Digest\"]')\n test -z \"${latest_digest}\" && exit 0 # Do not update if unable to get latest digest\n test \"${running_digest}\" = \"${latest_digest}\"\nfi\n"]'
Debug: Executing: '/bin/sh -c if podman container exists element
then
image_name=$(podman container inspect element --format '{{.ImageName}}')
running_digest=$(podman image inspect ${image_name} --format '{{.Digest}}')
latest_digest=$(skopeo inspect docker://docker.io/vectorim/element-web | /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Digest"]')
[[ $? -ne 0 ]] && latest_digest=$(skopeo inspect --no-creds docker://docker.io/vectorim/element-web | /opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Digest"]')
test -z "${latest_digest}" && exit 0 # Do not update if unable to get latest digest
test "${running_digest}" = "${latest_digest}"
fi
'
Debug: /Stage[main]/Main/Podman::Container[element]/Exec[verify_container_image_element]: 'true' won't be executed because of failed check 'unless'
At that point the removal of the image is launched and succeeds.
When I pull and run the image manually, this script succeeds (meaning the image is the freshest one).
Here is the part of my .pp file regarding podman
include podman
podman::container { 'element':
image => 'docker.io/vectorim/element-web',
}
using v0.5.6 of the module
Thanks for your help.
The error I'm getting:
Notice: /Stage[main]/MyModule::Openxpki/Podman::Pod[openxpki_pod]/Exec[create_pod_openxpki_pod]/returns: time="2022-08-26T14:30:03-04:00" level=error msg="XDG_RUNTIME_DIR directory \"/run/user/\" is not owned by the current user"
Error: 'podman pod create --name 'openxpki_pod' --publish '8443:443'' returned 1 instead of one of [0]
Error: /Stage[main]/MyModule::Openxpki/Podman::Pod[openxpki_pod]/Exec[create_pod_openxpki_pod]/returns: change from 'notrun' to ['0'] failed: 'podman pod create --name 'openxpki_pod' --publish '8443:443'' returned 1 instead of one of [0] (corrective)
This is what I am doing in code:
podman::pod { 'openxpki_pod':
user => openxpki,
flags => {
publish => [
#'8080:80',
'8443:443',
],
},
}
Running the command as the user I want the pod to launch with:
podman pod create --name 'openxpki_pod' --publish '8443:443'
Is valid, and a pod is created.
edit 1: I originally mistook the issue, but pod creation fails regardless.
edit 2: I've actually noticed I missed an important part of the error!
Sample code:
$user = 'foo'
$user_home = "/var/lib/${user}"
group { $user: system => true }
user { $user:
home => $user_home,
shell => '/sbin/nologin',
system => true,
managehome => true,
require => Group[$user]
}
class { 'podman':
nodocker => absent,
podman_docker_pkg_ensure => absent,
manage_subuid => true,
}
podman::rootless { $user: }
It fails with:
Error 500 on SERVER: Server Error: Could not find resource 'File[/var/lib/foo]' in parameter 'require'
I think the problem is lines 22 and 24 of manifests/rootless.pp, because Puppet doesn't know the values at all times.
When the image lives on a private registry, the login credentials must be given to podman::image.
In theory this works via the flags settings, but in reality it fails.
Sample code:
$user = Deferred(FUNCTION,[OPTION])
$pw = Deferred(FUNCTION,[OPTION])
$cred = {
creds => Sensitive(Deferred(sprintf, ['%s:%s', $user, $pw]))
}
podman::image { 'foo':
ensure => present,
image => 'REGISTRY/REPO/IMAGE:TAG',
flags => $cred
}
It fails with:
rejected: parameter 'enumerable' expects an Iterable value, got Sensitive[Deferred] modules/podman/manifests/image.pp, line: 48, column: 29)
Hi there,
I am getting the following error on puppet agent --test
Error: 'podman volume create plausible' returned 125 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Volume[plausible]/Exec[podman_create_volume_plausible]/returns: change from 'notrun' to ['0'] failed: 'podman volume create plausible' returned 125 instead of one of [0] (corrective)
This is the debug output:
Debug: Exec[podman_create_volume_plausible](provider=posix): Executing check 'podman volume inspect plausible'
Debug: Executing with uid=plausible: 'podman volume inspect plausible'
Debug: /Stage[main]/Podman/Podman::Volume[plausible]/Exec[podman_create_volume_plausible]/unless: Error: could not get runtime: error generating default config from memory: cannot mkdir /run/user/0/libpod: mkdir /run/user/0/libpod: permission denied
Debug: Exec[podman_create_volume_plausible](provider=posix): Executing 'podman volume create plausible'
Debug: Executing with uid=plausible: 'podman volume create plausible'
Notice: /Stage[main]/Podman/Podman::Volume[plausible]/Exec[podman_create_volume_plausible]/returns: Error: could not get runtime: error generating default config from memory: cannot mkdir /run/user/0/libpod: mkdir /run/user/0/libpod: permission denied
Error: 'podman volume create plausible' returned 125 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Volume[plausible]/Exec[podman_create_volume_plausible]/returns: change from 'notrun' to ['0'] failed: 'podman volume create plausible' returned 125 instead of one of [0] (corrective)
This is my definition:
podman:
  containers: {}
  pods: {}
  volumes:
    plausible:
      user: plausible
      homedir: "/home/plausible"
      ensure: present
OS is CentOS 8. The user has been defined this way:
accounts:
  group_defaults:
    system: true
  group_list:
    admins: {}
    users: {}
  user_defaults:
    groups:
      - users
    managehome: true
    system: false
  user_list:
    plausible:
      comment: Plausible user
      shell: "/bin/bash"
      groups:
        - users
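The permission-denied path (/run/user/0/libpod) suggests the exec ran as the plausible user while inheriting root's XDG_RUNTIME_DIR. The module's own rootless execs set the environment explicitly; a standalone sketch of a correctly scoped exec, with the uid assumed to be 1001:

```puppet
# Sketch: uid 1001 is an assumption; rootless podman needs XDG_RUNTIME_DIR
# to point at the user's own runtime directory, not /run/user/0.
exec { 'podman_create_volume_plausible':
  command     => 'podman volume create plausible',
  user        => 'plausible',
  environment => [
    'HOME=/home/plausible',
    'XDG_RUNTIME_DIR=/run/user/1001',
  ],
  path        => '/sbin:/usr/sbin:/bin:/usr/bin',
  unless      => 'podman volume inspect plausible',
}
```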
The current implementation of the flags
parameter only permits a single instance of a given flag. For flags like --add-host
or --publish
it is often necessary to specify multiple instances of the flag.
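For reference, the module's later documentation describes array values for repeatable flags; a sketch of what multiple --publish and --add-host instances would look like, with a hypothetical container and hosts:

```puppet
podman::container { 'web':
  image => 'docker.io/library/nginx',  # placeholder image
  flags => {
    publish    => [
      '8080:80',
      '8443:443',
    ],
    'add-host' => [
      'db:10.0.0.5',
      'cache:10.0.0.6',
    ],
  },
}
```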
The docs for the ::podman::manage_subuid
parameter say:
# @param manage_subuid
# Should the module manage the `/etc/subuid` and `/etc/subgid` files (default is true)
However, the implementation of this parameter is:
Boolean $manage_subuid = false
Hey mate, me again.
I've been testing different combinations with this module over and over.
I did get it fully functional yesterday, and I cannot for the life of me figure out what I did (a bunch of uncommitted changes).
I've tried to clean everything up, and now I'm having problems with the code running and trying to remove the systemctl service.
If I have a clean Redhat 8 box and spin it up with the following hiera config:
---
podman::containers:
  primary-solace:
    image: 'solace/solace-pubsub-standard'
    flags:
      publish:
        - '8080:8080'
        - '50000:50000'
        - '8080:8080'
        - '55555:55555'
        - '55443:55443'
        - '55556:55556'
        - '55003:55003'
        - '2222:2222'
        - '8300:8300'
        - '8301:8301'
        - '8302:8302'
        - '8741:8741'
        - '8303:8303'
      env:
        - 'username_admin_globalaccesslevel="admin"'
        - 'username_admin_password="admin"'
      shm-size:
        - '1g'
    service_flags:
      timeout: '60'
It's spitting out:
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[verify_container_flags_primary-solace]/returns: executed successfully
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[verify_container_image_primary-solace]/returns: executed successfully
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]/returns: Failed to stop podman-primary-solace.service: Unit podman-primary-solace.service not loaded.
found: no such containeran/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]/returns: Error: no container with name or ID primary-solace
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]/returns: Error: failed to evict container: "": failed to find container "primary-s found: no such containertainer with name or ID primary-solace
Error: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]: Failed to call refresh: 'systemctl stop podman-primary-solace || podman container stop --time 60 primary-solace
podman container rm --force primary-solace
' returned 1 instead of one of [0]
Error: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_container_primary-solace]: 'systemctl stop podman-primary-solace || podman container stop --time 60 primary-solace
podman container rm --force primary-solace
' returned 1 instead of one of [0]
Notice: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_image_primary-solace]: Dependency Exec[podman_remove_container_primary-solace] has failures: true
Warning: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_remove_image_primary-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_create_primary-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[primary-solace]/Exec[podman_generate_service_primary-solace]: Skipping because of failed dependencies
Warning: /Stage[main]/Podman/Podman::Container[primary-solace]/Service[podman-primary-solace]: Skipping because of failed dependencies
If I butcher the container.pp file I can get it to create the service, but then it obviously won't re-create it:
# @summary manage podman container and register as a systemd service
#
# @param image
# Container registry source of the image being deployed. Required when
# `ensure` is `present` but optional when `ensure` is set to `absent`.
#
# @param user
# Optional user for running rootless containers. For rootless containers,
# the user must also be defined as a puppet resource that includes at least
# 'uid', 'gid', and 'home' attributes.
#
# @param flags
# All flags for the 'podman container create' command are supported via the
# 'flags' hash parameter, using only the long form of the flag name. The
# container name will be set as the resource name (namevar) unless the 'name'
# flag is included in the flags hash. If the flags for a container resource
# are modified the container will be destroyed and re-deployed during the
# next puppet run. This is achieved by storing the complete set of flags as
# a base64 encoded string in a container label named `puppet_resource_flags`
# so it can be compared with the assigned resource state.
# Flags that can be used more than once should be expressed as an array. For
# flags which take no arguments, set the hash value to be undef. In the
# YAML representation you can use `~` or `null` as the value.
#
# @param service_flags
# When a container is created, a systemd unit file for the container service
# is generated using the 'podman generate systemd' command. All flags for the
# command are supported using the 'service_flags' hash parameter, again using
# only the long form of the flag names.
#
# @param command
# Optional command to be used as the container entry point.
#
# @param ensure
# Valid values are 'present' or 'absent'
#
# @param enable
# Status of the automatically generated systemd service for the container.
# Valid values are 'running' or 'stopped'.
#
# @param update
# When `true`, the container will be redeployed when a new container image is
# detected in the container registry. This is done by comparing the digest
# value of the running container image with the digest of the registry image.
# When `false`, the container will only be redeployed when the declared state
# of the puppet resource is changed.
#
# @example
# podman::container { 'jenkins':
# image => 'docker.io/jenkins/jenkins',
# user => 'jenkins',
# flags => {
# publish => [
# '8080:8080',
# '50000:50000',
# ],
# volume => 'jenkins:/var/jenkins_home',
# },
# service_flags => { timeout => '60' },
# }
#
define podman::container (
String $image = '',
String $user = '',
Hash $flags = {},
Hash $service_flags = {},
String $command = '',
String $ensure = 'present',
Boolean $enable = true,
Boolean $update = true,
){
#require podman::install
# Add a label of base64 encoded flags defined for the container resource
# This will be used to determine when the resource state is changed
$flags_base64 = base64('encode', inline_template('<%= @flags.to_s %>')).chomp()
# Add the default name and a custom label using the base64 encoded flags
if has_key($flags, 'label') {
$label = [] + $flags['label'] + "puppet_resource_flags=${flags_base64}"
$no_label = $flags.delete('label')
} else {
$label = "puppet_resource_flags=${flags_base64}"
$no_label = $flags
}
# If a container name is not set, use the Puppet resource name
$merged_flags = merge({ name => $title, label => $label}, $no_label )
$container_name = $merged_flags['name']
# A rootless container will run as the defined user
if $user != '' {
ensure_resource('podman::rootless', $user, {})
$systemctl = 'systemctl --user '
# The handle is used to ensure resources have unique names
$handle = "${user}-${container_name}"
# Set default execution environment for the rootless user
$exec_defaults = {
path => '/sbin:/usr/sbin:/bin:/usr/bin',
environment => [
"HOME=${User[$user]['home']}",
"XDG_RUNTIME_DIR=/run/user/${User[$user]['uid']}",
],
cwd => User[$user]['home'],
user => $user,
}
$requires = [
Podman::Rootless[$user],
Service['systemd-logind'],
]
$service_unit_file ="${User[$user]['home']}/.config/systemd/user/podman-${container_name}.service"
# Reload systemd when service files are updated
ensure_resource('Exec', "podman_systemd_${user}_reload", {
path => '/sbin:/usr/sbin:/bin:/usr/bin',
command => "${systemctl} daemon-reload",
refreshonly => true,
environment => [
"HOME=${User[$user]['home']}",
"XDG_RUNTIME_DIR=/run/user/${User[$user]['uid']}",
],
cwd => User[$user]['home'],
provider => 'shell',
user => $user,
}
)
$_podman_systemd_reload = Exec["podman_systemd_${user}_reload"]
} else {
$systemctl = 'systemctl '
$handle = $container_name
$service_unit_file = "/etc/systemd/system/podman-${container_name}.service"
$exec_defaults = {
path => '/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin',
}
# Reload systemd when service files are updated
ensure_resource('Exec', 'podman_systemd_reload', {
path => '/sbin:/usr/sbin:/bin:/usr/bin',
command => "${systemctl} daemon-reload",
refreshonly => true,
}
)
$requires = []
$_podman_systemd_reload = Exec['podman_systemd_reload']
}
case $ensure {
'present': {
if $image == '' { fail('A source image is required') }
# Detect changes to the defined podman flags and re-deploy if needed
Exec { "verify_container_flags_${handle}":
command => 'true',
provider => 'shell',
unless => @("END"/$L),
if podman container exists ${container_name}
then
saved_resource_flags="\$(podman container inspect ${container_name} \
--format '{{.Config.Labels.puppet_resource_flags}}' | tr -d '\n')"
current_resource_flags="\$(echo '${flags_base64}' | tr -d '\n')"
test "\${saved_resource_flags}" = "\${current_resource_flags}"
fi
|END
# notify => Exec["podman_remove_container_${handle}"],
require => $requires,
* => $exec_defaults,
}
# Re-deploy when $update is true and the container image has been updated
if $update {
Exec { "verify_container_image_${handle}":
command => 'true',
provider => 'shell',
unless => @("END"/$L),
if podman container exists ${container_name}
then
image_name=\$(podman container inspect ${container_name} --format '{{.ImageName}}')
running_digest=\$(podman image inspect \${image_name} --format '{{.Digest}}')
latest_digest=\$(skopeo inspect docker://\${image_name} | \
/opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Digest"]')
[[ $? -ne 0 ]] && latest_digest=\$(skopeo inspect --no-creds docker://\${image_name} | \
/opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Digest"]')
test -z "\${latest_digest}" && exit 0 # Do not update if unable to get latest digest
test "\${running_digest}" = "\${latest_digest}"
fi
|END
# notify => [
# Exec["podman_remove_image_${handle}"],
# Exec["podman_remove_container_${handle}"],
# ],
require => $requires,
* => $exec_defaults,
}
} else {
# Re-deploy when $update is false but the resource image has changed
Exec { "verify_container_image_${handle}":
command => 'true',
provider => 'shell',
unless => @("END"/$L),
if podman container exists ${container_name}
then
running=\$(podman container inspect ${container_name} --format '{{.ImageName}}' | awk -F/ '{print \$NF}')
declared=\$(echo "${image}" | awk -F/ '{print \$NF}')
available=\$(skopeo inspect docker://${image} | \
/opt/puppetlabs/puppet/bin/ruby -rjson -e 'puts (JSON.parse(STDIN.read))["Name"]')
test -z "\${available}" && exit 0 # Do not update if unable to get the new image
test "\${running}" = "\${declared}"
fi
|END
notify => [
Exec["podman_remove_image_${handle}"],
Exec["podman_remove_container_${handle}"],
],
require => $requires,
* => $exec_defaults,
}
}
# Exec { "podman_remove_image_${handle}":
# # Try to remove the image, but exit with success regardless
# provider => 'shell',
# command => "podman rmi ${image} || exit 0",
# refreshonly => true,
# notify => Exec["podman_create_${handle}"],
# require => [ $requires, Exec["podman_remove_container_${handle}"]],
# * => $exec_defaults,
# }
# Exec { "podman_remove_container_${handle}":
# # Try nicely to stop the container, but then insist
# provider => 'shell',
# command => @("END"/L),
# ${systemctl} stop podman-${container_name} || podman container stop --time 60 ${container_name}
# podman container rm --force ${container_name}
# |END
# refreshonly => true,
# notify => Exec["podman_create_${handle}"],
# require => $requires,
# * => $exec_defaults,
# }
# Convert $merged_flags hash to usable command arguments
$_flags = $merged_flags.reduce('') |$mem, $flag| {
if $flag[1] =~ String {
"${mem} --${flag[0]} '${flag[1]}'"
} elsif $flag[1] =~ Undef {
"${mem} --${flag[0]}"
} else {
$dup = $flag[1].reduce('') |$mem2, $value| {
"${mem2} --${flag[0]} '${value}'"
}
"${mem} ${dup}"
}
}
# Convert $service_flags hash to command arguments
$_service_flags = $service_flags.reduce('') |$mem, $flag| {
"${mem} --${flag[0]} '${flag[1]}'"
}
Exec { "podman_create_${handle}":
command => "podman container create ${_flags} ${image} ${command}",
unless => "podman container exists ${container_name}",
notify => Exec["podman_generate_service_${handle}"],
require => $requires,
* => $exec_defaults,
}
if $user != '' {
Exec { "podman_generate_service_${handle}":
command => "podman generate systemd ${_service_flags} ${container_name} > ${service_unit_file}",
refreshonly => true,
notify => Exec["service_podman_${handle}"],
require => $requires,
* => $exec_defaults,
}
# Work-around for managing user systemd services
if $enable { $action = 'start'; $startup = 'enable' }
else { $action = 'stop'; $startup = 'disable' }
Exec { "service_podman_${handle}":
command => @("END"/L),
${systemctl} ${startup} podman-${container_name}.service
${systemctl} ${action} podman-${container_name}.service
|END
unless => @("END"/L),
${systemctl} is-active podman-${container_name}.service && \
${systemctl} is-enabled podman-${container_name}.service
|END
require => $requires,
* => $exec_defaults,
}
}
else {
Exec { "podman_generate_service_${handle}":
path => '/sbin:/usr/sbin:/bin:/usr/bin',
command => "podman generate systemd ${_service_flags} ${container_name} > ${service_unit_file}",
refreshonly => true,
notify => Service["podman-${handle}"],
}
# Configure the container service per parameters
if $enable { $state = 'running'; $startup = 'true' }
else { $state = 'stopped'; $startup = 'false' }
Service { "podman-${handle}":
ensure => $state,
enable => $startup,
}
}
}
'absent': {
Exec { "service_podman_${handle}":
command => @("END"/L),
${systemctl} stop podman-${container_name}
${systemctl} disable podman-${container_name}
|END
onlyif => @("END"/$L),
test "\$(${systemctl} is-active podman-${container_name} 2>&1)" = "active" -o \
"\$(${systemctl} is-enabled podman-${container_name} 2>&1)" = "enabled"
|END
notify => Exec["podman_remove_container_${handle}"],
require => $requires,
* => $exec_defaults,
}
Exec { "podman_remove_container_${handle}":
# Try nicely to stop the container, but then insist
command => "podman container rm --force ${container_name}",
unless => "podman container exists ${container_name}; test $? -eq 1",
notify => Exec["podman_remove_image_${handle}"],
require => $requires,
* => $exec_defaults,
}
Exec { "podman_remove_image_${handle}":
# Try to remove the image, but exit with success regardless
provider => 'shell',
command => "podman rmi ${image} || exit 0",
refreshonly => true,
require => [ $requires, Exec["podman_remove_container_${handle}"]],
* => $exec_defaults,
}
File { $service_unit_file:
ensure => absent,
require => [
$requires,
Exec["service_podman_${handle}"],
],
notify => $_podman_systemd_reload,
}
}
default: {
fail('"ensure" must be "present" or "absent"')
}
}
}
I'm going to continue to play with it, but surely this is something you've come across?
It's doing my head in - thanks for your hard work!
The podman-auto-update.timer can be used to update containers, but the default Puppet service resource can only handle root services. The code needed to handle user services already exists in this module, so it would be a nice feature if the module could also configure the podman-auto-update.timer for rootless containers.
When switching to this module from the docker module, I noticed that I started getting pulls against both my authenticated limit and the non-authenticated limit on each run.
The underlying issue is that /opt/puppetlabs/puppet/bin/ruby is called to parse the output of skopeo inspect, and if that fails the script calls skopeo inspect --no-creds. In an open source installation (CentOS 8 in this case) there is no /opt/puppetlabs/puppet/bin/ruby, so both skopeo calls are made on every run.
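One possible workaround (an assumption on my part, not the module's current code) is to parse skopeo's JSON with the distro's system python3 instead of the Puppet AIO ruby. A minimal sketch, using a canned JSON document in place of real `skopeo inspect` output:

```shell
# Stand-in for: skopeo inspect docker://"${image_name}"
json='{"Digest":"sha256:0123abcd"}'
# Parse the Digest field with the system python3 instead of
# /opt/puppetlabs/puppet/bin/ruby (absent on open source agents).
latest_digest=$(printf '%s' "$json" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["Digest"])')
echo "$latest_digest"   # prints sha256:0123abcd
```

The same one-liner would slot into the `verify_container_image` check wherever the module currently shells out to the vendored ruby, provided python3 is guaranteed on the target platforms.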
It looks like 0.6.1 contains an extra find/replace that breaks the service name for systemd-logind.service. The rename to podman systemd-logind.service introduces a space in the service name, which results in:
Service[podman systemd-logind]: change from 'stopped' to 'running' failed: Execution of 'journalctl -n 50 --since '5 minutes ago' -u podman systemd-logind.service --no-pager' returned 1: Failed to add match 'systemd-logind.service': Invalid argument
When I create everything by parameterizing the class, container creation fails because of a missing network definition.
I think the order of creation should be swapped here.
This is a bit trickier as the pod needs to exist before containers get added, but it would be ideal if the pod unit files were updated after puppet adds a new container to the group.
Coupling that with container updates, the order of operations here gets weird.
When the podman::container class does an update it will try to run:
podman rmi [image]
but this will fail if more than one container uses the same image. The subsequent podman create then uses the existing image to start a new container instead of pulling an updated image, so the update runs again on the next Puppet run.
This leads to Puppet destroying and recreating the containers on each Puppet run.
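One way to avoid the failing `podman rmi` would be to skip image removal while any other container still references the image. This is a hypothetical guard, not the module's code, demonstrated here against a canned container list standing in for live podman output:

```shell
image='docker.io/library/nginx:latest'
# Stand-in for: podman ps --all --format '{{.Image}}'
all_images='docker.io/library/nginx:latest
docker.io/library/nginx:latest'
# Count exact-line matches to find how many containers use the image.
in_use=$(printf '%s\n' "$all_images" | grep -cx "$image")
if [ "$in_use" -gt 1 ]; then
  echo "image shared by ${in_use} containers, skipping rmi"
else
  echo "safe to remove"
fi
```

With live data the stand-in list would simply be replaced by the real `podman ps --all --format '{{.Image}}'` call.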
Something that I've noticed is that the podman::image resource doesn't seem to pull updates for container images. We need to pull the image first using podman::image, as we are using a private repository with credentials.
But at the moment we have no way of having Puppet update containers if we push updated images to our private repository. At the moment, I need to destroy containers/pods and prune the images if I want Puppet to update my containers/pods on next run.
Cheers
Since podman generate systemd
is deprecated, we should add an option to create quadlet podman-systemd units, which were added in Podman v4.4.0, via something like podlet.
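For illustration, here is a minimal quadlet `.container` unit the module could write instead of a generated service file (file name, image, and port are hypothetical; Podman >= 4.4 turns such a unit into a service via its systemd generator at daemon-reload):

```shell
# Write a quadlet .container unit; the quadlet generator builds the
# corresponding systemd service from it on daemon-reload (Podman >= 4.4).
cat > example.container <<'EOF'
[Unit]
Description=Example quadlet-managed container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF
grep -c '^Image=' example.container
```

For rootless users the file would go under ~/.config/containers/systemd/ and for root under /etc/containers/systemd/, after which a daemon-reload regenerates the unit.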
The generated service file contains an invalid line break, and the service fails to start.
Example:
# container-52fc759bdd6b25e3a22cb34d6071cace0adda6ae40dce292867321f4f7aaef9d.service
# autogenerated by Podman 4.2.0
# Tue Mar 21 08:07:05 CET 2023
[Unit]
Description=Podman container-52fc759bdd6b25e3a22cb34d6071cace0adda6ae40dce292867321f4f7aaef9d.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman container run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
-d \
--replace \
--name keycloak \
--label io.containers.autoupdate=registry \
--label puppet_resource_flags=eyJsYWJlbCI9PlsiaW8uY29udGFpbmVycy5hdXRvdXBkYXRlPXJlZ2lzdHJ5
Il0sICJwdWJsaXNoIj0+Ils6OjFdOjgwODA6ODA4MCIsICJlbnYtZmlsZSI9
PiIvdmFyL2xpYi9rZXljbG9hay9lbnYifQ== \
--publish [::1]:8080:8080 \
--env-file /var/lib/keycloak/env quay.io/keycloak/keycloak:21.0.0 start \
--hostname=foo.foo.foo
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
systemd will report this error:
systemd[862]: /var/lib/keycloak/.config/systemd/user/podman-keycloak.service:27: Missing '=', ignoring line.
systemd[862]: /var/lib/keycloak/.config/systemd/user/podman-keycloak.service:31: Unknown key name 'PiIvdmFyL2xpYi9rZXljbG9hay9lbnYifQ' in section 'Service', ignoring
And the start fails with:
2023-03-21T08:07:05+0100 systemd[862]: podman-keycloak.service: Main process exited, code=exited, status=125/n/a
2023-03-21T08:07:05+0100 podman[14918]: Error: error reading CIDFile: open /run/user/990/podman-keycloak.service.ctr-id: no such file or directory
OS: CentOS8 Stream
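The stray line break is consistent with the label value coming from a base64 encoder that wraps its output: GNU `base64` wraps at 76 columns by default, which would split the `puppet_resource_flags` label across lines inside `ExecStart`. A sketch of the difference (my assumption about the cause; requires GNU coreutils for the `-w0` flag):

```shell
flags='{"label"=>["io.containers.autoupdate=registry"],"publish"=>"[::1]:8080:8080","env-file"=>"/var/lib/keycloak/env"}'
# Default GNU base64 wraps long output at 76 columns...
wrapped=$(printf '%s' "$flags" | base64)
# ...while -w0 (or piping through `tr -d '\n'`) keeps it on one line.
single=$(printf '%s' "$flags" | base64 -w0)
# Round-trip to confirm the unwrapped value decodes cleanly.
printf '%s' "$single" | base64 -d
```

Encoding the label with `-w0` (or stripping newlines before writing the unit) would keep the `--label puppet_resource_flags=...` argument on a single line.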
When I do it via the command line, it works:
sudo -u <NonRootUser> -- podman container create --name 'foo' ....
But doing the same with Puppet fails.
podman::container { 'foo':
image => ...,
user => <NonRootUser>,
flags => {
....
}
error:
Notice: /Stage[main]/...::Container[foo]/Exec[podman_NonRootUser_Container]/returns: time="2022-01-21T10:30:36+01:00" level=error msg="XDG_RUNTIME_DIR directory "/run/user/" is not owned by the current user"
I think the unless part of podman/manifests/container.pp line 273 also fails, because
sudo -u <NonRootUser> -- podman container exists 'foo'
will return 0.
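The error message shows XDG_RUNTIME_DIR resolving to "/run/user/" with no UID appended, which suggests the user's uid was empty when the exec environment was built. A sketch of what a correct rootless environment looks like (paths assume a systemd host with per-user runtime directories):

```shell
# Resolve the invoking user's uid and point podman at the matching
# per-user runtime directory, e.g. /run/user/1000.
uid=$(id -u)
export XDG_RUNTIME_DIR="/run/user/${uid}"
echo "$XDG_RUNTIME_DIR"
```

In the module's exec resources this corresponds to interpolating `User[$user]['uid']` into the environment, so an empty uid fact there would reproduce the reported "/run/user/" value.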
Having defines in place for podman secrets would be helpful.
Hi, I have this code
include podman
$user = 'podman'
$user_home = "/var/lib/${user}"
group { $user:
ensure => present,
gid => '60000'
}
user { $user:
ensure => present,
shell => '/bin/bash',
uid => '60000',
gid => '60000',
home => $user_home,
require => Group[$user]
}
file { 'home_user':
ensure => directory,
path => $user_home,
mode => '0755',
owner => $user,
group => $user,
}
podman::rootless { $user: }
podman::subuid { $user:
subuid => 255666,
count => 65535,
}
podman::subgid { $user:
subgid => 255666,
count => 65535,
}
podman::network { 'mynetwork':
user => $user,
driver => 'bridge',
internal => true,
}
But when I run puppet, I have an error with this description:
Error: /Stage[main]/Podman_all::Configure/Podman::Network[mynetwork]/Exec[podman_create_network_mynetwork]/returns: change from 'notrun' to ['0'] failed: 'podman network create mynetwork --driver bridge --internal
' returned 1 instead of one of [0]
Please, can you help me?
Thanks.
Using 0.5.0 on Debian Bullseye with Puppet 5.5, I get:
Debug: Exec[loginctl_linger_weblate](provider=shell): Executing check '["/bin/sh", "-c", "test $(loginctl show-user weblate --property=Linger) == 'Linger=yes'"]'
Debug: Executing: '/bin/sh -c test $(loginctl show-user weblate --property=Linger) == 'Linger=yes''
Debug: /Stage[main]/Profile::Weblate::Podman/Podman::Container[weblate]/Podman::Rootless[weblate]/Exec[loginctl_linger_weblate]/unless: /bin/sh: 1: test: Linger=yes: unexpected operator
Debug: Exec[loginctl_linger_weblate](provider=shell): Executing '["/bin/sh", "-c", "loginctl enable-linger weblate"]'
Debug: Executing: '/bin/sh -c loginctl enable-linger weblate'
I think that is because /bin/sh points to /bin/dash by default on Debian, which breaks the check. If I run sudo dpkg-reconfigure dash and answer "No" so that /bin/sh points to /bin/bash instead, the issue is fixed.
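The failure comes from the `==` operator: dash's `test` builtin only accepts the POSIX single `=` for string comparison, while `==` is a bashism. A portable form of the linger check (the value here is a stand-in for the real loginctl output):

```shell
# POSIX `test` uses a single `=` for string comparison; `==` is
# rejected by dash. Quoting the substitution also guards against
# empty command output.
value='Linger=yes'   # stand-in for: loginctl show-user weblate --property=Linger
if test "$value" = 'Linger=yes'; then
  echo "linger enabled"
fi
```

Using `=` (and quoting the command substitution) makes the unless check work under both dash and bash, without reconfiguring /bin/sh.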
Thanks for the nice module!