
puppetlabs-patching_as_code's Introduction

puppetlabs-patching_as_code

This module is supported by the Puppet community. We expect it to be of the same high quality as our own Supported modules, but it does not qualify for Puppet Support plans. See the CODEOWNERS file for usernames of the maintainers.

Description

This module provides automatic patch management for Linux, Windows and Chocolatey through desired state code.

Setup

What this module affects

This module will leverage the fact data provided by either the albatrossflavour/os_patching or PE 2019.8's builtin pe_patch module for OS patches and Linux application patches (when based on Linux package manager repositories). It is also able to detect outdated Chocolatey packages. Once available patches are known via the above facts, the module will install the patches during the configured patch window.

  • For Linux operating systems, this happens through the native Package resource in Puppet.
  • For Windows operating systems, this happens through the patching_as_code::kb class, which comes with this module.
  • For Chocolatey packages on Windows, this happens through the native Package resource in Puppet.
  • By default, a reboot is performed before patching when a pre-existing pending reboot is detected, as well as at the end of a patch run when one or more patches caused an OS reboot to become pending. You can change this behavior though, to either always reboot or never reboot.
  • You can define pre-patch, post-patch and pre-reboot commands for patching runs. We recommend that for Windows, you use Powershell-based commands for these. Specifically for pre-reboot commands on Windows, you must use Powershell-based commands.
  • This module will report the details of the last successful patch run in a patching_as_code fact (see the example after this list).
  • This module will report the configuration for each node in a patching_as_code_config fact.
  • This module will report outdated Chocolatey packages for each node in a patching_as_code_choco fact.
  • You can define an alternate patch schedule for high priority patches, to allow patches on the high_priority_list to be installed on a different (often faster) patch cycle.
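
The facts reported by this module (patching_as_code, patching_as_code_config, patching_as_code_choco) can be inspected directly on a node, for example (output varies per node):

facter -p patching_as_code_config
facter -p patching_as_code_choco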

Setup Requirements

To start with patching_as_code, complete the following prerequisites:

  • Ensure this module and its dependencies are added to your control repo's Puppetfile (see the sketch after this list).
  • If you are not running Puppet Enterprise 2019.8.0 or higher, you'll also need to add the albatrossflavour/os_patching module to your control repo's Puppetfile.
  • If you are running Puppet Enterprise 2019.8.0 or higher, the built-in pe_patch module will be used by default. You can however force the use of the os_patching module if so desired, by setting the optional patching_as_code::use_pe_patch parameter to false. To prevent duplicate declarations of the pe_patch class in PE 2019.8.0+, this module defaults to NOT declaring the pe_patch class; this allows you to use the builtin "PE Patch Management" classification groups to classify pe_patch. If you would instead like this module to control the classification of pe_patch for you (and sync the patch_group parameter, which is recommended), set the patching_as_code::classify_pe_patch parameter to true.
  • For Linux operating systems, ensure your package managers are pointing to repositories that publish new package versions as needed.
  • For Windows operating systems, ensure Windows Update is configured to check with a valid update server (either WSUS, Windows Update or Microsoft Update). If you want, you can use the puppetlabs/wsus_client module to manage the Windows Update configuration.
  • For Chocolatey packages, ensure Chocolatey is pointing to repositories that publish new package versions as needed.
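
As a sketch, the corresponding Puppetfile entries could look like the following (version pins illustrative; use the releases you have validated):

mod 'puppetlabs-patching_as_code', '1.1.7'
mod 'albatrossflavour-os_patching', '0.17.0' # only needed when not on PE 2019.8.0+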

Beginning with patching_as_code

To get started with the patching_as_code module, include it in your manifest:

include patching_as_code

or

class {'patching_as_code':}

This enables automatic detection of available OS patches, and puts all the nodes in the primary patch group. By default this will patch your systems on the 3rd Friday of the month, between 22:00 and midnight (00:00), and perform a reboot if necessary. On PE 2019.8 or newer this will not automatically classify the pe_patch class, so that you can control this through PE's builtin "PE Patch Management" node groups.

To allow patching_as_code to control & declare the pe_patch class, change the declaration to:

class {'patching_as_code':
  classify_pe_patch => true
}

This will change the behavior to also declare the pe_patch class, and match its patch_group parameter with this module's patch_group parameter. In this scenario, make sure you do not classify your nodes with pe_patch via the "PE Patch Management" node groups or other means.

To allow patching_as_code to control & declare the pe_patch class, and also patch Chocolatey packages, set the declaration to:

class {'patching_as_code':
  classify_pe_patch => true,
  patch_choco       => true
}

Usage

To control which patch group(s) a node belongs to, you need to set the patch_group parameter of the class. It is highly recommended to use Hiera to set the correct value for each node, for example:

patching_as_code::patch_group: early

The module provides 6 patch groups out of the box:

weekly:    patches each Thursday of the month, between 09:00 and 11:00, performs a reboot if needed
testing:   patches every 2nd Thursday of the month, between 07:00 and 09:00, performs a reboot if needed
early:     patches every 3rd Monday   of the month, between 20:00 and 22:00, performs a reboot if needed
primary:   patches every 3rd Friday   of the month, between 22:00 and 00:00, performs a reboot if needed
secondary: patches every 3rd Saturday of the month, between 22:00 and 00:00, performs a reboot if needed
late:      patches every 4th Saturday of the month, between 22:00 and 00:00, performs a reboot if needed

There are also 2 special built-in patch groups:

always:    patches immediately when a patch is available, can patch in any agent run, performs a reboot if needed
never:     never performs any patching and does not reboot

If you want to assign a node to multiple patch groups, specify an array of values in Hiera:

patching_as_code::patch_group:
  - testing
  - early

or, in flow style:

patching_as_code::patch_group: [ testing, early ]

Note: if you assign a node to multiple patch groups, the value for the patch group provided to pe_patch/os_patching will be a space-separated list of the assigned patch groups. This is because pe_patch/os_patching do not natively support multiple patch groups, so we work around this by converting our list to a single string that pe_patch/os_patching can work with. This is purely cosmetic and does not affect the functionality of either solution.
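
For example, a node assigned to both testing and early will hand pe_patch/os_patching the single combined value, which shows up in those modules' facts roughly as:

patch_window => "testing early"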

When using a local apply for iterative development, the default fact_upload => true for pe_patch or os_patching may be problematic. If so, you can set fact_upload => false for patching_as_code to temporarily disable this behavior.
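
A minimal sketch of temporarily disabling the fact upload during local development:

class {'patching_as_code':
  fact_upload => false,
}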

Customizing the patch groups

You can customize the patch groups to whatever you need. To do so, simply copy the patching_as_code::patch_schedule hash from data/common.yaml in this module and paste it into your own Hiera store (we recommend placing it in your Hiera's own common.yaml). This Hiera value will then override the defaults that the module provides. Customize the hash to your needs.

The hash has the following structure:

patching_as_code::patch_schedule:
  <name of patch group>:
    day_of_week:   <day to patch systems>
    count_of_week: <the Nth time day_of_week occurs in the month>
    hours:         <start of patch window> - <end of patch window>
    max_runs:      <max number of times that Puppet can perform patching within the patch window>
    reboot:        always | never | ifneeded

For example, say you want to have the following 3 patch groups:

group1: patches every 2nd Sunday of the month, between 10:00 and 11:00, max 1 time, reboots if needed
group2: patches every 3rd and 4th Monday of the month, between 20:00 and 22:00, max 3 times, does not reboot
group3: patches every day in the 3rd week of the month, between 18:00 and 20:00, max 4 times, always reboots

then define the hash as follows:

patching_as_code::patch_schedule:
  group1:
    day_of_week: Sunday
    count_of_week: 2
    hours: 10:00 - 11:00
    max_runs: 1
    reboot: ifneeded
  group2:
    day_of_week: Monday
    count_of_week: [3,4]
    hours: 20:00 - 22:00
    max_runs: 3
    reboot: never
  group3:
    day_of_week: Any
    count_of_week: 3
    hours: 18:00 - 20:00
    max_runs: 4
    reboot: always

Controlling which patches get installed

If you need to limit which patches can get installed, use the blocklist/allowlist capabilities. This is best done through Hiera, by defining array values for patching_as_code::blocklist and/or patching_as_code::allowlist for Windows Updates and Linux packages. For Chocolatey packages, the separate Hiera values patching_as_code::blocklist_choco and/or patching_as_code::allowlist_choco can be set.

To prevent KB2881685 and the 7zip Chocolatey package from getting installed/updated on Windows:

patching_as_code::blocklist:
  - KB2881685
patching_as_code::blocklist_choco:
  - 7zip

To only allow the patching of a specific set of 3 Linux packages:

patching_as_code::allowlist:
  - grafana
  - redis
  - nano

Allow lists and block lists can be combined; in that case the list of available updates first gets reduced to what is allowed by the allowlist, and then gets further reduced by any blocklisted updates.
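
As a sketch combining both lists (package names illustrative), the following first limits patching to three packages via the allowlist, then blocks one of them again via the blocklist, leaving only grafana and nano eligible:

patching_as_code::allowlist:
  - grafana
  - redis
  - nano
patching_as_code::blocklist:
  - redis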

Setting a High Priority patch schedule and list

If you would like to install certain patches on a different, often faster, schedule compared to regular patches, you can configure this in the module. This requires specifying patches for the patching_as_code::high_priority_list and/or patching_as_code::high_priority_list_choco values in Hiera, and setting a patching_as_code::high_priority_patch_group to associate one of the patch schedules to this list.

For example, to allow the Microsoft Defender definition update and 7zip Chocolatey package to always be installed immediately:

patching_as_code::high_priority_patch_group: always
patching_as_code::high_priority_list:
  - KB4052623
patching_as_code::high_priority_list_choco:
  - 7zip

Note that if you want to prevent any reboots from happening for your high priority runs, you should create a custom patch group that sets the reboot parameter to never, and use that group for the patching_as_code::high_priority_patch_group parameter.
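
A sketch of such a setup, using an illustrative group name and schedule values (structure as described under "Customizing the patch groups"):

patching_as_code::patch_schedule:
  urgent_no_reboot:
    day_of_week: Any
    count_of_week: [1, 2, 3, 4, 5]
    hours: 00:00 - 23:59
    max_runs: 6
    reboot: never
patching_as_code::high_priority_patch_group: urgent_no_reboot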

Compatibility with puppetlabs/change_window

If you leverage the puppetlabs/change_window module to define custom change windows and want to use that module in combination with the High Priority patch window support in this module, you should leverage the high_priority_only parameter of the patching_as_code class to get the correct behavior. In this case, your logic should look something like the following:

$in_patch_window = Boolean(change_window::change_window($tz, $type, $window_wday, $window_time, $window_week, $window_month))

if $in_patch_window {
  class {'patching_as_code':
    high_priority_only => false,
  }
} else {
  class {'patching_as_code':
    high_priority_only => true,
  }
}

This will allow patching_as_code to keep patch information up to date outside of the change window(s) defined by puppetlabs/change_window, and only perform regular patch runs when inside those change window(s). If you don't put any patches on the high_priority_list, running with high_priority_only => true will cause nothing to happen. Conversely, if you do need a high priority patch to be deployed, running with high_priority_only => true will allow those high priority patches to be installed. Use the patch schedule capabilities of patching_as_code to control when high priority patches are allowed to be installed, as well as whether reboots are allowed to happen at all.

To assist with the use case of combining with puppetlabs/change_window, the high_priority_only => true setting, when used with a patch schedule that allows reboots, will skip acting on pre-existing pending OS reboots at the start of the patch run. This is to ensure a reboot only occurs after patching and only when at least 1 high priority patch was installed. No changes are made to the system this way unless absolutely necessary because of a high priority patch.

Defining situations when patching needs to be skipped

There could be situations where you don't want patching to occur if certain conditions are met. This module supports two such situations:

  • A specific process is running that must not be interrupted by patching
  • The node to be patched is currently connected via a metered link (Windows only)

Managing unsafe processes for patching

You can define a list of unsafe processes which, if any are found to be active on the node, should cause patching to be skipped. This is best done through Hiera, by defining an array value for patching_as_code::unsafe_process_list.

To skip patching if application1 or application2 is among the active processes:

patching_as_code::unsafe_process_list:
  - application1
  - application2

This works on both Linux and Windows, and the matching is done case-insensitively. If one process from the unsafe_process_list is found as an active process, patching will be skipped.

If you need to match on a specific process including its arguments, prepend the entry with {full}:

patching_as_code::unsafe_process_list:
  - application1
  - '{full} /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers'

You can have whitespace between {full} and the process value for the sake of readability; it will be stripped automatically before the matching happens.

Managing patching over metered links (Windows only)

By default, this module will not perform patching over metered links (e.g. 3G/4G connections). You can control this behavior through the patch_on_metered_links parameter. To force patching to occur even over metered links, either define this value in Hiera:

patching_as_code::patch_on_metered_links: true

or set this parameter as part of calling the class:

class {'patching_as_code':
  patch_on_metered_links => true
}

Defining pre/post-patching and pre-reboot commands

You can control additional commands that get executed at specific times, to facilitate the patch run. For example, you may want to shut down specific applications before patching, or drain a Kubernetes node before rebooting. The order of operations is as follows:

  1. If reboots are enabled, check for pending reboots and reboot system immediately if a pending reboot is found
  2. Run pre-patching commands
  3. Install patches
  4. Run post-patching commands
  5. If reboots are enabled, run pre-reboot commands (if a reboot is pending, or when reboots are set to always)
  6. If reboots are enabled, reboot system (if a reboot is pending, or when reboots are set to always)

To define the pre/post-patching and pre-reboot commands, you need to create hashes in Hiera. The commands will be executed as Exec resources, and you can use any of the allowed attributes for that resource (just don't use metaparameters). There are 3 hashes you can define:

patching_as_code::pre_patch_commands
patching_as_code::post_patch_commands
patching_as_code::pre_reboot_commands

It's best to define this in Hiera, so that the commands can be tailored to individual nodes or groups of nodes. A hash for a command (let's use pre-reboot as an example) looks like this in Hiera:

patching_as_code::pre_reboot_commands:
  prep k8s for reboot:
    command: /usr/bin/kubectl drain k8s-3.company.local --ignore-daemonsets --delete-local-data

Here's another example, this time for a pre-patch powershell command on Windows:

patching_as_code::pre_patch_commands:
  shutdown SQL server:
    command: Stop-Service MSSQLSERVER -Force
    provider: powershell

As you can see, it's just like defining Exec resources.

Note that specifically for patching_as_code::pre_reboot_commands, the provider:, onlyif: and unless: parameters will be ignored, as these are overwritten by the internal logic to detect pending reboots. On Linux the provider: is forced to posix, on Windows it is forced to powershell.

Limitations

This solution initiates patching whenever an agent run occurs inside the patch window. On Windows, patch runs for Cumulative Updates can take a long time, so you may want to tune the hours of your patch windows to account for a patch run getting started near the end of the window and still taking a significant amount of time.

puppetlabs-patching_as_code's People

Contributors

binford2k, git-jfontanel, github-actions[bot], jcpunk, kennyb-222, kreeuwijk, martyewings, polaricentropy, prolixalias, robkae


puppetlabs-patching_as_code's Issues

Incorrect Code Manager/r10k declaration

Describe the Bug

The module won't load when the Puppetfile declaration is inserted as presented

Expected Behavior

The line can be inserted into a Puppetfile and the module will be present

Steps to Reproduce

Insert the line into a Puppetfile and observe that the class is not available in the PE console

Additional Context

Single quotes need adding; this fixes the issue

Reboots unreliable - `refreshonly => true`

I've been working on standing up patching_as_code on the latest community version of puppet (7) and am using version 7.7.0 of the agent. I am using this with the wsus_client module (checking for updates hourly) and the os_patching module.

My target client systems are all windows.

Both wsus_client and os_patching appear to behave as expected (although os_patching only refreshes twice a day as part of the scheduled task); however, reboots invoked by patching_as_code have proven unreliable. I was able to obtain the expected behavior by commenting out refreshonly => true on line 327 of /etc/puppetlabs/code/environments/production/modules/patching_as_code/manifests/init.pp, but I am not certain why this is. When running puppet agent --debug as NT AUTHORITY\SYSTEM, I noticed failure lines related to this setting.

This issue is just to create visibility, hopefully get an understanding of why this issue happens and this fix works, and to provide any other confused people with a workaround.

Thanks!

Puppet Unknown variable: `reboot`

Describe the Bug

There appears to be no value set to $reboot under one path.

[puppetserver] Puppet Unknown variable: 'reboot'. (file: /etc/puppetlabs/code/environments/production/r10k/patching_as_code/functions/process_patch_groups.pp, line: 75, column: 33)

Expected Behavior

Variables should be defined before use

Steps to Reproduce

Steps to reproduce the behavior:

  1. Define a custom group
  2. run puppet on "not patch day"
  3. review server logs

Environment

  • Version 1.1.2
  • Platform RHEL8

Additional Context

I think an else condition is needed at
https://github.com/puppetlabs/puppetlabs-patching_as_code/blob/v1.1.2/functions/process_patch_groups.pp#L41

but I'm unsure what the value of $reboot should be under that branch.

reboot only triggered from interactive run

Describe the Bug

When puppet runs in daemon mode, my system doesn't trigger any pending reboots. However, when run by hand it does.

Expected Behavior

The puppet service should trigger reboots in the same way as an interactive run

Steps to Reproduce

Steps to reproduce the behavior:

  1. Set up a system with a scheduled reboot
  2. Run puppet by hand; note the triggered reboot
  3. Cancel the reboot
  4. Run the puppet service; no reboot is triggered

Environment

  • Puppet Version: 7.19.0
  • Module Version: 1.1.6
  • Platform: CentOS Stream 9

Additional Context

Here are some of the relevant debugging bits:

[root@slam-puppet02 ~]# /bin/sh /opt/puppetlabs/puppet/cache/lib/patching_as_code/pending_reboot.sh
true
[root@slam-puppet02 ~]# facter -p puppet_vardir
/opt/puppetlabs/puppet/cache
[root@slam-puppet02 ~]# cat /opt/puppetlabs/facter/facts.d/patching_configuration.json | jq .
{
  "patching_as_code_config": {
    "allowlist": [],
    "blocklist": [],
    "high_priority_list": [],
    "allowlist_choco": [],
    "blocklist_choco": [],
    "high_priority_list_choco": [],
    "enable_patching": true,
    "patch_fact": "os_patching",
    "patch_group": [
      "debug"
    ],
    "patch_schedule": {
      "debug": {
        "day_of_week": "Any",
        "count_of_week": [
          1,
          2,
          3,
          4,
          5
        ],
        "hours": "00:00 - 23:00",
        "max_runs": 9001,
        "reboot": "ifneeded"
      }
    },
    "high_priority_patch_group": "never",
    "post_patch_commands": {
      "some_command_returns_0_after_patching": {
        "path": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/puppetlabs/bin:/opt/slam/bin",
        "command": "/bin/true"
      }
    },
    "pre_patch_commands": {
      "ensure_cache_is_up_to_date": {
        "path": "/usr/bin",
        "command": "/usr/bin/dnf clean expire-cache",
        "onlyif": "/usr/bin/test -e /usr/bin/dnf"
      }
    },
    "pre_reboot_commands": {
      "some_command_returns_0_if_ok_to_patch": {
        "path": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/puppetlabs/bin:/opt/slam/bin",
        "command": "/bin/true"
      }
    },
    "patch_on_metered_links": false,
    "security_only": false,
    "patch_choco": false,
    "unsafe_process_list": []
  }
}
[root@slam-puppet02 ~]# facter -p os_patching
{
  blackouts => {
  },
  block_patching_on_warnings => "false",
  blocked => false,
  blocked_reasons => [

  ],
  last_run => {
  },
  missing_update_kbs => [

  ],
  package_update_count => 0,
  package_updates => [

  ],
  patch_window => "debug",
  pinned_packages => [

  ],
  reboot_override => "default",
  reboots => {
    app_restart_required => true,
    apps_needing_restart => {
      1 => "/usr/lib/systemd/systemd --system --deserialize 28",
      108860 => "/usr/lib/systemd/systemd-logind",
      111027 => "/usr/bin/conmon --api-version 1 -c a7db26200a17c971e578c82b717340e3bdfc6ee5b5c9a872f449925afde752f1 -u a7db26200a17c971e578c82b717340e3bdfc6ee5b5c9a872f449925afde752f1 -r /usr/bin/crun -b /home/puppet/.local/share/containers/storage/overlay-containers/a7db26200a17c971e578c82b717340e3bdfc6ee5b5c9a872f449925afde752f1/userdata -p /run/user/47732/containers/overlay-containers/a7db26200a17c971e578c82b717340e3bdfc6ee5b5c9a872f449925afde752f1/userdata/pidfile -n puppet-puppetdb --exit-dir /run/user/47732/libpod/tmp/exits --full-attach -s -l journald --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/47732/containers/overlay-containers/a7db26200a17c971e578c82b717340e3bdfc6ee5b5c9a872f449925afde752f1/userdata/oci-log --conmon-pidfile /run/user/47732/containers/overlay-containers/a7db26200a17c971e578c82b717340e3bdfc6ee5b5c9a872f449925afde752f1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/puppet/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/47732/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/47732/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/puppet/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg a7db26200a17c971e578c82b717340e3bdfc6ee5b5c9a872f449925afde752f1",
      1325 => "/sbin/auditd",
      1364 => "/usr/bin/dbus-broker-launch --scope system --audit",
      1365 => "dbus-broker --log 4 --controller 9 --machine-id 1403784f2bd741868e2192f0965b09f8 --max-bytes 536870912 --max-fds 4096 --max-matches 131072 --audit",
      1375 => "/usr/sbin/chronyd -F 2",
      1381 => "/usr/sbin/ledmon --foreground",
      1385 => "/usr/libexec/low-memory-monitor",
      1393 => "/usr/lib/polkit-1/polkitd --no-debug",
      1402 => "/usr/sbin/rasdaemon -f -r",
      1405 => "/usr/sbin/rsyslogd -n",
      1438 => "/usr/sbin/smartd -n -q never --capabilities",
      1476 => "/usr/lib/systemd/systemd-machined",
      1497 => "/usr/bin/python3 -s /usr/sbin/firewalld --nofork --nopid",
      1507 => "/usr/sbin/NetworkManager --no-daemon",
      175264 => "/usr/libexec/udisks2/udisksd",
      1968332 => "/usr/libexec/upowerd",
      2037 => "/sbin/agetty -o -p -- \u --noclear - linux",
      2054 => "/sbin/agetty -o -p -- \u --keep-baud 115200,57600,38400,9600 - vt220",
      2212 => "/usr/lib/systemd/systemd --user",
      2213602 => "/usr/lib/systemd/systemd-journald",
      2219 => "(sd-pam)",
      2471 => "catatonit -P",
      2472577 => "/opt/puppetlabs/puppet/bin/ruby /opt/puppetlabs/puppet/bin/puppet agent --no-daemonize",
      2547 => "/usr/bin/dbus-broker-launch --scope user",
      2548 => "dbus-broker --log 4 --controller 9 --machine-id 1403784f2bd741868e2192f0965b09f8 --max-bytes 100000000000000 --max-fds 25000000000000 --max-matches 5000000000",
      3014959 => "/usr/bin/python3 -Es /usr/sbin/tuned -l -P",
      3017154 => "/usr/lib/systemd/systemd-oomd",
      3017156 => "/usr/lib/systemd/systemd-udevd",
      3052078 => "nginx: master process /usr/sbin/nginx",
      3052079 => "nginx: worker process",
      3052080 => "nginx: worker process",
      3052081 => "nginx: worker process",
      3052082 => "nginx: worker process",
      3052083 => "nginx: worker process",
      3052084 => "nginx: worker process",
      3052085 => "nginx: worker process",
      3052086 => "nginx: worker process",
      3052087 => "nginx: worker process",
      3052088 => "nginx: worker process",
      3052089 => "nginx: worker process",
      3052090 => "nginx: worker process",
      3052091 => "nginx: worker process",
      3052092 => "nginx: worker process",
      3052093 => "nginx: worker process",
      3052094 => "nginx: worker process",
      3052095 => "nginx: worker process",
      3052096 => "nginx: worker process",
      3052097 => "nginx: worker process",
      3052098 => "nginx: worker process",
      3052099 => "nginx: worker process",
      3052100 => "nginx: worker process",
      3052101 => "nginx: worker process",
      3052102 => "nginx: worker process",
      3058485 => "/usr/libexec/postfix/master -w",
      3058507 => "qmgr -l -t fifo -u",
      49516 => "rootlessport",
      49522 => "rootlessport-child",
      49539 => "/usr/bin/conmon --api-version 1 -c 8177c3271ba2266b4f92f30263db68aa41006cdff9ca2099a96bf9355b8368fe -u 8177c3271ba2266b4f92f30263db68aa41006cdff9ca2099a96bf9355b8368fe -r /usr/bin/crun -b /home/puppet/.local/share/containers/storage/overlay-containers/8177c3271ba2266b4f92f30263db68aa41006cdff9ca2099a96bf9355b8368fe/userdata -p /run/user/47732/containers/overlay-containers/8177c3271ba2266b4f92f30263db68aa41006cdff9ca2099a96bf9355b8368fe/userdata/pidfile -n puppet-pod-infra --exit-dir /run/user/47732/libpod/tmp/exits --full-attach -s -l journald --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/47732/containers/overlay-containers/8177c3271ba2266b4f92f30263db68aa41006cdff9ca2099a96bf9355b8368fe/userdata/oci-log --conmon-pidfile /run/user/47732/containers/overlay-containers/8177c3271ba2266b4f92f30263db68aa41006cdff9ca2099a96bf9355b8368fe/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/puppet/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/47732/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/47732/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/puppet/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 8177c3271ba2266b4f92f30263db68aa41006cdff9ca2099a96bf9355b8368fe",
      680069 => "/usr/bin/conmon --api-version 1 -c 9fa4cc991ad1e7fa72b03e170f0246ab7d3975e2f6b1edb31a18df4db1d85b61 -u 9fa4cc991ad1e7fa72b03e170f0246ab7d3975e2f6b1edb31a18df4db1d85b61 -r /usr/bin/crun -b /home/puppet/.local/share/containers/storage/overlay-containers/9fa4cc991ad1e7fa72b03e170f0246ab7d3975e2f6b1edb31a18df4db1d85b61/userdata -p /run/user/47732/containers/overlay-containers/9fa4cc991ad1e7fa72b03e170f0246ab7d3975e2f6b1edb31a18df4db1d85b61/userdata/pidfile -n puppet-puppetboard --exit-dir /run/user/47732/libpod/tmp/exits --full-attach -s -l journald --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/47732/containers/overlay-containers/9fa4cc991ad1e7fa72b03e170f0246ab7d3975e2f6b1edb31a18df4db1d85b61/userdata/oci-log --conmon-pidfile /run/user/47732/containers/overlay-containers/9fa4cc991ad1e7fa72b03e170f0246ab7d3975e2f6b1edb31a18df4db1d85b61/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/puppet/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/47732/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/47732/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/puppet/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 9fa4cc991ad1e7fa72b03e170f0246ab7d3975e2f6b1edb31a18df4db1d85b61",
      Updating Subscription Management repositories. => null
    },
    reboot_required => true
  },
  security_package_update_count => 0,
  security_package_updates => [

  ],
  warnings => {
  }
}

'max_runs' (max number of times that Puppet can perform patching within the patch window) functionality is not working

For some reason, the 'max_runs' option within the patch windows is being ignored. All of our patch groups have max_runs set to 1, but systems will continue to be patched if patches are still available after the 'pe_patch_fact_generation.sh' script is executed at the end of each patch run. Example below.

2nd_thu_20_22_prod_nr:
  day_of_week: Thursday
  count_of_week: 2
  hours: 20:00 - 22:00
  max_runs: 1
  reboot: never

First puppet run within the patch window – the packages available for patching were determined from the cron job (pe_patch_fact_generation.sh), which ran on March 27th:

Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[bpftool.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[diffutils.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[kernel.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[kernel-tools.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[kernel-tools-libs.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[nss.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[nss-sysinit.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[nss-tools.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[openssl.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[openssl-libs.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:54 itf-sannav puppet-agent[89855]: Package[zlib.x86_64] (unmanaged) will be updated by Patching_as_code
Apr 13 20:00:55 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code/Exec[Patching as Code - Before patching - pre patch default commands]/returns) executed successfully
Apr 13 20:00:56 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Exec[Patching as Code - Clean Cache]/returns) executed successfully
Apr 13 20:01:40 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[bpftool.x86_64]/ensure) ensure changed '3.10.0-1160.83.1.el7' to '0:3.10.0-1160.88.1.el7'
Apr 13 20:01:46 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[diffutils.x86_64]/ensure) ensure changed '3.3-5.el7' to '0:3.3-6.el7_9'
Apr 13 20:03:38 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[kernel.x86_64]/ensure) ensure changed '3.10.0-1160.71.1.el7; 3.10.0-1160.76.1.el7; 3.10.0-1160.80.1.el7; 3.10.0-1160.81.1.el7; 3.10.0-1160.83.1.el7' to '0:3.10.0-1160.88.1.el7'
Apr 13 20:03:47 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[kernel-tools.x86_64]/ensure) ensure changed '3.10.0-1160.83.1.el7' to '0:3.10.0-1160.88.1.el7'
Apr 13 20:03:51 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[kernel-tools-libs.x86_64]/ensure) ensure changed '3.10.0-1160.88.1.el7' to '0:3.10.0-1160.88.1.el7'
Apr 13 20:03:57 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[nss.x86_64]/ensure) ensure changed '3.79.0-4.el7_9' to '0:3.79.0-5.el7_9'
Apr 13 20:04:00 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[nss-sysinit.x86_64]/ensure) ensure changed '3.79.0-5.el7_9' to '0:3.79.0-5.el7_9'
Apr 13 20:04:03 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[nss-tools.x86_64]/ensure) ensure changed '3.79.0-5.el7_9' to '0:3.79.0-5.el7_9'
Apr 13 20:04:10 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[openssl.x86_64]/ensure) ensure changed '1:1.0.2k-25.el7_9' to '1:1.0.2k-26.el7_9'
Apr 13 20:04:16 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[zlib.x86_64]/ensure) ensure changed '1.2.7-20.el7_9' to '0:1.2.7-21.el7_9'
Apr 13 20:04:16 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code/File[Patching as Code - Save Patch Run Info]/ensure) defined content as '{sha256}363495f191055656bfb3ca11c9fe561d9497a656117dba2cf3f465bff65f4fd8'
Apr 13 20:04:16 itf-sannav puppet-agent[89855]: Patches installed, refreshing patching facts...
Apr 13 20:04:16 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code/Notify[Patching as Code - Update Fact]/message) defined 'message' as 'Patches installed, refreshing patching facts...'
Apr 13 20:04:23 itf-sannav puppet-agent[89855]: (/Stage[main]/Pe_patch/Exec[pe_patch::exec::fact_upload]) Triggered 'refresh' from 1 event

Apr 13 20:04:45 itf-sannav pe_patch_fact_generation.sh: Uploading facts
Apr 13 20:04:51 itf-sannav pe_patch_fact_generation.sh: Patch data refreshed - This is checking for new packages available for patching. If any are found, PE_PATCH facts will be updated and they will be patched on the next Puppet run. (Should be during the next Patch window)

Apr 13 20:04:51 itf-sannav puppet-agent[89855]: (/Stage[main]/Pe_patch/Exec[pe_patch::exec::fact]) Triggered 'refresh' from 1 event
Apr 13 20:04:51 itf-sannav puppet-agent[89855]: (/Stage[main]/Patching_as_code/Exec[Patching as Code - After patching - post patch default commands]/returns) executed successfully

Always reboot triggers pre reboot scripts even if no patching is done

Describe the Bug

The reboot: always flag makes the pre-reboot script trigger every time puppet runs.

Expected Behavior

The pre-reboot script should only run when actual patching has taken place.

Steps to Reproduce

  1. Create a manifest including a patch group that patches whenever a new patch arrives and uses reboot: always (see the sketch below)
  2. Add a pre_reboot_command
  3. Run puppet; the pre_reboot_command will trigger each and every run
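
A Hiera sketch that reproduces this (group and command names illustrative):

patching_as_code::patch_schedule:
  always_patch:
    day_of_week: Any
    count_of_week: [1, 2, 3, 4, 5]
    hours: 00:00 - 23:59
    max_runs: 100
    reboot: always
patching_as_code::patch_group: always_patch
patching_as_code::pre_reboot_commands:
  echo before reboot:
    command: /bin/echo about-to-reboot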

Environment

  • Version puppetserver version: 7.9.0
  • Version puppetlabs-patching_as_code (v1.1.4)
  • Platform Debian 11

Compatibility with `stdlib::stages`

Use Case

The puppetlabs stdlib module provides a set of default stages. Some folks may decide to utilize them.

Describe the Solution You Would Like

It would be nice if this module could schedule the reboots after the last of those stages if they are defined.

Describe Alternatives You've Considered

Building up complex inter-dependencies rather than relying on stage based segmentation.

Additional Context

https://github.com/puppetlabs/puppetlabs-patching_as_code/blob/main/manifests/init.pp#L134
https://github.com/puppetlabs/puppetlabs-stdlib/blob/main/manifests/stages.pp#L24

Update PSWindowsUpdate module to latest version

Use Case

The module is not using the latest version of PSWindowsUpdate
Current version is 2.1.1.2
Latest version is 2.2.0.3

Describe the Solution You Would Like

Update PowerShell module PSWindowsUpdate to latest version
Consider moving the module to an alternate path and temporarily modifying $env:PSModulePath to include the version that the puppet module uses

reboot if needed, even if patching fails

Use Case

When a linux system has a pending reboot, but patching fails, the system is not rebooted. This can happen if a 3rd party repo isn't fully up to date. Some patches can get installed that would benefit from a reboot, but, because the patching failed, the system is not checked for a pending reboot.

Describe the Solution You Would Like

Perhaps make the reboot logic smarter, so that reboot: ifneeded can trigger under this workflow?

Describe Alternatives You've Considered

I don't have a great alternate workflow I can think of...

Additional Context

This might need an additional flag to toggle the behavior, I'm not sure...

PDK Update, style guidelines, documentation

Use Case

PDK version is currently 2.0.0; lots of style guideline fixes need to be applied

Describe the Solution You Would Like

Update PDK to latest version and fix errors such as trailing commas and other syntax
Update documentation of parameters
Add data types where missing
Optional parameters do not default to undef

Verification of the status of a patch run

Use Case

Need to determine the status of a patch run.

Describe the Solution You Would Like

Populate a puppet fact with return code/status of the patch run similar to the pe_patch return code.
This fact could then be queried in a post-patch script.


Duplicate Resource Declaration for Java Package

Thanks for providing the module, it's working really well so far except for one small issue that I'm having trouble pinpointing.

Describe the Bug

The patching_as_code module is unable to ensure => latest on the package openjdk-7-jre-headless due to a duplicate resource declaration.

Expected Behavior

The patching_as_code module should ensure => latest and be able to update the java package.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Use the puppetlabs-java module to manage the java installation on a node, such as:
class { 'java':
  distribution => 'jre',
}
  2. Ensure a newer java package is available than what is currently installed.
  3. Classify the node in a patching_as_code::patch_group.
  4. When the node reaches the time frame specified in patching_as_code::patch_schedule, every puppet run results in the following error:
Failed to apply catalog: Cannot alias Package[openjdk-7-jre-headless]
to [nil, "openjdk-7-jre-headless", :apt]; resource ["Package", nil, "openjdk-7-jre-headless", :apt]
already declared (file: /opt/puppetlabs/server/data/puppetserver/filesync/client/versioned-dirs/puppet-
code/production_18b2b688aed30a89a6f194510d72258fa04fae5b/modules/java/manifests/init.pp, line: 123)

Environment

  • puppetlabs/patching_as_code Module Version: 0.7.1
  • puppetlabs/java Module Version: 6.5.0
  • Platform: Debian 8
  • Puppet Version: 6.19

Additional Context

Oddly it seems like this issue should not be occurring since the java module declares the package resource as present and the patching_as_code module should be changing it to latest.

List of unsafe processes not generated correctly

Describe the Bug

The list of unsafe processes is not being generated in the way facter expects it. The processes are concatenated with a literal \n string instead of a newline character.

Expected Behavior

The file "patching_unsafe_processes" should list the process names line by line.

Steps to Reproduce

Just add multiple unsafe processes, e.g. in Hiera:
patching_as_code::unsafe_process_list:
  - proc1
  - proc2
  - proc3

Environment

patching_as_code 1.1.7

Additional Context

The \n must be enclosed in double quotes instead of single quotes to be interpreted as newline. The affected line in init.pp should read:
content => $unsafe_process_list.join("\n"),

potential race condition? issue RE: patching_configuration.json

Updated to the latest Forge module release, as well as the updated albatrossflavour os_patching module.

Receiving error:

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Operator '[]' is not applicable to an Undef Value. (file: /etc/puppetlabs/code/environments//modules/patching_as_code/functions/is_patchday.pp, line: 50, column: 24) on node
Warning: Not using cache on failed catalog

This is properly updated/addressed when I throw in another host that can resolve or has its own configuration.json file.

Unsure exactly what the behavior is, but if that file does not exist or is empty, havoc ensues and the catalog won't compile.

Host OSes: CentOS 7, Ubuntu 16/18/20, RedHat 7.

Thoughts/seen this?

Powershell scripts should be executed with the -NoProfile parameter

Describe the Bug

Fact execution does not add the -NoProfile parameter when executing powershell scripts

Debug: Facter: Command C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Unrestricted -File C:/ProgramData/PuppetLabs/puppet/var/lib/facter/../patching_as_code/metered_link.ps1 completed with the following stderr message: Set-PSReadLineOption : The handle is invalid.
At C:\Windows\System32\WindowsPowerShell\v1.0\Microsoft.PowerShell_profile.ps1:2 char:1
+ Set-PSReadLineOption -PredictionViewStyle ListView
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Set-PSReadLineOption], IOException
    + FullyQualifiedErrorId : System.IO.IOException,Microsoft.PowerShell.SetPSReadLineOption
 
Set-PSReadLineOption : The predictive suggestion feature cannot be enabled because the console output doesn't support virtual terminal processing or it's redirected.
At C:\Windows\System32\WindowsPowerShell\v1.0\Microsoft.PowerShell_profile.ps1:3 char:1
+ Set-PSReadLineOption -PredictionSource History
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Set-PSReadLineOption], ArgumentException
    + FullyQualifiedErrorId : System.ArgumentException,Microsoft.PowerShell.SetPSReadLineOption

Expected Behavior

Facter::Util::Resolution.exec("#{powershell} -ExecutionPolicy Unrestricted -File #{checker_script}").to_s == 'true' should be changed to Facter::Util::Resolution.exec("#{powershell} -ExecutionPolicy Bypass -NoProfile -NoLogo -NonInteractive -File #{checker_script}").to_s == 'true'

Files that should be changed: lib/facter/metered_link.rb, lib/puppet/type/reboot_if_pending.rb

Might be more files that should be changed

Additional Context

Consider using pwshlib for executing powershell commands.

centos 7 with 0.4.0 patching as code causing continual reboots

Hey folks,

just started using 0.4.0 release and ended up with some pretty wild behaviors.
puppet-agent[28695]: Patching as Code - Pending OS reboot detected, node will reboot at start of patch window today
puppet-agent[28695]: Patching as Code - Performing Pending OS reboot before patching...
puppet-agent[28695]: (/Notify[Patching as Code - Performing Pending OS reboot before patching...]/message) defined 'message' as 'Patching as Code - Performing Pending OS reboot before patching...'
puppet-agent[28695]: (/Notify[Patching as Code - Performing Pending OS reboot before patching...]) Scheduling refresh of Reboot[Patching as Code - Pending OS reboot]
puppet-agent[28695]: Scheduling system reboot with message: "Puppet is rebooting the computer"
puppet-agent[28695]: (/Reboot[Patching as Code - Pending OS reboot]) Triggered 'refresh' from 1 event
puppet-agent[28695]: Applied catalog in 11.58 seconds

This ended up in 15-20 reboots before I finally reverted back to 0.3.0 in my control repo, which patched the nodes but did not complete a reboot (necessary) after. Thoughts?

mod 'noma4i-windows_updates', '0.3.0'
mod 'puppetlabs-powershell', '3.0.1'
mod 'puppetlabs-pwshlib', '0.7.0'
mod 'puppetlabs-puppet_agent', '4.4.0'
mod 'puppetlabs-reboot', '2.4.0'

I had to librarian-puppet my Puppetfile quite extensively to handle all the dependencies - noted that puppetlabs-reboot is at 3.2.0. Could that be a potential cause? There's not much in the patching_as_code documentation about using puppetlabs-reboot and any version dependencies.

Thoughts?

RHEL Packages with "nn:" in the version are not being detected by `os_patching`

Describe the Bug

When we are patching using patching_as_code, we found a few packages were not being updated and narrowed the cause down to the package version having a ":" in the text, i.e. the "32:" in the output of yum check-update:
bind-export-libs.x86_64 32:9.11.4-26.P2.el7_9.9

The packages can be updated manually with yum -y update, which confirms it is patching_as_code not catching the "nn:"-versioned packages.

Expected Behavior

I expect all packages to be updated in my puppet run, and after the server reboots, yum check-update should show zero packages needing updates.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Run puppet agent -t and see packages updating
  2. After the reboot, run yum check-update
  3. Review the list of packages still available for update (notice the "nn:" in the version)
  4. Check facts for os_patching and there are no packages listed because the "nn:" packages are skipped:
    package_update_count => 0,
    package_updates => [],

Environment

  • RHEL 6
  • RHEL 7


pre_patch_commands failure doesn't abort patching

Describe the Bug

When the pre_patch_commands fail, patching proceeds.

Expected Behavior

When the pre_patch_commands have an unexpected return code, the system should not patch. Or there should be an option to opt into that behavior.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Set pre_patch_commands to run /bin/false (see the sketch below)
  2. Run a patch cycle with updates pending
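
A Hiera sketch for step 1 (command name illustrative):

patching_as_code::pre_patch_commands:
  always fail:
    command: /bin/false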

Environment

  • Puppet: 7.14.0
  • Version: 1.1.2
  • Platform: RHEL 8

Additional Context

This probably replicates on Ubuntu/Debian as well.

Duplicate KB articles cause patching to fail

Describe the Bug

When duplicate values show in missing_update_kbs, an error is produced and patching fails.
Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Patching_as_code::Kb[KB4052623] is already declared at (file: /etc/puppetlabs/code/environments/production/modules/patching_as_code/manifests/windows/patchday.pp, line: 10); cannot redeclare (file: /etc/puppetlabs/code/environments/production/modules/patching_as_code/manifests/windows/patchday.pp, line: 10) (file: /etc/puppetlabs/code/environments/production/modules/patching_as_code/manifests/windows/patchday.pp, line: 10, column: 5) on node node.domain.com

Expected Behavior

Updates should install.

Steps to Reproduce

Steps to reproduce the behavior:
Run the puppet agent

Environment

  • Puppet [6.25.1]
  • Foreman [3.0.1]
  • Patching_as_code [0.7.10]
  • os_patching [0.17.0]
  • Server [Ubuntu 20.04]
  • Client [Windows 10]

Additional Context

In my situation it looks like two Windows Defender updates are required (different versions) but have the same KB.
["KB4052623", "KB4052623", "KB5007289", "KB2267602", "KB4023057", "KB5007186"]
["Update for Windows Defender Antivirus antimalware platform - KB4052623 (Version 4.18.2001.10)", "Update for Microsoft Defender Antivirus antimalware platform - KB4052623 (Version 4.18.2110.6)", "2021-11 Cumulative Update Preview for .NET Framework 3.5 and 4.8 for Windows 10 Version 20H2 for x64 (KB5007289)", "Security Intelligence Update for Microsoft Defender Antivirus - KB2267602 (Version 1.353.1971.0)", "Intel(R) Corporation - System - 10.29.0.6367", "Intel(R) Corporation - System - 1.0.1964.0", "Intel(R) Corporation - MEDIA - 10.29.0.6367", "Intel(R) Corporation - System - 10.29.0.6367", "2021-09 Update for Windows 10 Version 20H2 for x64-based Systems (KB4023057)", "2021-11 Cumulative Update for Windows 10 Version 20H2 for x64-based Systems (KB5007186)", "Dell, Inc. - Firmware - 0.1.12.0", "Intel - SoftwareComponent - 2130.1.16.1"]

duplicate declaration

Describe the Bug

When using the patching_as_code module we get the following error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Package[grafana-enterprise] is already declared at (file: /etc/puppetlabs/code/environments/isc65515/forge-modules/patching_as_code/manifests/linux/patchday.pp, line: 65); cannot redeclare (file: /etc/puppetlabs/code/environments/isc65515/modules/sbt/manifests/grafana.pp, line: 3) (file: /etc/puppetlabs/code/environments/isc65515/modules/sbt/manifests/grafana.pp, line: 3, column: 3) on node XXXX

Expected Behavior

We expect that the patching_as_code module can be attached to servers that also have other modules attached.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Attach a puppet module (grafana.pp) that contains the following code:
     package { 'grafana-server':
       ensure => present,
     }
  2. Make sure an old version of grafana-server is installed
  3. Attach a module that contains the following code:
     class {'patching_as_code':
       use_pe_patch => false,
     }

Environment

  • mod 'puppetlabs-patching_as_code', '0.2.5'
  • Platform ubuntu 20.04

Additional Context

potential chicken/egg issue regarding new os_patching fact storage?

Heya - I updated our Puppetfile to use 1.0.4 patching_as_code from 0.7.9

the creation of /opt/puppetlabs/os_patching as a directory appears to be impeding a run from completing.

Puppet agent version: 6.25.1
Please advise

Error: Could not set 'file' on ensure: No such file or directory - A directory component in /opt/puppetlabs/os_patching/patching_as_code_last_run20220301-24293-1oviczh.lock does not exist or is a dangling symbolic link (file: /etc/puppetlabs/code/environments/${ENV}/modules/patching_as_code/manifests/init.pp, line: 372)
Error: Could not set 'file' on ensure: No such file or directory - A directory component in /opt/puppetlabs/os_patching/patching_as_code_last_run20220301-24293-1oviczh.lock does not exist or is a dangling symbolic link (file: /etc/puppetlabs/code/environments/${ENV}/modules/patching_as_code/manifests/init.pp, line: 372)
Wrapped exception:
No such file or directory - A directory component in /opt/puppetlabs/os_patching/patching_as_code_last_run20220301-24293-1oviczh.lock does not exist or is a dangling symbolic link
Error: /Stage[main]/Patching_as_code/File[Patching as Code - Save Patch Run Info]/ensure: change from 'absent' to 'file' failed: Could not set 'file' on ensure: No such file or directory - A directory component in /opt/puppetlabs/os_patching/patching_as_code_last_run20220301-24293-1oviczh.lock does not exist or is a dangling symbolic link (file: /etc/puppetlabs/code/environments/prod/modules/patching_as_code/manifests/init.pp, line: 372)

Blocklist wildcard

Use Case

It would be good to be able to use wildcards in the blocklist. I have a package where I need to do some manual work when the kernel is updated, so I would like to block the kernel updates from being applied automatically.

The problem, on Debian at least, is that the package for the kernel image is linux-image-[version-number], so I currently have no way to add the kernel to the blocklist and update it manually later.

I would like to be able to add linux-image-* to the blocklist.
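
In other words, the requested (currently hypothetical) configuration would look like:

patching_as_code::blocklist:
  - linux-image-*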

Add support for timezone

Use Case

Currently the patch window uses the local time of the node. In our server fleet we have servers configured with different timezones, but we would like to have the updates trigger at around the same time.

Describe the Solution You Would Like

Add a timezone field in the patch schedules, and maybe also a daylight saving time field to allow accounting for DST.

unsafe_process_list expanded to allow for selection of process instances with arguments

Use Case

The specific use case I am looking for is an expansion of the patching_as_code::unsafe_process_list parameter.
It would be fantastic if it were able to match, not only a process by its name, but an instance of that process with specific arguments passed to it.

Describe the Solution You Would Like

What I would like to see is the ability to specify in my hiera yaml file an unsafe_process_list param that holds the value:

patching_as_code::unsafe_process_list:
  - '/usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers'

This way, I will only skip updates if networkd-dispatcher is busy running the startup triggers and in no other case.

Describe Alternatives You've Considered

I could, of course, use a workaround via the provided patching_as_code::pre_patch_commands, potentially in conjunction with patching_as_code::pre_reboot_commands, but that does not follow the spirit of this great module.

Additional Context

None other than great work and looking forward to your reply!

Quarterly patching instead of monthly

Use Case

We are quite a big organization, and monthly patching is very hard for us to achieve. I am wondering whether it is possible to set patching every 3 months.

Describe the Solution You Would Like

For example, in a patch schedule group we have only 5 weeks across a single month, but could we have 12 weeks across 3 months?

Describe Alternatives You've Considered

I know that this is maybe complicated to have as a feature. I could download PaC and try to find and customize this in the code, but I am not sure where count_of_week is set. I would appreciate any hint.


Patching fails when 32 and 64 bit versions of the same library are installed.

Describe the Bug

When we have a library installed with both 32 and 64 bit versions, patching fails with

(/Stage[main]/Aem::App/Package[zlib.i686]/ensure) change from '1.2.7-19.el7_9' to '0:1.2.7-20.el7_9' failed: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y update zlib.i686' returned 1: Error:  Multilib version problems found. This often means that the root
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)        cause is something else and multilib version checking is just
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)        pointing out that there is a problem. Eg.:
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)          1. You have an upgrade for zlib which is missing some
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             dependency that another package requires. Yum is trying to
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             solve this by installing an older version of zlib of the
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             different architecture. If you exclude the bad architecture
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             yum will tell you what the root cause is (which package
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             requires what). You can try redoing the upgrade with
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             --exclude zlib.otherarch ... this should give you an error
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             message showing the root cause of the problem.
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)          2. You have multiple architectures of zlib installed, but
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             yum can only see an upgrade for one of those architectures.
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             If you don't want/need both architectures anymore then you
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             can remove the one with the missing update and everything
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             will work.
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)          3. You have duplicate versions of zlib installed already.
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)             You can use "yum check" to get yum show these errors.
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)        ...you can also use --setopt=protected_multilib=false to remove
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)        this checking, however this is almost never the correct thing to
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)        do as something else is very likely to go wrong (often causing
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)        much more problems).
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)
(/Stage[main]/Aem::App/Package[zlib.i686]/ensure)        Protected multilib versions: zlib-1.2.7-20.el7_9.i686 != zlib-1.2.7-19.el7_9.x86_64

This is due to the fact that both architectures need to be updated at the same time.

Expected Behavior

Both architectures should be updated together so that we don't have to patch the system manually.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Install both 32 and 64 bit versions of a library, zlib in this instance:
rpm -qi zlib
Name        : zlib
Version     : 1.2.7
Release     : 19.el7_9
Architecture: x86_64
Install Date: Mon 01 Mar 2021 12:28:02 AM MST
Group       : System Environment/Libraries
Size        : 185222
License     : zlib and Boost
Signature   : RSA/SHA256, Wed 03 Feb 2021 09:49:11 AM MST, Key ID 24c6a8a7f4a80eb5
Source RPM  : zlib-1.2.7-19.el7_9.src.rpm
Build Date  : Tue 02 Feb 2021 09:35:34 AM MST
Build Host  : x86-01.bsys.centos.org
Relocations : (not relocatable)
Packager    : CentOS BuildSystem <http://bugs.centos.org>
Vendor      : CentOS
URL         : http://www.zlib.net/
Summary     : The compression and decompression library
Description :
Zlib is a general-purpose, patent-free, lossless data compression
library which is used by many different programs.
Name        : zlib
Version     : 1.2.7
Release     : 19.el7_9
Architecture: i686
Install Date: Mon 01 Mar 2021 12:28:50 AM MST
Group       : System Environment/Libraries
Size        : 184598
License     : zlib and Boost
Signature   : RSA/SHA256, Wed 03 Feb 2021 09:53:23 AM MST, Key ID 24c6a8a7f4a80eb5
Source RPM  : zlib-1.2.7-19.el7_9.src.rpm
Build Date  : Tue 02 Feb 2021 09:37:36 AM MST
Build Host  : x86-01.bsys.centos.org
Relocations : (not relocatable)
Packager    : CentOS BuildSystem <http://bugs.centos.org>
Vendor      : CentOS
URL         : http://www.zlib.net/
Summary     : The compression and decompression library
Description :
Zlib is a general-purpose, patent-free, lossless data compression
library which is used by many different programs.
  2. Try to patch the system:
/usr/bin/yum -d 0 -e 0 update zlib.i686
Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:

         1. You have an upgrade for zlib which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of zlib of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude zlib.otherarch ... this should give you an error
            message showing the root cause of the problem.

         2. You have multiple architectures of zlib installed, but
            yum can only see an upgrade for one of those architectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.

         3. You have duplicate versions of zlib installed already.
            You can use "yum check" to get yum show these errors.

       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).

       Protected multilib versions: zlib-1.2.7-20.el7_9.i686 != zlib-1.2.7-19.el7_9.x86_64
  3. See the logs for the failure output.

  4. If an architecture isn't provided, it patches correctly:

/usr/bin/yum -d 0  update zlib

=================================================================================================================================================================
 Package                          Arch                               Version                                    Repository                                  Size
 =================================================================================================================================================================
Updating:
 zlib                             i686                               1.2.7-20.el7_9                             centos-updates                              91 k
 zlib                             x86_64                             1.2.7-20.el7_9                             centos-updates                              90 k

Transaction Summary
=================================================================================================================================================================
Upgrade  2 Packages

Is this ok [y/d/N]:


Environment

  • Version 1.1.2
  • Platform CentOS 7.9

Additional Context

https://access.redhat.com/solutions/2801851
https://serverfault.com/a/597206
A few references related to multilib version issues.
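
Until the module can update both halves of a multilib pair in a single transaction, one possible stopgap (a sketch; it assumes the patching fact reports the package as zlib.i686 and that you then update both architectures manually) is to blocklist the 32-bit package so the run cannot fail:

patching_as_code::blocklist:
  - zlib.i686    # skip the i686 half of the multilib pair; update both architectures by hand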

Make use of pwshlib for executing powershell commands

Use Case

This gem allows the use of a long-lived manager to which Ruby can send PowerShell invocations and receive the execution output. This reduces the overhead time to execute PowerShell commands from seconds to milliseconds, because each execution does not need to spin up a PowerShell process, execute a single pipeline, and tear the process down.

The manager operates by instantiating a custom PowerShell host process to which Ruby can then send commands over an IO pipe: named pipes on Windows machines, Unix domain sockets on Unix/Linux.

https://github.com/puppetlabs/ruby-pwsh

Describe the Solution You Would Like

Convert the "Facter::Util::Resolution.exec("#{powershell}" commands to make use of pwshlib instead for increased performance and the ability to use Powershell Core over Windows Powershell

Describe Alternatives You've Considered

Add the following parameters to the Facter::Util::Resolution.exec("#{powershell} ...") command for faster load times, and to make sure that no PowerShell profile is loaded when executing commands:

-NoProfile
-NonInteractive
-NoLogo
-ExecutionPolicy 'Bypass'
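
Put together, the resulting invocation would look roughly like this (the script portion is a placeholder):

powershell.exe -NoProfile -NonInteractive -NoLogo -ExecutionPolicy Bypass -Command "<fact resolution script>"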

Reboot is not triggered

Describe the Bug

Reboot is not triggered.

Expected Behavior

A reboot should be triggered at the end of the patch run: the schedule sets reboot: ifneeded, and the node reports a pending reboot (see below).

Environment

  • Module version 1.1.2
  • Puppet agent 6.26.0
  • Ubuntu 20.04

Using default config apart from new schedule:

patching_as_code::patch_schedule:
  2Tuesday:
    day_of_week: Tuesday
    count_of_week: 2
    hours: 22:00 - 23:30
    max_runs: 3
    reboot: ifneeded

Patching_as_code ran and updated OS packages:

Apr 12 22:05:42 node01 puppet-agent[2110060]: (/Stage[main]/Patching_as_code::Linux::Patchday/Exec[Patching as Code - Clean Cache]/returns) executed successfully
Apr 12 22:06:12 node01 puppet-agent[2110060]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[libnetplan0]/ensure) ensure changed '0.103-0ubuntu5~20.04.6' to '0.104-0ubuntu2~20.04.1'
Apr 12 22:06:21 node01 puppet-agent[2110060]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[netplan.io]/ensure) ensure changed '0.103-0ubuntu5~20.04.6' to '0.104-0ubuntu2~20.04.1'
Apr 12 22:06:31 node01 puppet-agent[2110060]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[ubuntu-advantage-tools]/ensure) ensure changed '27.6~20.04.1' to '27.7~20.04.1'
Apr 12 22:06:56 node01 puppet-agent[2110060]: (/Stage[main]/Patching_as_code::Linux::Patchday/Package[grub2-common]/ensure) ensure changed '2.04-1ubuntu26.13' to '2.04-1ubuntu26.15'
Apr 12 22:07:07 node01 puppet-agent[2110060]: (/Stage[main]/Patching_as_code/File[Patching as Code - Save Patch Run Info]/content) content changed '{md5}d9a325fa96c8f91a8e174c90a2193ebf' to '{md5}64d12e1837bfe102779d4a7732879faa'
Apr 12 22:07:07 node01 puppet-agent[2110060]: Patches installed, refreshing patching facts...
Apr 12 22:07:07 node01 puppet-agent[2110060]: (/Stage[main]/Patching_as_code/Notify[Patching as Code - Update Fact]/message) defined 'message' as 'Patches installed, refreshing patching facts...'
Apr 12 22:07:15 node01 puppet-agent[2110060]: (/Stage[main]/Os_patching/Exec[os_patching::exec::fact_upload]) Triggered 'refresh' from 1 event
Apr 12 22:07:29 node01 puppet-agent[2110060]: (/Stage[main]/Os_patching/Exec[os_patching::exec::fact]) Triggered 'refresh' from 1 event
Apr 12 22:07:32 node01 puppet-agent[2110060]: Applied catalog in 113.55 seconds
Apr 12 22:35:44 node01 puppet-agent[2150490]: Applied catalog in 7.30 seconds
Apr 12 23:05:45 node01 puppet-agent[2183045]: Applied catalog in 7.14 seconds
Apr 12 23:35:44 node01 puppet-agent[2216129]: Applied catalog in 6.84 seconds

Other relevant info:

$ cat /var/run/reboot-required
*** System restart required ***
$ ls -lah /var/run/reboot-required
-rw-r--r-- 1 root root 32 Mar 31 06:47 /var/run/reboot-required
$ /bin/sh /opt/puppetlabs/puppet/cache/lib/patching_as_code/pending_reboot.sh
true
