
patching



Module description

A framework for building patching workflows. This module is designed to be used as building blocks for complex patching environments of Windows and Linux (RHEL, Ubuntu) systems.

No Puppet agent is required on the end targets. The node executing the patching will need to have bolt installed.

Setup

Setup Requirements

This module makes heavy use of bolt, so you'll need to install it to get started. Install instructions are here.

If you want to use the patching::snapshot_vmware plan/function then you'll need the rbvmomi gem installed in the bolt ruby environment:

/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi

Quick Start

cat << EOF >> ~/.puppetlabs/bolt/Puppetfile
mod 'puppetlabs/stdlib'
mod 'encore/patching'
EOF

bolt puppetfile install
bolt plan run patching::available_updates --targets group_a

# install rbvmomi for VMware snapshot support
/opt/puppetlabs/bolt/bin/gem install --user-install rbvmomi

Architecture

This module is designed to work in enterprise patching environments.

Assumptions:

  • RHEL targets are registered to Satellite / Foreman or the internet
  • Ubuntu targets are registered to Landscape or the internet
  • Windows targets are registered to WSUS and Chocolatey (optional)

Registration to a central patching server is preferred for speed of software downloads and control of phased patching promotions.

At some point in the future we will include tasks and plans to promote patches through these central patching server tools.

Patching Module Architecture

Design

patching is designed around bolt tasks and plans.

Individual tasks have been written to accomplish targeted steps in the patching process. For example, patching::available_updates is used to check for available updates on targets.
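
Because these are ordinary Bolt tasks, they can also be run on their own when you only need a single step. For example, using the group_a inventory group from the Quick Start:

bolt task run patching::available_updates --targets group_a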

Plans are then used to pretty up output and tie tasks together.

This way end users can use the tasks and plans as building blocks to create their own custom patching workflows (we all know there is no such thing as one size fits all).

For more info on tasks and plans, see the Usage and Reference sections.

Going further, many of the settings for the plans are configurable by setting vars on your groups in the bolt inventory file.

For more info on customizing settings using vars, see the Configuration Options section.

Patching Workflow

Our default patching workflow is implemented in the patching plan patching/init.pp.

This workflow consists of the following phases (an example invocation that skips some of these phases is shown after the list):

  • Organize inventory into groups, in the proper order required for patching
  • For each group...
  • Check for available updates
  • Disable monitoring
  • Snapshot the VMs
  • Pre-patch custom tasks
  • Update the host (patch)
  • Post-patch custom tasks
  • Reboot targets that require a reboot
  • Delete snapshots
  • Enable monitoring
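
Most of these phases can be toggled or tuned through plan parameters; the parameter names below are the ones used in the examples later in this document (monitoring_enabled, snapshot_plan, reboot_strategy):

# example: patch group_a without touching monitoring or snapshots, rebooting only when required
bolt plan run patching --targets group_a monitoring_enabled=false snapshot_plan='disabled' reboot_strategy='only_required'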

Usage

Check for available updates

This will reach out to all targets in group_a in your inventory and check for any available updates through the system's package manager:

  • RHEL = yum
  • Ubuntu = apt
  • Windows = Windows Update + Chocolatey (if installed)
bolt plan run patching::available_updates --targets group_a

Disable monitoring

Prior to performing the snapshotting and patching steps, the plan will disable monitoring alerts in SolarWinds (by default).

This plan/task utilizes the remote transport.

bolt plan run patching::monitoring_solarwinds --targets group_a action='disable' monitoring_target=solarwinds

Create snapshots

This plan will snapshot all of the hosts in VMware. The name of the VM in VMware is assumed to be the uri of the node in the inventory file.

/opt/puppetlabs/bolt/bin/gem install rbvmomi

bolt plan run patching::snapshot_vmware --targets group_a action='create' vsphere_host='vsphere.domain.tld' vsphere_username='xyz' vsphere_password='abc123' vsphere_datacenter='dctr1'

Perform pre-patching checks and actions

This plan is designed to perform custom service checks and shutdown actions before applying patches to a node. If you have custom actions that need to be performed prior to patching, place them in the pre_update scripts and this plan will execute them. Best practice is to define and distribute these scripts as part of your normal Puppet code as part of the role for that node (a minimal example script is shown after the list below).

bolt plan run patching::pre_update --targets group_a

By default this executes the following scripts (targets where the script doesn't exist are ignored):

  • Linux = /opt/patching/bin/pre_update.sh
  • Windows = C:\ProgramData\patching\pre_update.ps1
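
As an illustration, a minimal Linux pre-patch script could look like the following (hypothetical content; the assumption is that a non-zero exit code marks the pre-update task as failed for that target):

#!/usr/bin/env bash
# hypothetical /opt/patching/bin/pre_update.sh
# stop an application service before patching; any failure aborts patching for this host
set -e
systemctl stop myapp.service   # 'myapp' is a placeholder service name
echo "myapp stopped, ready for patching"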

Deploying pre/post patching scripts

An easy way to deploy pre/post patching scripts is via the patching Puppet manifest or the patching::script resource.

Using the patching class:

class {'patching':
  scripts => {
    'pre_patch.sh' => {
      content => template('mymodule/patching/custom_app_pre_patch.sh'),
    },
    'post_patch.sh' => {
      source => 'puppet:///mymodule/patching/custom_app_post_patch.sh',
    },
  },
}

Via patching::script resources:

patching::script { 'custom_app_pre_patch.sh':
  content => template('mymodule/patching/custom_app_pre_patch.sh'),
}
patching::script { 'custom_app_post_patch.sh':
  source => 'puppet:///mymodule/patching/custom_app_post_patch.sh',
}

Or via Hiera:

patching::scripts:
  custom_app_pre_patch.sh:
    source: 'puppet:///mymodule/patching/custom_app_pre_patch.sh'
  custom_app_post_patch.sh:
    source: 'puppet:///mymodule/patching/custom_app_post_patch.sh'

Run the full patching workflow end-to-end

Organize the inventory into groups:

  • patching::ordered_groups

Then, for each group:

  • patching::cache_updates
  • patching::available_updates
  • patching::snapshot_vmware action='create'
  • patching::pre_update
  • patching::update
  • patching::post_update
  • patching::reboot_required
  • patching::snapshot_vmware action='delete'
bolt plan run patching --targets group_a

Patching with Puppet Enterprise (PE)

When executing patching with Puppet Enterprise, Bolt will use the pcp transport. This transport has a default timeout of 1000 seconds. Windows patching is MUCH slower than this, so the timeout will need to be increased.

If you do not modify this default timeout, you may experience the following error in the patching::update task or any other long running task:

Starting: task patching::update on windowshost.company.com
Finished: task patching::update with 1 failure in 1044.63 sec
The following hosts failed during update:
[{"target":"windowshost.company.com","action":"task","object":"patching::update","status":"failure","result":{"_output":"null","_error":{"kind":"puppetlabs.tasks/task-error","issue_code":"TASK_ERROR","msg":"The task failed with exit code unknown","details":{"exit_code":"unknown"}}},"node":"windowshost.company.com"}]

Below is an example bolt.yaml with the settings modified:

---
pcp:
  # 2 hours = 120 minutes = 7,200 seconds
  job-poll-timeout: 7200

For a complete reference of the available settings for the pcp transport see bolt configuration reference documentation.

Configuration Options

This module allows many aspects of its runtime to be customized using configuration options in the inventory file.

For details on all of the available configuration options, see REFERENCE_CONFIGURATION.md

Example: Let's say we want to prevent some targets from rebooting during patching. This can be customized with the patching_reboot_strategy variable in inventory:

groups:
  - name: no_reboot_nodes
    vars:
      patching_reboot_strategy: 'never'
    targets:
      - abc123.domain.tld
      - def4556.domain.tld

Reference

See REFERENCE.md

Limitations

This module has been tested on the following operating systems:

  • Windows
    • 2008
    • 2012
    • 2016
  • RHEL
    • 6
    • 7
    • 8
  • Ubuntu
    • 16.04
    • 18.04

Development

See DEVELOPMENT.md

Contributors

  • Nick Maludy (@nmaludy Encore Technologies)
  • Rick Paxton (Encore Technologies)
  • Scott Strengowski (Encore Technologies)
  • Vadym Chepkov (@vchepkov)


puppet-patching's Issues

Check for sufficient free space in the datastore prior to taking a vm snapshot

If you can't run with Storage DRS turned on, you're very likely to find that there isn't sufficient space on your datastore to keep a snapshot around during an activity, such as patching, that is likely to make a snapshot grow very large. So it's probably a good idea to add some simple checks to the snapshot task to make sure there's enough space for the snapshot to be created, and also that there's sufficient space for it to grow during the course of patching.

ManagedObjectNotFound: The object 'vim.Task:task-256811' has already been deleted or has not been completely created

Hi Encore team,

We have been using the patching module for some time without issue, and suddenly we started getting the error below (it is random: roughly one snapshot run in ten fails, always with the same inputs):
{
  "msg" : "ManagedObjectNotFound: The object 'vim.Task:task-256811' has already been deleted or has not been completely created",
  "kind" : "bolt/plan-failure",
  "details" : {
    "class" : "Bolt::PAL::PALError"
  }
}
I think that message comes from VMware (see the error in the plan output above); reference: https://kb.vmware.com/s/article/1039326

I wonder if it is possible to catch the exception and proceed with the plan, because when this fails the plan is stopped. Since this is a function, could the Ruby code rescue this error and continue?

Thanks

plan fails on bolt 2.4.0

Just encountered it: the plan fails at the very end, while printing the summary:

Traceback (most recent call last):
	11: from /opt/puppetlabs/bolt/bin/bolt:23:in `<main>'
	10: from /opt/puppetlabs/bolt/bin/bolt:23:in `load'
	 9: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/exe/bolt:10:in `<top (required)>'
	 8: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/cli.rb:373:in `execute'
	 7: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/cli.rb:524:in `run_plan'
	 6: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/outputter/human.rb:353:in `print_plan_result'
	 5: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/outputter/human.rb:362:in `print_result_set'
	 4: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/result_set.rb:41:in `each'
	 3: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/result_set.rb:41:in `each'
	 2: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/result_set.rb:41:in `block in each'
	 1: from /opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/outputter/human.rb:362:in `block in print_result_set'
/opt/puppetlabs/bolt/lib/ruby/gems/2.5.0/gems/bolt-2.4.0/lib/bolt/outputter/human.rb:101:in `print_result': undefined method `capitalize' for nil:NilClass (NoMethodError)

Missing Dependency?

I am working with the plans and have run into a minor issue. At least a couple of the plans (check_online and check_puppet) appear to run the task "puppet_agent::version" which I do not seem to have. Is this the task from the puppet_agent module (https://forge.puppet.com/puppetlabs/puppet_agent/tasks)? If so, the module's metadata does not list it as a dependency, which is why I'm unsure.

Thanks for any insight!
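
A likely workaround, assuming it is indeed the task from puppetlabs/puppet_agent, is to declare that module in the Puppetfile yourself (following the Quick Start pattern above):

cat << EOF >> ~/.puppetlabs/bolt/Puppetfile
mod 'puppetlabs/puppet_agent'
EOF

bolt puppetfile install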

allow user to specify provider for update task

The tasks/update.json and tasks/update_windows.json files, as well as the PowerShell script that installs updates, define a parameter $provider that controls whether Windows updates, Chocolatey package updates, or both get installed. However, there is no way for the end user to select which, meaning the default of both gets installed. In my specific case I only want Windows updates, as we are using Puppet to control which versions of Chocolatey packages should be installed on a host and cannot simply install the latest.
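
One workaround sketch, while the plan does not expose this, is to call the task directly and pass the parameter yourself (hypothetical invocation; the accepted values for provider and the placeholder group windows_group would need to be checked against tasks/update_windows.json and your inventory):

bolt task run patching::update_windows --targets windows_group provider='windows'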

release 0.4.0 broke Fedora support

Version 0.4.0 broke Fedora support:

        "action": "task",
        "object": "patching::cache_update",
        "status": "failure",
        "result": {
          "_output": "ERROR: Unknown Operating System: FEDORA\nOutput: \n",
          "_error": {
            "kind": "puppetlabs.tasks/task-error",
            "issue_code": "TASK_ERROR",
            "msg": "The task failed with exit code 2",
            "details": {
              "exit_code": 2
            }
          }
        },

needs-restarting should be checked for before installing packages?

Hi,

For CentOS/RHEL systems, needs-restarting from the yum-utils package is used to evaluate whether the system needs to be rebooted. However, by the time this check runs, the updates have already been launched. I think it would be useful to check first and only launch the update task if needs-restarting is available, since the normal plan workflow will not attempt a reboot if there are no updates to install.
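
A rough sketch of the suggested pre-flight check (an assumption, not current module behaviour):

# abort before launching updates if needs-restarting is missing
if ! command -v needs-restarting > /dev/null 2>&1; then
    echo "ERROR - needs-restarting isn't present. Install the yum-utils package first." >&2
    exit 1
fi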

The error in question:

...
Starting: task patching::reboot_required on 192.168.58.129
Finished: task patching::reboot_required with 1 failure in 1.59 sec
Finished: plan patching::reboot_required in 1.64 sec
Finished: plan test_patching in 4 min, 58 sec
Failed on 192.168.58.129:
  The task failed with exit code 4:
  ERROR - /usr/bin/needs-restarting isn't present on a RedHat/CentOS host. You probably need to install the package: yum-utils
Failed on 1 target: 192.168.58.129

If I then try a rerun:

...
Starting: task patching::update_history on 192.168.58.129
Finished: task patching::update_history with 0 failures in 0.66 sec
host                           | upgraded | installed
-----------------------------------------------------
192.168.58.129                 | 150      | 0
Finished: plan patching::update_history in 0.75 sec
Finished: plan test_patching in 5.19 sec
Plan completed successfully with no result

Any thoughts on this?

Regards.

Update task is ignoring errors of yum command because of tee

In the RHEL updates, if yum update fails, the failure is silently ignored because of the pipe to tee: since tee always returns 0, the pipeline always succeeds.

Instead we need to inspect $PIPESTATUS.

We also need to check other places where we use pipes and check $? afterwards.
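
A sketch of the fix in bash (the log path is a placeholder):

yum -y update 2>&1 | tee -a /var/log/patching/yum_update.log
rc=${PIPESTATUS[0]}   # exit code of yum, not of tee
if [ "$rc" -ne 0 ]; then
    echo "yum update failed with exit code $rc" >&2
    exit "$rc"
fi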

When I run bolt plan run -t windows patching::pre_update, the plan executes the script on the target Windows nodes successfully but the output says it failed.

I have created a PowerShell script that restarts the Windows Update service and stored it in C:\ProgramData\patching\bin as specified in the documentation. Whenever I run the command below, it executes the script successfully but the task is reported as failed.

[root@csu-tst-pup-p01 ~]# bolt plan run -t windows patching::pre_update
Starting: plan patching::pre_update
Starting: plan patching::get_targets
Starting: plan patching::check_puppet
Starting: task puppet_agent::version on csu-tst-tom-p01, csu-tst-oas-p01
Finished: task puppet_agent::version with 0 failures in 1.52 sec
Starting: plan patching::puppet_facts
Starting: task patching::puppet_facts on csu-tst-tom-p01, csu-tst-oas-p01
Finished: task patching::puppet_facts with 0 failures in 5.85 sec
Finished: plan patching::puppet_facts in 5.87 sec
Finished: plan patching::check_puppet in 7.42 sec
Finished: plan patching::get_targets in 7.43 sec
Starting: plan patching::pre_post_update
pre_post_update - noop = false
Starting: plan patching::get_targets
Finished: plan patching::get_targets in 0.0 sec
Starting: task patching::pre_update on csu-tst-tom-p01, csu-tst-oas-p01
Finished: task patching::pre_update with 2 failures in 3.01 sec
Finished: plan patching::pre_post_update in 3.03 sec
Finished: plan patching::pre_update in 10.47 sec
Failed on testserver2:
The task failed with exit code 1
Found Service: wuauserv
Stopped service: wuauserv
Started service: wuauserv
Failed on testserver1:
The task failed with exit code 1
Found Service: wuauserv
Stopped service: wuauserv
Started service: wuauserv
Failed on 2 targets: testserver2,testserver1
Ran on 2 targets

Below is the PS script.

[CmdletBinding()]
param (
    # Name or list of service names to stop
    [Parameter(Mandatory=$false)]
    [string[]]
    $service = 'wuauserv',

    # Restart the service immediately
    [Parameter(Mandatory=$false)]
    [switch]
    $norestart
)

foreach ($name in $service) {
    try {
        $serviceObject = Get-Service -Name $name
        Write-Output "Found Service: $name"

        if (-not $serviceObject) {
            Write-Output "Service not found: $name"
            continue
        }

        Stop-Service -InputObject $serviceObject -ErrorAction Stop
        Write-Output "Stopped service: $name"

        if ($norestart) { continue }

        Start-Service -InputObject $serviceObject
        Write-Output "Started service: $name"
    } catch {
        Write-Output "Cannot stop service: $name"
        Write-Output "Dependent services: $($serviceObject.dependentservices)"
        exit 1
    }
}

pre/post update - pretty print scripts that were run along with success/failure

When running the patching::pre_update and patching::post_update plans, it would be nice to pretty-print all of the hosts and which scripts were actually executed on each one (maybe there are customizations in there) along with success/failure.

Thinking the output could look something like:

Pre-update script executions:
- name: centos.domain.tld
  script: /opt/patching/bin/pre_update.sh
  status: success
- name: centoscustom.domain.tld
  script: /my/custom/path/blah.sh
  status: success

With PE PLAN no such file to load -- rbvmomi

Good morning everyone

I am trying to use patching with PE 2019.8 but I am getting the error below:

For example if I use puppet plan run patching::snapshot_vmware action=create targets=xxxxxxx vsphere_datacenter=xxxxxxx vsphere_host=xxxxxxx vsphere_insecure=true vsphere_password=xxxxxxx vsphere_username=xxxxxxx

Then I receive error:
{
  "msg": "no such file to load -- rbvmomi",
  "kind": "bolt/plan-failure",
  "details": {
    "class": "LoadError"
  }
}

On PE I tried running:

/opt/puppetlabs/server/bin/puppetserver gem install rbvmomi

and restarting pe-puppetserver, but it didn't help.

Any idea?

Thanks

Using /etc/os-release

I am curious if there is a rationale for using lsb_release and /etc/redhat-release instead of parsing /etc/os-release in the distribution differentiation script. According to http://0pointer.de/blog/projects/os-release, os-release is a standard part of systemd, which has been the standard on most Linux distributions for a little while now.

The reason I am asking is that I would like to use this module with SUSE and the branching statements would be simpler for the third distribution with a single case statement against a single variable instead of multiple if statements against multiple variables.
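
A sketch of what os-release based branching could look like (the ID values and package-manager mapping here are illustrative, not the module's current logic):

# source the standard os-release file and branch on a single variable
. /etc/os-release
case "$ID" in
  rhel|centos|fedora) pkg_mgr='yum'    ;;
  ubuntu|debian)      pkg_mgr='apt'    ;;
  sles|opensuse-leap) pkg_mgr='zypper' ;;
  *) echo "Unknown Operating System: $ID" >&2; exit 2 ;;
esac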

Evaluation error when calling plan reboot_required with noop

When called with noop, this plan will try to return a hash whose values reference variables that are never set, causing an evaluation error:

{
  "kind": "bolt/pal-error",
  "msg": "Evaluation Error: Unknown variable: 'targets_reboot_attempted'. (file: modules/patching/plans/reboot_required.pp, line: 112, column: 23)",
  "details": {
    "file": "modules/patching/plans/reboot_required.pp",
    "line": 112,
    "column": 23
  }
}

add support for 'rolling' patches

In some workflows it's desirable to patch one server at a time within a particular group.
Consider Splunk syslog forwarders or Splunk indexers: instead of having to manually split them across many groups, it would be easier to keep them in one group and patch one node at a time.
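
Until such support exists, a crude workaround sketch is to drive the plan one target at a time from the shell (this assumes a plain text file listing one target URI per line):

# patch the members of the group one at a time
while read -r target; do
  bolt plan run patching --targets "$target"
done < splunk_indexers.txt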

Document config variables used by plans

Currently we use a lot of config variables. They are documented in REFERENCE_CONFIGURATION.md, but not in the plan documentation itself.

It would be great to improve the documentation on the plans to detail which configuration options are available for each plan and how they're used.

Content promotion workflow for WSUS and Foreman/Satellite

This is mostly off-topic, so feel free to label this issue as a question.

What strategies are available for patching systems in waves, where production lags QA by, let's say, 2 weeks? So, you watch your QA systems for issues for 2 weeks and then roll out the patches to production.

Is it along the lines of yum repos (for RHEL) which are maintained with a 2 week lag in updates?

file::write not supported in PE

When trying to run the plans from the PE console, I get an error because File::Write isn't supported. It works from the command line though.

result file location not clear

In the module documentation, the result file is said to be at both C:\ProgramData\patching\patching.json and C:\ProgramData\PuppetLabs\patching\patching.json

However, when I use the plan to patch a windows host, the patching::update task tries to write to C:\ProgramData\patching\log\patching.json which fails because that directory does not exist. Somehow it does end up existing though, because after the plan fails I went to my windows host and was able to see the .log and .json files, so maybe they were created later?

Here's the output from the bolt command line run:

bsirinek@puppet $ bolt plan run --transport=pcp patching reboot_strategy=only_required monitoring_enabled=false snapshot_plan=disabled nodes=mywindowshost.company.com
Starting: plan patching
Starting: plan patching::check_puppet
Starting: task puppet_agent::version on mywindowshost.company.com
Finished: task puppet_agent::version with 0 failures in 2.45 sec
Starting: plan patching::puppet_facts
Starting: task patching::puppet_facts on mywindowshost.company.com
Finished: task patching::puppet_facts with 0 failures in 8.52 sec
Finished: plan patching::puppet_facts in 8.53 sec
Finished: plan patching::check_puppet in 10.99 sec
Starting: plan patching::ordered_groups
Groups = []
Group '' nodes = [mywindowshost.company.com]
Finished: plan patching::ordered_groups in 0.03 sec
Starting: task patching::cache_update on mywindowshost.company.com
Finished: task patching::cache_update with 0 failures in 2.29 sec
Starting: plan patching::available_updates
Starting: task patching::available_updates on mywindowshost.company.com
Finished: task patching::available_updates with 0 failures in 308.53 sec
Host update status: ('+' has available update; '-' no update) [num updates]
 + mywindowshost.company.com [1]
Finished: plan patching::available_updates in 5 min, 9 sec
Starting: plan patching::pre_update
Starting: plan patching::get_targets
Finished: plan patching::get_targets in 0.01 sec
Starting: plan patching::pre_post_update
pre_post_update - noop = false
Starting: plan patching::get_targets
Finished: plan patching::get_targets in 0.01 sec
Starting: task patching::pre_update on mywindowshost.company.com
Finished: task patching::pre_update with 0 failures in 3.38 sec
Finished: plan patching::pre_post_update in 3.42 sec
Finished: plan patching::pre_update in 3.45 sec
Starting: task patching::update on mywindowshost.company.com
Finished: task patching::update with 1 failure in 1045.36 sec
The following hosts failed during update:
[{"target":"mywindowshost.company.com","action":"task","object":"patching::update","status":"failure","result":{"_output":"null","_error":{"kind":"puppetlabs.tasks/task-error","issue_code":"TASK_ERROR","msg":"The task failed with exit code unknown","details":{"exit_code":"unknown"}}},"node":"mywindowshost.company.com"}]
Starting: wait until available on mywindowshost.company.com
Finished: wait until available with 0 failures in 0.04 sec
Starting: plan patching::update_history
Starting: plan patching::get_targets
Finished: plan patching::get_targets in 0.01 sec
Starting: task patching::update_history on mywindowshost.company.com
Finished: task patching::update_history with 1 failure in 2.33 sec
Finished: plan patching::update_history in 2.34 sec
Finished: plan patching in 22 min, 53 sec
{
  "kind": "bolt/run-failure",
  "msg": "Plan aborted: run_task 'patching::update_history' failed on 1 target",
  "details": {
    "action": "run_task",
    "object": "patching::update_history",
    "result_set": [
      {
        "target": "mywindowshost.company.com",
        "action": "task",
        "object": "patching::update_history",
        "status": "failure",
        "result": {
          "_error": {
            "msg": "Exited 1:\nC:\\Program Files\\Puppet Labs\\Puppet\\puppet\\bin\\PowershellShim.ps1 : Cannot \nfind path 'C:\\ProgramData\\patching\\log\\patching.json' because it does not \nexist.\n    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorExcep \n   tion\n    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorExceptio \n   n,PowershellShim.ps1\n \n",
            "kind": "puppetlabs.tasks/task-error",
            "details": {
              "exit_code": 1
            }
          },
          "_output": ""
        },
        "node": "mywindowshost.company.com"
      }
    ]
  }
}

Reboot warning step

We should put a reboot warning step at the beginning of patching so people have more time to react before their host is patched.
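
As an interim measure (not part of the module), a warning could be broadcast to logged-in users on Linux targets before the patching plan is kicked off:

# notify logged-in users on the targets ahead of the patching run
bolt command run "wall 'Patching starts in 15 minutes; please save your work.'" --targets group_a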

Pass arbitrary parameters to native package managers

Use case

We currently want to update our systems automatically using this patching module. In our special environment, we need to make sure that some repositories are ignored and others are enabled during the patching process.

Feature Request
As this differs very much across systems and their package managers, it would be great if arbitrary parameters could be passed to the system commands. Maybe this would also help solve #64, as the user could supply arguments that are specific to their environment.

Example:

run_plan('patching', $targets, {
    monitoring_enabled => false,
    snapshot_create => false,
    snapshot_delete => false,
    report_file => undef,
    available_updates_extra_args => '--enable-repo=*',
    update_extra_args => '--enable-repo=*'
  })

Flag for patching all hosts, not just ones with updates

This is a workflow thing.

If I run the patching workflow once and it errors for some reason during the patching::update phase, then run it again, it only tries to patch the few hosts that may still have updates on them (or none, if they all updated).

Instead I want to re-run the full workflow on all the hosts whether they have updates or not.

It would be nice to have a flag for this.

Improve reporting

Thoughts:

  • more details about the patches applied (OS specific patch details instead of just the common fields)
  • multiple tabs in a spreadsheet, maybe? (one for each OS type or each group?)
  • timestamps in the update logs (put timestamps in the patching::update output so we know WHEN things were patched)
  • duration of patching?
  • output OS facts in the report
  • maybe iterate and gather list of available fields from all hashes in the update list, then make those our columns?
