
ubuntu22-cis's Introduction

Ubuntu 22 CIS

Configure an Ubuntu 22 machine to be CIS compliant

Based on CIS Ubuntu Linux 22.04 LTS Benchmark v1.0.0 Release

Looking for support?

Lockdown Enterprise

Ansible support

Community

Join us on our Discord Server to ask questions, discuss features, or just chat with other Ansible-Lockdown users.

Caution(s)

This role will make changes to the system that could break things. This is not an auditing tool but rather a remediation tool to be used after an audit has been conducted.

This role was developed against a clean install of the Operating System. If you are implementing on an existing system, please review this role for any site-specific changes that are needed.

Documentation

Requirements

General:

  • Basic knowledge of Ansible; the Ansible documentation is a good place to start if you are unfamiliar with Ansible
  • A functioning Ansible and/or Tower installation that is configured and running. This includes all of the base Ansible/Tower configuration, the required packages, and the supporting infrastructure.
  • Please read through the tasks in this role to gain an understanding of what each control is doing. Some of the tasks are disruptive and can have unintended consequences on a live production system. Also, familiarize yourself with the variables in the defaults/main.yml file or the Main Variables Wiki Page.

Technical Dependencies:

  • Running Ansible/Tower setup (this role is tested against Ansible version 2.12.1 and newer)
  • Python3 Ansible run environment
  • goss >= 0.4.4 (If using for audit)

Auditing (new)

This can be turned on or off within the defaults/main.yml file with the variable run_audit. It is false by default; please refer to the wiki for more details.
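
For reference, a minimal sketch of enabling the audit from a playbook that applies this role (the host group and role name are placeholders; only the run_audit variable comes from defaults/main.yml):

- hosts: ubuntu22_servers        # hypothetical inventory group
  become: true
  vars:
    run_audit: true              # defaults to false in defaults/main.yml
  roles:
    - role: UBUNTU22-CIS         # adjust to however the role is installed/named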

This audit is a much quicker, lightweight check of both configuration compliance and (where possible) the live/running settings.

A new form of auditing has been developed using a small (12MB) Go binary called goss, along with the relevant configurations to check, so no additional infrastructure or other tooling is needed. The audit not only checks that the config has the correct setting but also aims to capture whether the system is running with that configuration, helping to remove false positives in the process.

Refer to UBUNTU22-CIS-Audit.

Further audit documentation can be found at Read The Docs

Role Variables

This role is designed so the end user should not have to edit the tasks themselves. All customization should be done via the defaults/main.yml file or with extra vars within the project, job, workflow, etc.
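
As an illustration, defaults can be overridden with an extra-vars file rather than by editing the role; the variable names below appear elsewhere in this repository, while the file name and values are only examples:

# site_overrides.yml (hypothetical file name), passed with -e @site_overrides.yml
ubtu22cis_rule_1_4_1: false                      # skip an individual rule
ubtu22cis_time_sync_tool: "systemd-timesyncd"    # select the time synchronisation tool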

Branches

  • devel - This is the default branch and the working development branch. Community pull requests will be pulled into this branch
  • main - This is the release branch
  • reports - This is a protected branch for our scoring reports, no code should ever go here
  • gh-pages - This is the GitHub pages branch
  • all other branches - Individual community member branches

Community Contribution

We encourage you (the community) to contribute to this role. Please read the rules below.

  • Your work is done in your own individual branch. Make sure to sign off and GPG-sign all commits you intend to merge.
  • All community Pull Requests are pulled into the devel branch
  • Pull Requests into devel will confirm your commits have a GPG signature, are signed off, and pass a functional test before being approved
  • Once your changes are merged and a more detailed review is complete, an authorized member will merge your changes into the main branch for a new release

Pipeline Testing

uses:

  • ansible-core 2.12
  • ansible collections - pulls in the latest version based on the requirements file
  • runs the audit using the devel branch
  • This is an automated test that occurs on pull requests into devel

Added Extras

  • pre-commit can be tested and run from within the repository directory:
pre-commit run

ubuntu22-cis's People

Contributors

anzoman, bgro, colinbruner, dderemiah, dianamariaddm, dlesaffre, egonzalf, frederickw082922, georgenalen, ipruteanu-sie, jamesv1994, jason-hendry, joshavant, jovial, motehue, mrsteve81, pre-commit-ci[bot], raabf, technowhizz, tomi-bigpi, uk-bolly


ubuntu22-cis's Issues

Controls 5.5.1.1, 5.5.1.2, 5.5.1.3, and 5.5.1.4 are not executed for the root user, even though they should be.

Describe the Issue
Controls 5.5.1.1, 5.5.1.2, 5.5.1.3, and 5.5.1.4 are not executed for the root user, even though they should be,
because the respective chage operations are restricted to UIDs of 1000 and above.

Expected Behavior
For these controls, chage should be executed for all users.

Actual Behavior
see above

Control(s) Affected
5.5.1.1, 5.5.1.2, 5.5.1.3, and 5.5.1.4

Environment (please complete the following information):

  • branch being used: devel

Additional Notes
Anything additional goes here

Possible Solution
I will create a pull request.
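
A minimal, purely illustrative sketch of extending one of these chage operations to the root account as well (the variable name for the maximum password age is hypothetical):

- name: "5.5.1.x | PATCH | Apply password ageing to the root account as well"
  ansible.builtin.command: "chage --maxdays {{ ubtu22cis_pass_max_days | default(365) }} root"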

Several tasks are not carrying the tag corresponding to the rule id

Describe the Issue
Task for rule 2.3.3 does not carry corresponding tag
Task for rule 4.1.1.3 does not carry corresponding tag
Task for rule 4.2.1.2 does not carry corresponding tag
Task for rule 4.2.1.4 does not carry corresponding tag
Task for rule 6.1.8 does not carry corresponding tag

Expected Behavior
A task for CIS rule x.y.z must carry a tag of the form rule_x.y.z

Actual Behavior
This is not the case for all tasks

Control(s) Affected
See above

Environment (please complete the following information):
n/a

Additional Notes
Anything additional goes here

Possible Solution
Add/correct the tags for the affected tasks.
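
For illustration, the expected tagging pattern looks like the following (the task body is a placeholder, not one of the role's actual tasks):

- name: "x.y.z | PATCH | Example task"
  ansible.builtin.debug:
    msg: "example"
  tags:
    - rule_x.y.z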

pre_remediation_audit.yml: always tag needed for vars file copy

Describe the Issue
The following task requires the always tag so that the goss vars file that relates to the ansible environment can be copied over.
- name: Pre Audit Setup | Copy ansible default vars values to test audit

Expected Behavior
This task should run when trying to audit.

Actual Behavior
The task does not run if another tag is specified.

Possible Solution
Add the always tag to the task.
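
A sketch of the suggested change; only the task name and the always tag come from this issue, while the module and file paths are assumptions:

- name: Pre Audit Setup | Copy ansible default vars values to test audit
  ansible.builtin.template:                # module choice is an assumption
    src: ansible_vars_goss.yml.j2          # hypothetical source template
    dest: "{{ audit_conf_dir | default('/opt') }}/ansible_vars_goss.yml"   # hypothetical destination
  tags:
    - always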

Dead code in role due to missing variable `ubtu22cis_auditd_uid_exclude`

Describe the Issue
There is code in the role that is only executed, if the variable ubtu22cis_auditd_uid_exclude contains an iterable value.
However, the variable is never set anywhere.

Expected Behavior
Either add the variable (then the respective code has to be added) or remove the code (it sets up
logging exceptions for users specified in ubtu22cis_auditd_uid_exclude).

Actual Behavior
Code regarding per-user audit-exceptions is never executed.

Control(s) Affected
In a sense all audit-related measures.

Environment (please complete the following information):

  • branch being used: devel

Additional Notes

Possible Solution
Either add the variable (then the respective code has to be added) or remove the code (it sets up
logging exceptions for users specified in ubtu22cis_auditd_uid_exclude).

Tasks 6.2.17 incorrect conditional statement

Describe the Issue
In task 6.2.17, the conditional checks if the length == 0. This is incorrect: if any "dot files" are found, the stdout will contain text. This seems to affect only the alert/warning to the user; the actual task that changes the files is correct.

Expected Behavior
This should alert the user when stdout is greater than 0 as it has found dot files.

Actual Behavior
Tells the user they have files that are writeable when they don't.

Possible Solution
Change ubtu22cis_6_2_17_audit.stdout | length == 0 to ubtu22cis_6_2_17_audit.stdout | length > 0
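
A sketch of the corrected warning task; the variable and condition come from this issue, while the task body is illustrative:

- name: "6.2.17 | AUDIT | Warn when dot files are found"
  ansible.builtin.debug:
    msg: "Warning! Dot files were found: {{ ubtu22cis_6_2_17_audit.stdout_lines }}"
  when: ubtu22cis_6_2_17_audit.stdout | length > 0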

Improve documentation of variables in defaults/main.yml

Feature Request or Enhancement

The documentation of the variables in defaults/main.yml could be made more consistent regarding
the formatting of the comments, and improved so that users can make modifications more easily
without necessarily looking into the Ansible code or documentation.

Summary of Request
I will provide a pull request that implements what I have in mind.

Describe Alternatives You've Considered
None, really

Suggested Code
I will provide a pull request that implements what I have in mind.

Tasks 2.1.4.4 and 2.3.3 do not have correct when and tag entries, respectively

Describe the Issue

  • The task for rule 2.1.4.4 has entry ubtu22cis_rule_2_1_4_3 in when: part
  • The task for rule 2.3.3 has tag rule_2.2.3

Expected Behavior
The when/tag entry should be corrected.

Actual Behavior

  • Setting ubtu22cis_rule_2_1_4_4: false in defaults/main.yml has no effect
  • -t rule_2.3.3 will not work as expected for rule 2.3.3

Control(s) Affected
see above

Environment (please complete the following information):
n/a

Additional Notes
n/a

Possible Solution
The when/tag entry should be corrected.

PRELIM mount type block failing

Describe the Issue
The prelim.yml file has a block that checks the tmp mount type. It registers a variable that is then overwritten if the fstab option is run, which causes the next task in the block to fail, as it is looking for a value that is no longer in the original variable.

register: tmp_mnt_type

This variable is then used in both when: conditions to look for "generated". However, the task changes this variable if it is found: tmp_mnt_type: fstab

Expected Behavior
The original variable shouldn't be touched and a new variable should be set indicating the mnt type.

Actual Behavior
The tmp_mnt_type variable changes and loses its original value which causes the third task in the block to fail if the second task succeeded.

Possible Solution
Either change the variable name that is used to assign the stdout of the systemctl is-enabled tmp.mount command.
Or, add an extra item to the when: to only do the third task if the second task was skipped.
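
A sketch of the first suggestion, keeping the discovered value in its own variable so later tasks in the block can still read it (the new variable name and the derived values are assumptions based on this and related issues):

- name: PRELIM | Capture tmp mount type | discover mount tmp type
  ansible.builtin.shell: systemctl is-enabled tmp.mount
  register: tmp_mnt_type_raw        # hypothetical name; keeps the raw output untouched
  changed_when: false
  failed_when: false

- name: PRELIM | Capture tmp mount type | set mount type fact
  ansible.builtin.set_fact:
    tmp_mnt_type: "{{ 'fstab' if 'generated' in tmp_mnt_type_raw.stdout else 'tmp_systemd' }}"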

Issues with 1.1.2.2 fstab option

Describe the Issue
The section that controls mounting the tmp partition and creating fstab entries results in a failure in mounting. (Ensure nodev option set on /tmp partition | fstab)

Also, tmpfs should not be the forced filesystem, as the partition may be using its own filesystem such as ext4.

Expected Behavior
The tmp partition should mount without failure at the correct mount point using the correct partition and file system

Actual Behavior
It fails to mount because the entry does not have a chosen mount point. The module documentation has been updated and says name: is an alias for path:
The current module is:
  name: "{{ item.device }}"
  src: "{{ item.fstype }}"
  state: present
  fstype: tmpfs

Possible Solution
Change the task to look something like this:
  ansible.posix.mount:
    path: /tmp
    src: "{{ item.device }}"
    state: present
    fstype: <find out what the partition is otherwise default to tmpfs>
    opts: defaults,{% if ubtu22cis_rule_1_1_2_2 %}nodev,{% endif %}{% if ubtu22cis_rule_1_1_2_3 %}noexec,{% endif %}{% if ubtu22cis_rule_1_1_2_4 %}nosuid{% endif %}

Task 1.3.1 does not carry corresponding `when` switch

Describe the Issue
The task for rule 1.3.1 should have when: ubtu22cis_rule_1_3_1 but does not

Expected Behavior
when: ubtu22cis_rule_1_3_1 must be present

Actual Behavior
when: ubtu22cis_rule_1_3_1 is not present

Control(s) Affected
1.3.1

Environment (please complete the following information):
n/a

Additional Notes
Anything additional goes here

Possible Solution
Enter a suggested fix here
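
A minimal sketch of what the missing condition could look like (the task body is illustrative; 1.3.1 installs AIDE according to other issues in this tracker):

- name: "1.3.1 | PATCH | Ensure AIDE is installed"
  ansible.builtin.package:
    name: aide
    state: present
  when:
    - ubtu22cis_rule_1_3_1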

1.1.1.1: `cramfs` module has to be explicitly deny-listed to end up with a CIS-compliant state

Describe the Issue

0. Initial CIS Assessment, before executing Ansible role:
** PASS **
- Module "cramfs" doesn't exist on the system
1a. Running the Ansible role
2a. Afterwards CIS assessment: Rule reported as failed.

Expected Behavior

2b. Afterwards CIS assessment: Rule reported as passed:
  ** PASS **

 - module: "cramfs" is not loadable: "install /bin/true "
 - module: "cramfs" is not loaded
 - module: "cramfs" is deny listed in: "/etc/modprobe.d/cramfs.conf"

Actual Behavior

2a. Afterwards CIS assessment: Rule reported as failed.
- Audit Result:
** FAIL **
- Reason(s) for audit failure:
- module: "cramfs" is not deny listed
- Correctly set:
- module: "cramfs" is not loadable: "install /bin/true "
- module: "cramfs" is not loaded

Control(s) Affected
Rule 1.1.1.1

Environment (please complete the following information):

  • branch being used: [e.g. devel]

Additional Notes

  1. Red Hat's approach to preventing kernel modules from being automatically loaded: Step 2 & Step 3
  2. STIG for RH8, cramfs configuration
  3. OpenSCAP's way of doing it, even if for other OS baselines (e.g. Oracle Linux)

To configure the system to prevent the cramfs from being used, add the following line to file /etc/modprobe.d/cramfs.conf:
blacklist cramfs

Possible Solution
I'll add a PR immediately.
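
A sketch of the deny-list fix described in the notes above; the file path and line come from this issue, while using lineinfile for it is an assumption:

- name: "1.1.1.1 | PATCH | Ensure cramfs kernel module is deny listed"
  ansible.builtin.lineinfile:
    path: /etc/modprobe.d/cramfs.conf
    line: blacklist cramfs
    create: true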

Tasks in file cis_3.5.2.x.yml must be removed

Describe the Issue

  • Tasks for rule 3.5.3.1.1 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.1.2 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.1.3 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.2.1 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.2.2 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.2.3 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.2.4 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.3.1 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.3.2 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.3.3 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml
  • Tasks for rule 3.5.3.3.4 appear both in cis_3.5.3.x.yml and cis_3.5.2.x.yml

The following problems may follow from the problem listed above:

  • Rule 3.5.3.2.1 listed with two different titles, namely 'Ensure iptables default deny firewall policy' and 'Ensure iptables loopback traffic is configured'
  • Rule 3.5.3.2.2 listed with two different titles, namely 'Ensure iptables loopback traffic is configured' and 'Ensure iptables outbound and established connections are configured'
  • Rule 3.5.3.2.3 listed with two different titles, namely 'Ensure iptables outbound and established connections are configured' and 'Ensure iptables default deny firewall policy'
  • Rule 3.5.3.3.1 listed with two different titles, namely 'Ensure ip6tables default deny firewall policy' and 'Ensure ip6tables loopback traffic is configured'
  • Rule 3.5.3.3.2 listed with two different titles, namely 'Ensure ip6tables loopback traffic is configured' and 'Ensure ip6tables outbound and established connections are configured'
  • Rule 3.5.3.3.3 listed with two different titles, namely 'Ensure ip6tables outbound and established connections are configured' and 'Ensure ip6tables default deny firewall policy'

Expected Behavior

The tasks must be deleted from cis_3.5.2.x.yml

Actual Behavior

Tasks will be carried out two times.

Control(s) Affected
see above

Environment (please complete the following information):
n/a

Additional Notes
Anything additional goes here

Possible Solution
The tasks should be deleted from cis_3.5.2.x.yml

Task for R2.1.4.4 erroneously tagged and titled as R2.1.4.3

Describe the Issue

The task at line

https://github.com/ansible-lockdown/UBUNTU22-CIS/blob/devel/tasks/section_2/cis_2.1.4.x.yml#L66

is actually for rule 2.1.4.4 Ensure ntp is enabled and running

Expected Behavior
Title and tag must match rule 2.1.4.4 rather than 2.1.4.3

Actual Behavior
The task is tagged and titled wrongly.

Control(s) Affected
2.1.4.3 and 2.1.4.4

Environment (please complete the following information):
n/a

Additional Notes
Anything additional goes here

Possible Solution
use tag/id matching control 2.1.4.4 and title Ensure ntp is enabled and running

cis_4.2.1.x.yml failed with a RegEx Syntax Error

Hi,
The cis_4.2.1.x.yml failed with the following error:
Syntax Error while loading YAML.
found unknown escape character
The error appears to be in '/etc/ansible/roles/UBUNTU22-CIS/tasks/section_4/cis_4.2.1.x.yml': line 114, column 25, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:
path: "{{ systemd_conf_file | default('/usr/lib/tmpfiles.d/systemd.conf') }}"
regexp: "^z /var/log/journal/%m/system.journal (!?06(0|4)0) root"
^ here

Expected Behavior
No errors

Actual Behavior
After the error, the role stops and does not run.

Control(s) Affected
All Controls

Environment (please complete the following information):

  • Ansible Version: 2.9.27
  • Host Python Version: 2.7.5
  • Ansible Server Python Version: 2.7.5
  • Additional Details:

Additional Notes
NA

Possible Solution
For me it worked when I changed double quotes to single quotes:
"^z /var/log/journal/%m/system.journal (!?06(0|4)0) root"

Into
'^z /var/log/journal/%m/system.journal (!?06(0|4)0) root'

But as you can see my Ansible version is ancient, so this may cause it to break on newer versions

Question regarding rule 4.1.3.2 "Ensure actions as another user are always logged"

Question
It seems that the configuration used by this role does not quite capture what CIS wants to have captured.

CIS wants that actions as another user are always logged and proposes

-a always,exit -F arch=b64 -C euid!=uid -F auid!=unset -S execve -k user_emulation                   
-a always,exit -F arch=b32 -C euid!=uid -F auid!=unset -S execve -k user_emulation    

This role, however, is "only" interested in actions as the root user:

-a always,exit -F arch=b64 -C euid!=uid -F euid=0 -F auid>=1000 -F auid!=4294967295 -S execve -k actions
-a always,exit -F arch=b32 -C euid!=uid -F euid=0 -F auid>=1000 -F auid!=4294967295 -S execve -k actions

Shouldn't we rather use what CIS proposes for this rule?

Task 4.2.3 fails if a log file vanishes

Describe the Issue
If, during a run, a logfile is configured to keep, say, X histories but the rotated files are uniquely named (e.g. sessionlauncher.log.2023-09-21-14-19) and that log file vanishes, then the task will fail.

Expected Behavior
Task ignores the fact a file no longer exists and carries on with the next file.

Actual Behavior
The task fails for each file that has vanished, with the message "file (...) is absent, cannot continue" (see the sample error below).

Control(s) Affected
4.2.3

Environment (please complete the following information):

  • branch being used: main
  • Ansible Version: ansible 2.10.8
  • Host Python Version: 3.10.12
  • Ansible Server Python Version: 3.10.12 (same - being run locally)

Additional Details:
We are targeting the AWS WorkSpaces Ubuntu offering.

Additional Notes
Sample error message:

failed: [localhost] (item=/var/log/dcv/sessionlauncher.log.2023-09-21-14-19) => {"ansible_loop_var": "item", "changed": false, "item": {"atime": 1695305339.6007233, "ctime": 1695305972.2894833, "dev": 66307, "gid": 999, "gr_name": "dcv", "inode": 1047182, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0640", "mtime": 1695305927.553581, "nlink": 1, "path": "/var/log/dcv/sessionlauncher.log.2023-09-21-14-19", "pw_name": "root", "rgrp": true, "roth": false, "rusr": true, "size": 1556, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "msg": "file (/var/log/dcv/sessionlauncher.log.2023-09-21-14-19) is absent, cannot continue", "path": "/var/log/dcv/sessionlauncher.log.2023-09-21-14-19", "state": "absent"}
failed: [localhost] (item=/var/log/dcv/agent.console.log.2023-09-21-14-19) => {"ansible_loop_var": "item", "changed": false, "item": {"atime": 1695305344.936731, "ctime": 1695305972.1291497, "dev": 66307, "gid": 999, "gr_name": "dcv", "inode": 1047899, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0640", "mtime": 1695305375.0647857, "nlink": 1, "path": "/var/log/dcv/agent.console.log.2023-09-21-14-19", "pw_name": "gdm", "rgrp": true, "roth": false, "rusr": true, "size": 58456, "uid": 133, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "msg": "file (/var/log/dcv/agent.console.log.2023-09-21-14-19) is absent, cannot continue", "path": "/var/log/dcv/agent.console.log.2023-09-21-14-19", "state": "absent"}
failed: [localhost] (item=/var/log/dcv/agentlauncher.simon.baker.log.2023-09-21-14-23) => {"ansible_loop_var": "item", "changed": false, "item": {"atime": 1695305378.6407952, "ctime": 1695306196.3926826, "dev": 66307, "gid": 999, "gr_name": "dcv", "inode": 1048048, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0640", "mtime": 1695305927.7215812, "nlink": 1, "path": "/var/log/dcv/agentlauncher.simon.baker.log.2023-09-21-14-23", "pw_name": "simon.baker", "rgrp": true, "roth": false, "rusr": true, "size": 2842, "uid": 891801367, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "msg": "file (/var/log/dcv/agentlauncher.simon.baker.log.2023-09-21-14-23) is absent, cannot continue", "path": "/var/log/dcv/agentlauncher.simon.baker.log.2023-09-21-14-23", "state": "absent"}

Possible Solution
Task ignores errors? This feels a little brittle...
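
One possible shape of a more tolerant task, assuming the remediation loops over previously discovered log files (the module choice, register name, and mode are assumptions):

- name: "4.2.3 | PATCH | Ensure permissions on all logfiles are configured"
  ansible.builtin.file:
    path: "{{ item.path }}"
    mode: 'g-wx,o-rwx'                                       # symbolic mode leaves unrelated bits alone
  loop: "{{ discovered_logfiles.files | default([]) }}"      # hypothetical register from a find task
  failed_when: false                                         # tolerate files that vanish between discovery and remediation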

Rule 4.2.2.6 and 4.2.2.7 rsyslog config

Describe the Issue
The playbook has the option to specify whether the host is a log server or not (ubtu22cis_system_is_log_server), which is good. However, there is no option to specify that the host only keeps logs locally. This affects the settings it configures in /etc/rsyslog.conf and causes an audit to fail, as it finds those lines in the file.

Expected Behavior
The following lines should either not appear or be commented out if the host keeps its logs locally.
$ModLoad imtcp
$InputTCPServerRun port
$ModLoad imudp
$UDPServerRun port
$ModLoad imrelp
$InputRELPServerRun port

Actual Behavior
The host is setting up ports to listen on.

Control(s) Affected
4.2.2.6 and 4.2.2.7

Possible Solution
Add another variable option for when host keeps logs itself. That way rsyslog can be configured to not be listening on any ports. Then add a task that comments out those lines if that variable is set.
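
A sketch of such a task, assuming a new hypothetical toggle and handler name; the directives being commented out are the ones listed above:

- name: "4.2.2.x | PATCH | Comment out rsyslog listener directives"
  ansible.builtin.replace:
    path: /etc/rsyslog.conf
    regexp: '^(\$(ModLoad (imtcp|imudp|imrelp)|InputTCPServerRun|UDPServerRun|InputRELPServerRun).*)$'
    replace: '#\1'
  when:
    - ubtu22cis_host_keeps_logs_locally        # hypothetical new variable
  notify: Restart rsyslog                      # hypothetical handler name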

Discovering Interactive Users

Describe the Issue
Rules in section 6.2 use a variable called interactive_users_home, which is registered in the PRELIM | Interactive User accounts task:

- name: "PRELIM | Interactive User accounts"
  ansible.builtin.shell: 'cat /etc/passwd | grep -Ev "nologin|/sbin|/bin" | cut -d: -f6'
  register: interactive_users_home
  ... 

I discovered that the mechanism used for identifying such users is not working well, as the patterns used for grep prevent such users from being correctly identified.

Expected Behavior
The ansible.builtin.shell command should identify all interactive users.

Actual Behavior
Interactive users are not returned by the above command.

# No filtering at all
example@exampleHost:~$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
[...]
example:x:1000:1000:,,,:/home/example:/bin/bash
_rpc:x:108:65534::/run/rpcbind:/usr/sbin/nologin
statd:x:109:65534::/var/lib/nfs:/usr/sbin/nologin


# Filtering for 'nologin' indeed gets out most of the users, respectively some of the non-interactive ones
example@exampleHost:~$ cat /etc/passwd | grep -Ev "nologin"
root:x:0:0:root:/root:/bin/bash
sync:x:4:65534:sync:/bin:/bin/sync
example:x:1000:1000:,,,:/home/example:/bin/bash

# Adding '/bin' in this manner, as a filter for the reverse grep, will end up filtering also interactive users(e.g. ones which have /bin/bash or /bin/sh as their login-shell)
example@exampleHost:~$ cat /etc/passwd | grep -Ev "nologin|/bin"                                     ****** Here's the issue, in my opinion
example@exampleHost:~$

# Our suggestion, for filtering some values like `/bin/false` only on the relevant fields:
example@exampleHost:~$ grep -E -v '^(root|halt|sync|shutdown)' /etc/passwd | awk -F: '($7 != "'"$(which nologin)"'" && $7 != "/bin/false") { print $6 }'
/home/example

Control(s) Affected
6.2.* (e.g. 6.2.11, 6.2.13, etc.)

Environment (please complete the following information):

  • branch being used: [e.g. devel]

Additional Notes

  • The root, sync, shutdown, and halt users are exempted from requiring a non-login shell.

Possible Solution
I'll provide a PR immediately.

There is no /etc/systemd/timesyncd.conf.d folder by default - Step 2.1.3.1

Describe the Issue
Step 2.1.3.1 copies the 50-timesync.conf file to the /etc/systemd/timesyncd.conf.d folder, but this folder does not exist by default.

Expected Behavior
The folder needs to be created in a previous task.

Actual Behavior
An error occurs: no such directory.

Environment:

  • branch being used: devel
  • Ansible Version: 2.12.0
  • Host Python Version: 3.10.12
  • Ansible Server Python Version: 3.10.12
  • Additional Details: Ubuntu 22.04 in KVM virtual server

Possible Solution
Create the folder before the copying task.
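
A sketch of the missing directory task (ownership and mode are assumptions):

- name: "2.1.3.1 | PATCH | Ensure timesyncd drop-in directory exists"
  ansible.builtin.file:
    path: /etc/systemd/timesyncd.conf.d
    state: directory
    owner: root
    group: root
    mode: '0755'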

Rule 1.4.x Grub Config

Describe the Issue
There appear to be issues with the way the grub config is set up, mainly in the module that sets the password. It has capture groups set, but backrefs isn't on, so those capture groups aren't used. Also, it is looking for insertafter: set superusers="{{ ubtu22cis_grub_user }}"

Also, the encrypted password format grub requires is different from what is stored in /etc/shadow. The task for 1.4.3 uses the same grub hash, which is incorrect, as /etc/shadow uses a different hash format.

Expected Behavior
The grub file should be updated with both the username and the password in encrypted form. The user module should update the password correctly.

Actual Behavior
No user is added, and only the password is added, with a \1 due to incorrect capture group usage.
The root user password isn't updated correctly in /etc/shadow.

Control(s) Affected
1.4.1

Possible Solution
Accept the password variable as standard text that is encrypted with Ansible Vault, then decrypt it and run it through a module that generates both a hash suitable for grub and a hash suitable for the shadow file, which can then be fed into a variable during playbook execution.
Fix the task - name: "1.4.1 | PATCH | Ensure bootloader password is set". Perhaps change it to blockinfile, as both the user and password need to be set. Something like:
- name: "1.4.1 | PATCH | Ensure bootloader password is set"
  ansible.builtin.blockinfile:
    path: "{{ ubtu22cis_grub_user_file }}"
    insertafter: EOF
    block: |
      set superusers="{{ ubtu22cis_grub_user }}"
      password_pbkdf2 {{ ubtu22cis_grub_user }} {{ ubtu22cis_bootloader_password_hash }}
    state: present
  notify: Grub update

Fix the module "1.4.3 | PATCH | Ensure authentication required for single user mode" to take the right hash format.

Task for control 1.6.1.3 does not work as expected

Describe the Issue
Control 1.6.1.3 mandates to Ensure all AppArmor Profiles are in enforce or complain mode.
However, the corresponding task only allows the role to set every profile to enforce mode --
the existing toggle in defaults/main.yml disables the tasks rather than switching between
enforce and complain mode.

Fixing this is made more complicated by the fact that Controls 1.6.1.3 and 1.6.1.4 target the same setting.
If Control 1.6.1.3 is configured to set complain, then it is weaker than 1.6.1.4. If it is set to enforce, then
it is actually equal.

What if 1.6.1.3 is set to complain and both rules are present, because the role is run with -t level1-server -t level2-server (which is what you must do in order to implement all controls required for level-2 compliance)?

Then both 1.6.1.3 and 1.6.1.4 are executed. Rule 1.6.1.4 overwrites the setting of 1.6.1.3, so that is OK, I guess, but still, we are targeting the same setting two times with different values.

The solution is to change the order: have 1.6.1.4 be executed first and set a flag that it has run, which is then examined by 1.6.1.3 -- if 1.6.1.4 has run, then 1.6.1.3 is skipped.

Expected Behavior
Change the implementation such that complain or enforce mode can be chosen for 1.6.1.3 and it is ensured that 1.6.1.3 and 1.6.1.4 are not carried out in the same run.

Actual Behavior
See above

Control(s) Affected
1.6.1.3, 1.6.1.4

Environment (please complete the following information):

  • branch being used: e.g. devel

Possible Solution
I will provide a pull request.

Ubuntu22-CIS can't logon after remediation

Question
I can't log on anymore after I applied UBUNTU22-CIS to an Ubuntu 22.04 machine running in Hyper-V. I'm trying to log on using the console.
I'm getting the error: sorry, password authentication didn't work.

ubtu22cis_passwd value

Question
What is the ubtu22cis_passwd value?

Environment :

  • Ansible Version: [2.9.27]
  • Host Python Version: [3.10.6]
  • Ansible Server Python Version: [2.7.5]
  • Additional Details: tasks/section_5/cis_5.5.x.yml

I can't find the ubtu22cis_passwd value in defaults/main.yml, and the CIS 5.5.1.2 run failed.
Could you tell me where the value of ubtu22cis_passwd should be set?

Variable and comment issues

Describe the Issue
I've noticed some of the comments in the defaults/main.yml are referencing the wrong CIS sections, but the right section is referenced in the task playbooks.

# Section 6 Control Variables
# Control 6.1.10
# ubtu22cis_no_world_write_adjust will toggle the automated fix to remove world-writable perms from all files
# Setting to true will remove all world-writable permissions, and false will leave as-is
ubtu22cis_no_world_write_adjust: true

Also, one of the variable names still says ubuntu 20 instead of 22.
# 1.7.1
# disable dynamic motd to stop extra sshd message from appearing
ubtu20cis_disable_dynamic_motd: true

The variables below aren't used anywhere, at least not in the devel branch:
ubtu22cis_unowned_owner: root
ubtu22cis_no_owner_adjust: true
ubtu22cis_ungrouped_group: root
ubtu22cis_no_group_adjust: true
ubtu22cis_suid_adjust: false
ubtu22cis_passwd_label: "{{ (this_item | default(item)).id }}: {{ (this_item | default(item)).dir }}"

Expected Behavior
The variable ubtu22cis_no_world_write_adjust is for Section 6.1.9 not 6.1.10

Possible Solution
Fix comments
Rename variable to be consistent with others.
Remove variables that are no longer used.

UMASK issues in 5.5.4

Describe the Issue
The umask change in 5.5.4 uses a lowercase umask in the login.defs file. This causes a configuration error to appear when trying to log in as a user.

Expected Behavior
login.defs requires capitalised UMASK in the file. See here: https://man7.org/linux/man-pages/man5/login.defs.5.html

Actual Behavior
The value isn't applied, and an error is thrown when trying to log in: configuration error - unknown item 'umask' (notify administrator)

Possible Solution
Create another task that just edits the login.defs file, as /etc/bash.bashrc and /etc/profile require a lowercase umask within the 5.5.4 block.
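
A sketch of such a separate task (the regexp and the 027 value are illustrative; CIS asks for 027 or more restrictive):

- name: "5.5.4 | PATCH | Ensure default user umask is configured | login.defs"
  ansible.builtin.lineinfile:
    path: /etc/login.defs
    regexp: '(?i)^\s*umask\s+'
    line: 'UMASK 027'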

Errors with several auditd rules on ARM (aarch64)

Description:
Some auditd rules in templates/audit/99_auditd.rules.j2 are incorrect for ARM systems. For example, this line:
-a always,exit -F arch=b64 -S creat,open,openat,truncate,ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access

Expected Behavior
The correct auditd rules for the architecture should be used.

Actual Behavior
The following error occurs on ARM systems when auditd is restarted:

Jul 03 16:27:03 docker-2 augenrules[66289]: Syscall name unknown: creat
Jul 03 16:27:03 docker-2 augenrules[66289]: There was an error in line 20 of /etc/audit/audit.rules

Control(s) Affected
ubtu22cis_rule_4_1_3_x

Environment:

root@docker-2:~# uname -a
Linux docker-2 5.15.0-76-generic #83-Ubuntu SMP Thu Jun 15 19:21:56 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

Possible Solution
There should be a distinction for the different supported architectures (aarch64, x86_64, etc.) in templates/audit/99_auditd.rules.j2 with each having the correct syscalls in the affected rules. Available syscalls can be checked with the ausyscall command.
The following syscalls would need to be removed for aarch64:

  • arch=b64: creat, open, chmod, chown, lchown, unlink, rename, create_module, query_module
  • arch=b32: create_module, query_module

I am not sure if any would need to be added instead.

Unused variables in `defaults/main.yml` associated with non-existent control 4.3

Describe the Issue
The following variables are unused

# Control 4.3
# ubtu22cis_logrotate is the log rotate frequencey. Options are daily, weekly, monthly, and yearly
ubtu22cis_logrotate: "daily"

# Control 4.3
# ubtu22cis_logrotate_create_settings are the settings for the create parameter in /etc/logrotate.conf
# The permissions need to be 640 or more restrictive.
# If you would like to include user/group settings to this parameter format the var as below
# ubtu22cis_logrotate_create_settings: "0640 root utmp"
ubtu22cis_logrotate_create_settings: "0640"

Expected Behavior
n/a

Actual Behavior
n/a

Control(s) Affected
none

Environment (please complete the following information):

  • branch being used: devel

Additional Notes
n/a

Possible Solution

Remove the corresponding lines in defaults/main.yml.

undefined var when running with certain rules.

Describe the Issue
The variable tmp_mnt_type needs to be set for 1.1.2.2/3/4 to run. This isn't set if the PRELIM role is skipped due to the when: conditions not being met.
If the following vars are set to false, the PRELIM task does not run, which then causes the task for 1.1.2.2 to fail because the when: statement for that task relies on tmp_mnt_type being defined.
ubtu22cis_rule_1_1_2_1 or
ubtu22cis_rule_1_1_2_2 or
ubtu22cis_rule_1_1_2_3 or
ubtu22cis_rule_1_1_2_4

Expected Behavior
The playbook should work even if the variable isn't defined.

Actual Behavior
The playbook fails because the var isn't defined when the PRELIM task is skipped due to those rule vars all being false.

Possible Solution
Three possible fixes from highest to lowest recommendation:

  • Define a default value for the tmp_mnt_type
  • Add another condition to be tmp_mnt_type is defined
  • Move it to the bottom of the when: list so that it will only be assessed if the other criteria are met first.

Task 2.1.4.2 has wrong title

Describe the Issue
The title of task 2.1.4.2 is Ensure ntp access control is configured but it should be Ensure ntp is configured with authorized timeserver

Expected Behavior
The title should match the CIS rule

Actual Behavior
The wrong rule text is provided.

Control(s) Affected
2.1.4.2

Environment (please complete the following information):
n/a

Additional Notes
Anything additional goes here

Possible Solution
Use the correct CIS rule text as title.

Value of variable `ubtu22cis_auditd.admin_space_left_action` not used in code

Describe the Issue
As the following search shows, the value of the variable ubtu22cis_auditd.admin_space_left_action is not used in the role; instead, the value halt is hardcoded.

Expected Behavior
Use the variable's value rather than a hardcoded value.

Actual Behavior
n/a
Control(s) Affected
4.1.2.3

Environment (please complete the following information):

  • branch being used: devel

Possible Solution
Use the variable's value rather than a hardcoded value.

Duplicate variables `ubtu22cis_system_is_container` and `system_is_container`

Describe the Issue
The file defaults/main.yml contains two variables, ubtu22cis_system_is_container and system_is_container;
the latter is set automatically when the role is run, while the former must be set manually.
It seems that ubtu22cis_system_is_container should be removed and its single occurrence
in a task be replaced by system_is_container.

Expected Behavior
n/a
Actual Behavior
n/a
Control(s) Affected
Task PRELIM | Install Network-Manager is affected

Environment (please complete the following information):

  • branch being used: devel

Additional Notes
I will provide a pull request.

Possible Solution
ubtu22cis_system_is_container should be removed and its single occurrence
in a task be replaced by system_is_container.

Several rules appear in different tasks with different rule texts

Describe the Issue

  • Rule 1.1.2.3 listed with two different titles, namely 'Ensure noexec option set on /tmp partition' and 'Ensure nosuid option set on /tmp partition'
  • Rule 1.1.2.4 listed with two different titles, namely 'Ensure nosuid option set on /tmp partition' and 'Ensure noexec option ntp'
  • Rule 2.1.4.3 listed with two different titles, namely 'Ensure ntp is enabled and running' and 'Ensure ntp is running as user ntp'

Expected Behavior
Rule titles should be consistent

Actual Behavior
Rule titles are not consistent

Control(s) Affected
1.1.2.3, 1.1.2.4 and 2.1.4.3

Environment (please complete the following information):
n/a

Additional Notes
n/a

Possible Solution
Adjust the rule titles according to the CIS source.

Step 5.4.2 fails

When running the playbook (the devel branch) against ubuntu 22.04 server, I get the following:

qemu.ubuntu22: TASK [ansible-lockdown.ubuntu22-cis : 5.4.2 | PATCH | Ensure lockout for failed password attempts is configured | Set faillock common-auth] ***
    qemu.ubuntu22: changed: [ubuntu22-test7-acceptance] => (item=***'regexp': 'auth\\s+required\\s+pam_faillock.so', 'line': 'auth    required            pam_faillock.so preauth', 'before': '^.*pam_unix.so'***)
    qemu.ubuntu22: changed: [ubuntu22-test7-acceptance] => (item=***'regexp': 'auth\\s+[default=die]\\s+pam_faillock.so', 'line': 'auth    [default=die]       pam_faillock.so authfail', 'after': '^.*pam_unix.so'***)
    qemu.ubuntu22: fatal: [ubuntu22-test7-acceptance]: FAILED! => ***"msg": "Incorrect sudo password"***

I'm running this in packer against a standard ubuntu iso. Previous versions (cis_release branch) seemed to work with the same configuration.
There is nothing configured, only extra allowed_sshd users.

Issue in section "5.4.2 | PATCH | Ensure lockout for failed password attempts is configured | Set faillock common-auth"

Describe the Issue
This part will lock me out of the device, and it fails halfway with the error "incorrect sudo password".

Expected Behavior
It should properly modify the /etc/pam.d/common-auth file without breaking authentication

Actual Behavior
It breaks the authentication because it fails to add the authsucc line needed to pass authentication.

Control(s) Affected
Authentication

Environment (please complete the following information):

  • Ansible Version: 2.13.11
  • Host Python Version: 3.8.2
  • Ansible Server Python Version: Python 3.10.6
  • Additional Details: I'm running on raspberry pi 4 ubuntu server 22.04

Possible Solution
I have modified the play in this section to temporarily comment out the pam_deny.so line so that all changes can go through, then enable it back.

Here is the modification:

  • name: "5.4.2 | PATCH | Ensure lockout for failed password attempts is configured | temporarily disable pam_deny.so"
    ansible.builtin.replace:
    path: /etc/pam.d/common-auth
    regexp: '^auth\s+requisite\s+pam_deny.so'
    replace: '#auth requisite pam_deny.so'

    • name: "5.4.2 | PATCH | Ensure lockout for failed password attempts is configured | Set faillock common-auth"
      ansible.builtin.lineinfile:
      path: /etc/pam.d/common-auth
      regexp: '^{{ item.regexp }}'
      line: "{{ item.line }}"
      insertbefore: "{{ item.before | default(omit) }}"
      insertafter: "{{ item.after | default(omit) }}"
      loop:
      - { regexp: 'auth\s+required\s+pam_faillock.so', line: 'auth required pam_faillock.so preauth', before: '^.*pam_unix.so'}
      - { regexp: 'auth\s+sufficient\s+pam_faillock.so', line: 'auth sufficient pam_faillock.so authsucc', before: '^.*pam_deny.so'}
      - { regexp: 'auth\s+[default=die]\s+pam_faillock.so', line: 'auth [default=die] pam_faillock.so authfail', after: '^.*pam_unix.so'}

    • name: "5.4.2 | PATCH | Ensure lockout for failed password attempts is configured | enable pam_deny.so"
      ansible.builtin.replace:
      path: /etc/pam.d/common-auth
      regexp: '^#auth\s+requisite\s+pam_deny.so'
      replace: 'auth requisite pam_deny.so'

1.8.4 - Ensure GDM screen locks when the user is idle - session profile issues

Describe the Issue
There are, in my opinion, some things potentially incorrect regarding a particular subtask (Create the session profile file), part of the 1.8.4 rule block:

  1. The elements of the loop should not be surrounded with quotes, as they'd be treated as strings (instead of hashes)
  2. There's a typo in the 2nd list item, namely a doubled single quote at the end of this piece of code: 'system-db: {{ ubtu22cis_dconf_db_name }}''
  3. The subkey definitions are not consistent, maybe due to the typo that also caused 2); the line attribute is missing from the 2nd list element, despite being used as the value of the line option of the lineinfile task.
  4. Ansible returns an error if the file is not created; my fix was to add create: yes to the lineinfile task.
  5. Even after fixing 1-4, a subtle thing has to be done to correctly create the session file and make CIS return a Pass: removing empty spaces in the regexp/line values.

Expected Behavior
After installing gdm3 on the target system, if this rule would be implemented by the role, it'll have a Pass status on CIS assessments.

Actual Behavior
After installing gdm3 on the target system, if this rule would be implemented by the role, it returns a Fail status on CIS assessments.

Control(s) Affected
1.8.4

Environment (please complete the following information):

  • branch being used: [e.g. devel]

Additional Notes
Where I inspired myself from, doc-wise

Possible Solution
I'll add a PR.

2.1.3.1 Fails when time_sync_tool is systemd-timesyncd

Describe the Issue
Role fails when ubtu22cis_time_sync_tool: "systemd-timesyncd" with error "'state' cannot be specified on a template"

Expected Behavior
Role completes without errors

Actual Behavior
Role fails

Control(s) Affected
2.1.3.1

Environment (please complete the following information):

  • branch being used: devel
  • Ansible Version: 2.15.3
  • Host Python Version: 3.10.12
  • Ansible Server Python Version: 3.10.12
  • Additional Details:

Additional Notes

TASK [UBUNTU22-CIS : 2.1.3.1 | PATCH | Ensure systemd-timesyncd configured with authorized timeserver | sources] ***************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None
failed: [misp] (item=etc/systemd/timesyncd.conf.d/50-timesyncd.conf) => {"ansible_loop_var": "item", "changed": false, "item": "etc/systemd/timesyncd.conf.d/50-timesyncd.conf", "msg": "'state' cannot be specified on a template"}

Possible Solution
Remove state: present from config

Rule 1.3.x AIDE config

Describe the Issue
One of the parts of 1.3.1 installs AIDE and then configures it with aide init. However, this doesn't run if aide already exists in the installed packages, which can be the case. Given that Ansible modules are predominantly idempotent, this check isn't necessary, as the package module won't install it if it's already there. Also, the configure AIDE task doesn't run, which causes the whole lot to fail an audit as AIDE hasn't initialised the database.

Also, this task calls the command aide init, which did not work for my version of Ubuntu (22.04). My version of aide is 0.17.4-1.

When I try the aide init option I get:
aide: extra parameter: 'init'

When I try the --init option I get:
aide --init
ERROR: missing configuration (use '--config' '--before' or '--after' command line parameter)

The command aide --config /etc/aide/aide.conf --init works
Also the command: aideinit works

Also, another issue is with 1.3.2: the file /usr/bin/aide.wrapper is not found and may be from an older version of AIDE, so the cron job variable for AIDE, ubtu22cis_aide_cron, needs to be updated to just /usr/bin/aide.

Expected Behavior
AIDE is installed if required or initialised and configured correctly if already there.

Actual Behavior
AIDE isn't configured because it is already installed, and the cron job is calling the wrong file. Additionally, the command it runs is incorrect and fails, but the task continues on.

Control(s) Affected
1.3.x

Possible Solution
Remove condition in
when: - "'aide' not in ansible_facts.packages or
'aide-common' not in ansible_facts.packages"
As the modules are idempotent anyway so won't install.
Move the 1.3.1 | PATCH | Ensure AIDE is installed | Configure AIDE outside of the block as it should be separate to installing.
Change the aide init command to aide --config /etc/aide/aide.conf --init
Change cron job to call /usr/bin/aide

rsyslog service not working after applying playbook

Describe the Issue
rsyslog service is not working after applying the playbook, rsyslog.conf seems messed up.

Expected Behavior
rsyslog service up & running

Actual Behavior
rsyslog service not starting

Environment (please complete the following information):

  • branch being used: Ubuntu22-cis_v1.0.0
  • Ansible Version: 2.15.4

Possible Solution
replace config with default one

Unable to run playbook getting issue with block

While running the playbook, I get the below error:

ERROR! 'notify' is not a valid attribute for a Block

The error appears to be in '/home/devops/ansible/ansible-lockdown/UBUNTU22-CIS-main/tasks/section_1/cis_1.8.x.yml': line 67, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  • name: "1.8.4 | PATCH | Ensure GDM screen locks when the user is idle"
    ^ here

Control(s) Affected
1.8.x

Environment (please complete the following information):
ansible 2.10.8
config file = /home/devops/ansible/common/ansible.cfg
configured module search path = ['/home/devops/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0

Audit tasks don't have tag: always

Describe the Issue
The audit tasks that are listed in the main.yml file don't have the always tag attached to them. This causes them not to run when using a specific tag like level1-server.

Expected Behavior
Audit tasks should run even if calling playbook with tags.

Actual Behavior
The audit playbook doesn't run.

Possible Solution
Add the - always tag to the associated tasks in main.yml. They are still manually controlled by a variable, so it is okay for them to have the always attached.

cis_1.5.x duplication

Describe the Issue
Two tasks are used for 1.5.1 both performing the same job but using different methods.

Expected Behavior
Only one task should be managing this, preferably with the sysctl module to avoid idempotency issues.

Actual Behavior
Idempotency issues occur due to both tasks performing the same job with a different method.

Control(s) Affected
1.5.1

Possible Solution
Use one task, preferably the sysctl module one. Also make that sysctl more like the other controls that modify sysctl.conf

The variable `ubtu22cis_save_iptables_cis_rules` is not used anymore and should be removed from `defaults/main.yml`

Describe the Issue

The variable ubtu22cis_save_iptables_cis_rules is not used anymore and should be removed from defaults/main.yml

ubtu22cis_save_iptables_cis_rules: true

Expected Behavior
n/a

Actual Behavior
n/a

Control(s) Affected
n/a

Environment (please complete the following information):

  • branch being used: devel

Possible Solution
Remove the variable and the comments above it.

Path issues when calling audit script

Describe the Issue
The run_audit.sh script from the auditing repo is called from this playbook; however, the path locations of the audit files, binaries, etc. that the script needs to know are not passed to it. This causes the script to use its default values, which may not align with the actual values set with Ansible variables.

Related to this are the variables used to dictate the audit file paths when copying them over. There are multiple ways the playbook can get the audit files; however, they each use different vars, and some vars aren't even defined in defaults/main.yml. Check the pre_remediation_audit.yml playbook and have a look at the variables used throughout that playbook.

Expected Behavior
The script should be called with the variables that it supports being modified to align with what is set in Ansible.
Variables that should be changeable by Ansible (excerpt from run_audit.sh):
# Variables in upper case tend to be able to be adjusted
# lower case variables are discovered or built from other variables
# Goss host Variables
AUDIT_BIN="${AUDIT_BIN:-/usr/local/bin/goss}" # location of the goss executable
AUDIT_FILE="${AUDIT_FILE:-goss.yml}" # the default goss file used by the audit provided by the audit configuration
AUDIT_CONTENT_LOCATION="${AUDIT_CONTENT_LOCATION:-/opt}" # Location of the audit configuration file as available to the OS

Actual Behavior
The script fails to run because it can't find the files it needs if they are copied to a different directory than what is in the script. Also, there are redundant vars used to define the paths on the Ansible side, which causes vars to be reported as undefined.

Possible Solution
Call the script with environment variables that match the variables in the script shown above. Then use the variables set in defaults/main.yml as the values for those three environment variables, via the environment: keyword attached to the calling shell task.

To fix the redundant vars in the pre_remediation_audit.yml playbook, go through all the vars used there, try to consolidate them, and check that they are defined somewhere.
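
A sketch of the suggested call, wiring the three script variables shown above to hypothetical Ansible-side variables via the environment: keyword:

- name: Pre Audit | Run pre-remediation audit
  ansible.builtin.shell: "{{ audit_conf_dir | default('/opt') }}/run_audit.sh"   # hypothetical location variable
  environment:
    AUDIT_BIN: "{{ audit_bin | default('/usr/local/bin/goss') }}"
    AUDIT_FILE: "{{ audit_file | default('goss.yml') }}"
    AUDIT_CONTENT_LOCATION: "{{ audit_conf_dir | default('/opt') }}"
  changed_when: false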

How to disable "Grub User root has no password" checks?

Describe the Issue

I'm targeting an environment where we cannot enforce this control, and want to disable this setting.

Setting the following variables will toggle the checks for these related settings:

ubtu22cis_rule_1_4_1: false # changed spb - secure boot
ubtu22cis_rule_1_4_2: false # changed spb - secure boot
ubtu22cis_rule_1_4_3: false # changed spb - secure boot

However, the block will fall through to this area:

- name: Check ubtu22cis_grub_user password variable has been changed

which leads to this code block:

- name: Check ubtu22cis_grub_user password variable has been changed | if password blank or incorrect type and not being set

causing a failure.

Expected Behavior
The code block isn't executed if the rules ubtu22cis_rule_1_4_* are set to false.

Actual Behavior

TASK [UBUNTU22-CIS : Check ubtu22cis_grub_user password variable has been changed | if password blank or incorrect type and not being set] ***
task path: /root/.ansible/pull/U-2I0Y50GO301BQ.portswigger.internal/roles/UBUNTU22-CIS/tasks/main.yml:81
fatal: [localhost]: FAILED! => {
    "assertion": "( ubtu22cis_password_set_grub_user.stdout | length > 10 ) and '$y$' in ubtu22cis_password_set_grub_user.stdout",
    "changed": false,
    "evaluated_to": false,
    "msg": "Grub User root has no password set or incorrect encryption"
}

Control(s) Affected
1.4.x

Environment (please complete the following information):

  • branch being used: main
  • Ansible Version: ansible 2.10.8
  • Host Python Version: 3.10.12
  • Ansible Server Python Version: 3.10.12 (same - being run locally)
  • Additional Details:

We are targeting the AWS WorkSpaces Ubuntu offering.

pre_remediation_audit.yml incorrect permissions on files

Describe the Issue
In the playbook pre_remediation_audit.yml, the task "- name: Pre Audit Setup | copy to audit content files to server" copies files over from the control node to the host but sets the permissions to 644. This causes the run_audit.sh script to get a permission denied error because it doesn't have the execute bit set.

Expected Behavior
The shell script should execute normally, no matter the way the files are transferred to the target machine.

Actual Behavior
The shell script cannot execute because it doesn't have the execute bit set.

Possible Solution
Create a new task to ensure the executable bit is set on the run_audit.sh file before it is called. This will ensure robustness regardless of the method used to copy the audit config over.
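
A sketch of such a task (the path variable is hypothetical):

- name: Pre Audit Setup | Ensure run_audit.sh is executable
  ansible.builtin.file:
    path: "{{ audit_conf_dir | default('/opt') }}/run_audit.sh"
    mode: '0755'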

Question regarding use of commented-out variables

Question
defaults/main.yml contains some variables that are commented out and must be uncommented
in order to be used. For example:

# ubtu22cis_journald_systemmaxuse: SystemMaxUse=
# ubtu22cis_journald_systemkeepfree: SystemKeepFree=
# ubtu22cis_journald_runtimemaxuse: RuntimeMaxUse=
# ubtu22cis_journald_runtimekeepfree: RuntimeKeepFree
# ubtu22cis_journald_maxfilesec: MaxFileSec=

and

ubtu22cis_sshd:
    (...)
    # WARNING: make sure you understand the precedence when working with these values!!
    allow_users: "vagrant ubuntu"
    allow_groups: "vagrant ubuntu"
    # deny_users:
    # deny_groups:

Seeing variables that are commented out always raises the question of whether these really can be used
or are remnants of ongoing development. Also, they make it harder to add additional information
in comments, since what is potentially functional code and what is documentation becomes blended.

I understand the usage of this approach for the variables concerning journald, because otherwise the
current implementation of placing a commented-out line in the file as default value does not work any more.

(Though I think this could be solved by using two stages -- one for writing the respective lines and a second
one for "mopping up" the lines with empty values, replacing, e.g. ^SystemMaxUse=\s*$ with #SystemMaxUse= --
that would also remove the need to supply the full line rather than just the desired value, the only occurrence of such
an approach in this role, I think.)

But for deny_users and deny_groups, if I don't misunderstand the implementation, empty default values should also work:
should work:

ubtu22cis_sshd:
    (...)
    # WARNING: make sure you understand the precedence when working with these values!!
    allow_users: "vagrant ubuntu"
    allow_groups: "vagrant ubuntu"
    deny_users:
    deny_groups:

Should I make a change/pull request regarding the uncommenting of these two variables, or am I overlooking something?

Environment (please complete the following information):

  • Ansible Version: [e.g. 2.10]
  • Host Python Version: [e.g. Python 3.7.6]
  • Ansible Server Python Version: [e.g. Python 3.7.6]
  • Additional Details:

cis_5.4.x.yml logic idempotency issues

Describe the Issue
There seems to be some overwriting logic where a value is set and then removed by a later task. I found this when trying to test idempotency of the whole playbook. There look to be multiple issues with the 5.4.x section that modifies pam files. One is that some tasks use lineinfile (like 5.4.3) and others use the pamd module (like 5.4.4). Also, in the 5.4.4 task, the ternary sets the state to either args_present or absent; absent removes the whole line, and I think this should be args_absent as per the module documentation.

Expected Behavior
The tasks should not be overwriting each other and thus causing idempotency issues.

Actual Behavior
One task sets a value and another future task removes that value.

Control(s) Affected
5.4.x

Possible Solution
Use the pamd module for all the tasks that modify pam files and also correctly specify the state value.

Specific tags causing issues due to missing vars

Problem
When using --tags level1-server, some tasks in prelim are skipped due to not having the always tag attached. This then causes future tasks that look for the variables those prelim tasks would have set to fail. For example:

The following task fails:
TASK [ubuntu22_cis : "1.1.2.2 | PATCH | Ensure nodev option set on /tmp partition | tmp_systemd" "1.1.2.3 | PATCH | Ensure noexec option set on /tmp partition | tmp_systemd" "1.1.2.4 | PATCH | Ensure nosuid option set on /tmp partition | tmp_systemd"] *** fatal: [molecule-ubuntu2204]: FAILED! => {"msg": "The conditional check 'tmp_mnt_type == 'tmp_systemd'' failed. The error was: error while evaluating conditional (tmp_mnt_type == 'tmp_systemd'): 'tmp_mnt_type' is undefined. 'tmp_mnt_type' is undefined\n\n

Environment (please complete the following information):
The environment is being tested with molecule spinning up a vagrant VM of Ubuntu 22.04

  • Ansible Version: core 2.14.6
  • Host Python Version: 3.10.6
  • Ansible Server Python Version: 3.10.6
  • Additional Details: Molecule Version: 4.0.4

Possible Solution
Add the always tag to all the tasks in PRELIM if they are there to build facts about the system. When adding the always tag to the PRELIM task - name: PRELIM | Capture tmp mount type | discover mount tmp type, it worked and continued on, and then failed again a few tasks later due to the same issue.

Wrong rule text for rule 1.8.6

Describe the Issue
The rule text given in the title of the tasks concerned with R1.8.6 is wrong.

It is Ensure GDM screen locks when the user is idle but
it should be Ensure GDM automatic mounting of removable media is disabled

Expected Behavior
n/a
Actual Behavior
n/a
Control(s) Affected
R1.8.6

Environment (please complete the following information):
n/a

Additional Notes
n/a

Possible Solution
Enter the correct rule text.
