
ocm-container

A quick environment for accessing OpenShift v4 clusters.

A quick note

As you may have noticed, we are currently in a transition period, migrating from our old bash-based system to a Golang binary. While we're mostly confident that this Go-based approach is relatively stable, we are human, and it is still being tested and worked on. If you have any stability issues, we encourage you to report them, and if one is affecting your ability to do your work, please pull the v0.1.0 tag and fall back to the old version by building the container image locally.

Thank you for your patience as we make this transition.

Features

  • Uses ephemeral containers per cluster login, keeping .kube configuration and credentials separate.
  • Credentials are destroyed on container exit (container has --rm flag set)
  • Displays current cluster-name, and OpenShift project (oc project) in bash PS1
  • Infinitely extendable! Create your own Containerfile, reference FROM ocm-container:latest, and add whatever binaries you want on top

Installation

First, download the latest release for your OS/Architecture: https://github.com/openshift/ocm-container/releases

Set up the base configuration, setting your preferred container engine (Podman or Docker) and your OCM token:

ocm-container configure set engine CONTAINER_ENGINE
ocm-container configure set offline_access_token OCM_OFFLINE_ACCESS_TOKEN

Note: the OCM offline_access_token will be deprecated in the near future. OCM Container will be updated to handle this and assist in migrating your configuration.

This is all that is required to get started with the basic setup, use the OCM CLI, and log into clusters with OCM Backplane.
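After running the two configure set commands above, your configuration file should contain something along these lines (a sketch; the keys match the configure get example later in this README, and the values here are placeholders):

```yaml
# ~/.config/ocm-container/ocm-container.yaml (sketch; placeholder values)
engine: podman
offline_access_token: YOUR_OCM_OFFLINE_ACCESS_TOKEN
```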

Additional features

OCM Container has an additional feature set:

  • AWS configuration and credential mounting from your host
  • Mount Certificate Authorities from your host
  • Google Cloud configuration and credential mounting from your host
  • JIRA CLI token and configuration
  • OpsUtils directory mounting
  • OSDCTL configuration mounting
  • PagerDuty token
  • Persistent Cluster Histories
  • ~/.bashrc personalization
  • Scratch directory mounting

All features are enabled by default, though some may not do anything without additional configuration settings. Features can be disabled as desired. See feature-specific documentation below for any required settings.

Usage

Running ocm-container can be done by executing the binary alone with no flags.

ocm-container

Passing a cluster ID to the command with --cluster-id or -C will log you into that cluster after the container starts. This can be the cluster's OCM UUID, the OCM internal ID, or the cluster's display name.

ocm-container --cluster-id CLUSTER_ID

By default, the container's Entrypoint is /bin/bash. You may also use the --entrypoint=<command> flag to change the container's Entrypoint as you would with a container engine. The ocm-container binary also treats trailing non-flag arguments as container CMD arguments, again similar to how a container engine does. For example, to execute the ls command as the Entrypoint and the flags -lah as the CMD, you can run:

ocm-container --entrypoint=ls -- -lah

NOTE: The standard -- delimiter between ocm-container flags and the CMD arguments must be used.

You may also change the Entrypoint and CMD for use with an initial cluster ID for login, but note you will need to handle any OCM/Cluster login yourself:

ocm-container --entrypoint=ls --cluster-id CLUSTER_ID -- -lah

Additional container engine arguments can be passed to the container using the --launch-opts flag. These are passed as-is to the engine and are supported on a best-effort basis; some flags may conflict with ocm-container functionality.

ocm-container --launch-opts "-v /tmp:/tmp:rw -e FOO=bar"

Flags, Environment and Configuration

Options for ocm-container can be passed as CLI flags, environment variables prefixed with OCMC_, or set as key: value pairs in ~/.config/ocm-container/ocm-container.yaml. The order of precedence is:

  1. CLI Flags
  2. Environment Variables
  3. Configuration File

For example, to set a specific ocm-container image tag rather than latest:

  1. CLI Flag: ocm-container --tag=ABCD
  2. Environment Variable: OCMC_TAG=ABCD ocm-container, or export OCMC_TAG=ABCD ; ocm-container (etc., according to your shell)
  3. Configuration File: add tag: ABCD to ~/.config/ocm-container/ocm-container.yaml, or run ocm-container configure set tag ABCD

Configuration can be set manually in the configuration file, or set as key/value pairs by running ocm-container configure set KEY VALUE.

Migrating configuration from the bash-based ocm-container.sh and env.source files

Users of ocm-container's original bash-based ocm-container.sh can migrate easily to the new Go binary.

Some things to note:

  • You no longer need to clone the git repository
  • You no longer need to build the image yourself (though you may, see "Development" below)
  • The ocm-container bash alias is no longer needed - just execute the binary directly in your $PATH
  • The ~/.config/ocm-container/env.source file has been replaced with ~/.config/ocm-container/ocm-container.yaml, a Viper configuration file

Users of ocm-container's Go binary may import the existing configuration from ~/.config/ocm-container/env.source using the ocm-container configure init command for an interactive configuration setup:

ocm-container configure init

Or optionally, use the --assume-yes flag for a best-effort attempt to import the values:

ocm-container configure init --assume-yes

You can view the configuration in use with the ocm-container configure get subcommand:

ocm-container configure get

Example:

$ ocm-container configure get
Using config file: /home/chcollin/.config/ocm-container/ocm-container.yaml
engine: podman
offline_access_token: REDACTED
persistent_cluster_histories: false
repository: chcollin
scratch_dir: /home/chcollin/Projects
ops_utils_dir: /home/chcollin/Projects/github.com/openshift/ops-sop/v4/utils/

Sensitive values are set to REDACTED by default, and can be viewed by adding --show-sensitive-values.

Feature Set Configuration:

All of the ocm-container feature sets are enabled by default, but some require additional configuration (via CLI, environment, or configuration file, as shown above) to actually do anything.

Every feature can be disabled by adding --no-FEATURENAME or setting no_featurename: true in the configuration file, etc.
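For example, to turn off the AWS and JIRA features via the configuration file (both keys appear in the feature sections below; this is just a sketch of the pattern):

```yaml
# ~/.config/ocm-container/ocm-container.yaml (excerpt)
no_aws: true
no_jira: true
```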

AWS configuration and credential mounting from your host

Mounts your ~/.aws/credentials and ~/.aws/config files read-only into ocm-container for use with the AWS CLI.

  • No additional configuration required
  • Can be disabled with no_aws: true

Mount Certificate Authorities from your host

Mounts additional certificate authority trust bundles from a directory on your host and adds them to the bundle in ocm-container at /etc/pki/ca-trust/source/anchors, read-only.

  • Requires ca_source_anchors: PATH_TO_YOUR_CA_ANCHORS_DIRECTORY to be set
  • Can be disabled with no_certificate_authorities: true

Google Cloud configuration and credential mounting from your host

Mounts Google Cloud configuration and credentials from ~/.config/gcloud on your host into ocm-container, read-only.

  • No additional configuration required
  • Can be disabled with no_gcloud: true

JIRA CLI token and configuration

Mounts your JIRA token and configuration directory (~/.config/.jira, including token.json) from your host read-only into ocm-container, and sets the JIRA_API_TOKEN and JIRA_AUTH_TYPE=bearer environment variables for use with the JIRA CLI tool.

  • No additional configuration required, other than on first-run (see below):
  • Can be disabled with no_jira: true

Generate a Personal Access Token by logging into JIRA, clicking your user icon in the top right of the screen, and selecting "Profile". Then navigate to "Personal Access Tokens" in the left sidebar and generate a token.

If this is your first time using the JIRA CLI, ensure that the config file exists first with mkdir -p ~/.config/.jira && touch ~/.config/.jira/config.json. You'll also need to mount the JIRA config file as writeable the first time, by setting jira_dir_rw: true in the configuration (or export OCMC_JIRA_DIR_RW=true). Once you've logged in to ocm-container, run jira init to do the initial setup.

You may then remove jira_dir_rw: true on subsequent runs of ocm-container.
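The first-run file creation above, collected in one place (commands taken from this section; you still need to set jira_dir_rw: true and run jira init inside the container afterwards):

```shell
# Create an empty JIRA CLI config so ocm-container has a file to mount
mkdir -p "${HOME}/.config/.jira"
touch "${HOME}/.config/.jira/config.json"
```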

OpsUtils directory mounting

Red Hat SREs can mount the Ops Utils directory into ocm-container, and can specify whether the mount is read-only or read-write.

  • Requires ops_utils_dir: PATH_TO_YOUR_OPS_UTILS_DIRECTORY to be set
  • Optionally accepts ops_utils_dir_rw: true to enable read-write access in the mount
  • Can be disabled with no_ops_utils: true
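A sketch of the corresponding configuration file entries (the directory path is a placeholder; compare the real value shown in the configure get example above):

```yaml
# ~/.config/ocm-container/ocm-container.yaml (excerpt)
ops_utils_dir: /path/to/ops-sop/v4/utils
ops_utils_dir_rw: true
```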

OSDCTL configuration mounting

Mounts the osdctl configuration directory (~/.config/osdctl) read-only into the container.

  • No additional configuration required
  • Can be disabled with no_osdctl: true

PagerDuty token and configuration

Mounts the ~/.config/pagerduty-cli/config.json token file into the container.

  • No additional configuration required, other than on first-run (see below)
  • Can be disabled with no_pagerduty: true

In order to set up the PagerDuty CLI the first time, ensure that the config file exists first with mkdir -p ~/.config/pagerduty-cli && touch ~/.config/pagerduty-cli/config.json. You'll also need to mount the PagerDuty config file as writeable the first time, by setting pagerduty_dir_rw: true in the configuration (or export OCMC_PAGERDUTY_DIR_RW=true). Once you've logged in to ocm-container, run pd login to do the initial setup.

You may then remove pagerduty_dir_rw: true on subsequent runs of ocm-container.

Persistent Cluster Histories

Stores cluster terminal history persistently in directories in your ~/.config/ocm-container directory.

  • Requires enable_persistent_histories: true; note that this toggle is deprecated and will be removed in the future
  • Otherwise no additional configuration required
  • Can be disabled with no_persistent_histories: true

~/.bashrc personalization (or other)

Mounts a directory or a file (e.g. ~/.bashrc or ~/.bashrc.d/) from your host to ~/.config/personalizations.d (or ...personalizations.sh for a file) in the container. You may specify if it is read-only or read-write.

  • Requires personalization_file: PATH_TO_FILE_OR_DIRECTORY_TO_MOUNT
  • Optionally, personalization_dir_rw: true can be set to make the mount read-write
  • Can be disabled with no_personalization: true

Scratch directory mounting

Mounts an arbitrary directory from your host to ~/scratch. You may specify if it is read-only or read-write.

  • Requires scratch_dir: PATH_TO_YOUR_SCRATCH_DIR
  • Optionally, scratch_dir_rw: true can be set to make the mount read-write
  • Can be disabled with no_scratch: true

Personalize Your ocm-container

There are many options to personalize your ocm-container experience. For example, if you want to have your vim config passed in and available all the time, you could do something like this:

alias occ='ocm-container -o "-v /home/your_user/.vim:/root/.vim"'

Another common option is to have additional packages available that do not come in the standard build. You can create an additional Containerfile that builds on top of the standard ocm-container image:

FROM ocm-container:latest

RUN microdnf --assumeyes --nodocs update \
    && microdnf --assumeyes --nodocs install \
        lnav \
    && microdnf clean all \
    && rm -rf /var/cache/yum
NOTE: When customizing ocm-container, take care not to overwrite core tooling or default functionality, in keeping with the spirit of reproducible environments between SREs. We offer the ability to customize your environment to provide the best experience; however, the main goal of this tool is that all SREs have a consistent environment, so that tooling "just works" between SREs.

Advanced scripting with ocm-container

We've recently added the ability to run a script within the container, so that ocm-container itself can be driven from a script.

Given the following shell script saved on the local machine in ~/myproject/in-container.sh:

cat ~/myproject/in-container.sh
#!/bin/bash

# source this so we get all of the goodness of ocm-container
source /root/.bashrc

# get the version of the cluster
oc version >> report.txt

We can run that in the container with the following script, which runs on the host (~/myproject/on-host.sh):

cat ~/myproject/on-host.sh
#!/bin/bash

while read -r cluster_id
do
    echo "Cluster $cluster_id Information:" >> report.txt
    ocm-container -o "-v ${HOME}/myproject/in-container.sh:/root/in-container.sh -v ${HOME}/myproject/report.txt:/root/report.txt" -e /root/in-container.sh $cluster_id
    echo "----"
done < clusters.txt

This loops through all clusters listed in clusters.txt, runs oc version against each cluster, appends the output to report.txt, exits the container, and moves on to the next cluster.
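Stripped of the ocm-container invocation, the host-side pattern is a plain while read loop over the cluster list. A self-contained sketch of just that mechanism, with an echo standing in for the container run (cluster names here are hypothetical):

```shell
# Demo of the on-host.sh loop mechanics; `echo` stands in for ocm-container
printf 'cluster-a\ncluster-b\n' > clusters.txt
: > report.txt
while read -r cluster_id
do
    echo "Cluster ${cluster_id} Information:" >> report.txt
    echo "(per-cluster output would be appended here)" >> report.txt
    echo "----" >> report.txt
done < clusters.txt
```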

Troubleshooting

SSH Config

If you're on a mac and you get an error similar to:

Cluster is internal. Initializing Tunnel... /root/.ssh/config: line 34: Bad configuration option: usekeychain

you might need to add something similar to the following to your ssh config:

$ cat ~/.ssh/config | head
IgnoreUnknown   UseKeychain,AddKeysToAgent
Host *
  <snip>
  UseKeychain yes

UseKeychain is a macOS-specific directive which may cause issues in the Linux container that ocm-container runs within. Adding the IgnoreUnknown UseKeychain directive tells ssh to ignore the UseKeychain option when it's unknown, so it will not throw errors.

Podman/M1 MacOS Instructions

The process is mostly the same, assuming you have Podman set up with the following mounts on the podman machine:

brew install podman
podman machine init -v ${HOME}:${HOME} -v /private:/private
podman machine start

Then you should be able to build the container as usual:

podman build -t ocm-container:latest .

Note: the ROSA CLI is not present in the arm64 image, as there is no pre-built arm64 binary available, and we've decided we don't use that CLI enough to justify installing it from source within the build step.

Development

The image for ocm-container is built nightly and, by default, is pulled from the registry at quay.io/app-sre/ocm-container:latest. Alternatively, you can build your own image and use it to invoke ocm-container.

NOTE: This feature is currently experimental. It requires the ocm-container GitHub repository to be cloned and make to be installed on the system; it currently uses the make build target.

Building a new image can be done with the ocm-container build command. The command accepts --image and --tag flags to name the resulting image:

ocm-container build --image IMAGE --tag TAG

The resulting image will be named IMAGE:TAG.

Continuous Integration

Continuous Integration log: https://ci.int.devshift.net/blue/organizations/jenkins/openshift-ocm-container-gh-build-master/activity

ocm-container's People

Contributors

admbk, alexvulaj, clcollins, drewandersonnz, erdii, faldanarh, feichashao, georgettica, hlipsig, iamkirkbater, jaybeeunix, karthikperu7, lisa, mrbarge, mrwinston, nikokolas3270, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, rendhalver, samanthajayasinghe, t0masd, yeya24, yithian, zmird-r


ocm-container's Issues

podman warning when running ocm-container first time after reboot

Hi, I'm noticing strange behaviour in ocm-container.

$ ocm-container
ERRO[0000] cannot find UID/GID for user <my openshift account>: No subgid ranges found for group "<my openshift account>" in /etc/subgid - check rootless mode in man pages. 
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding sub*ids 
[~ {production} ]$ 

Exit and run again

$ ocm-container
[~ {production} ]$ 

I have tried adding to ocm-container/Dockerfile

RUN echo "<my openshift account>:100000:65536" >> /etc/subuid
RUN echo "<my openshift account>:100000:65536" >> /etc/subgid

I also added same to my host running the ocm-container.

Rebuilt, and verified /etc/subuid and /etc/subgid have the above entries in both the container and the host; however, I'm still getting a warning upon running ocm-container for the first time.

possible solution for rhash fix

ocm-container/Dockerfile

Lines 135 to 136 in f1c5161

# This is terrible, but not sure how to do this better.
RUN rhash -c checksums | grep Success:1

I found that we can

rhash -c <( grep ^yq_linux_amd64\   checksums  ) --printf "" -m "" --skip-ok -a

which returns:

--( Verifying ${TMP_FILENAME} )------------------------------------------------------
--------------------------------------------------------------------------------
Everything OK
RHash: skipping: (message)

so we can grep ^Everything OK$
@clcollins WDYT? (can wait for whenever you get to this)

I am thinking about solutions for the other CLIs (so we don't need --ignore-missing).

Add additional env vars to login

When logging into a cluster I'd like some additional environment variables to be available to me, based on what type of cluster it is, and some for both:

ALL:

  • OCM_ENV - should be one of production, staging, integration
  • OCM_REGION - should default to global (this one is for future-proofing, so we can leave this off until we need it)

Classic:

  • HIVE_NS - should be formatted: uhc-$OCM_ENV-$CLUSTER_ID
  • HIVE_NAME - should be the user-readable hive name: hive-stage-01, etc
  • HIVE_ID - should be the Cluster ID of Hive

Hypershift:

  • MC_ID would be set to the Manager Cluster ID
  • SC_ID would be set to the Service Cluster ID
  • HCP_NS would be set to ocm-$OCM_ENVIRONMENT-$CLUSTER_ID-$CLUSTER_NAME
  • HC_NS would be set to ocm-$OCM_ENVIRONMENT-$CLUSTER_ID
  • KUBELET_NS would be set to kubelet-$CLUSTER_ID

Investigate using sshuttle as a service

I just noticed as I was trying to fix sshuttle on my mac that there's a blog post at the bottom of the github page that shows how to run sshuttle as a service.

The main benefit I see here is the keepalive, which is just really annoying when I'm trying to do something, get distracted or get a cup of coffee, and come back and have to kill $(cat sshuttle.pid) and set up the tunnel again to get into a new cluster.

https://perfecto25.medium.com/using-sshuttle-as-a-service-bec2684a65fe

use a different way of building the oc binaries

I just saw https://github.com/openshift-kni/cnf-features-deploy/blob/master/cnf-tests/Dockerfile
where they pull the binary from an official docker image

ocm-container/Dockerfile

Lines 68 to 77 in f1c5161

# Install the latest OC Binary from the mirror
RUN mkdir /oc
WORKDIR /oc
# Download the checksum
RUN curl -sSLf ${OC_URL}/sha256sum.txt -o sha256sum.txt
# Download the binary tarball
RUN /bin/bash -c "curl -sSLf -O ${OC_URL}/$(awk -v asset="openshift-client-linux" '$0~asset {print $2}' sha256sum.txt)"
# Check the tarball and checksum match
RUN sha256sum --check --ignore-missing sha256sum.txt
RUN tar --extract --gunzip --no-same-owner --directory /out oc --file *.tar.gz

`cluster-login` removed, but still required by sre-login

The following error is seen upon a new build and launching ocm-container:

$ ocm-container-stg 58521e41-d139-4833-9844-54d59c6cb9f8
Logging into cluster 58521e41-d139-4833-9844-54d59c6cb9f8
Cluster ID: 1ogh4mlct6jjdodisfvovs9fvccbteti
/root/.local/bin/sre-login: line 69: cluster-login: command not found

The cluster-login command is not present on the container:

[~ {staging} ]$ cluster-login
bash: cluster-login: command not found
[~ {staging} ]$ which cluster-login
/usr/bin/which: no cluster-login in (/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin)
[~ {staging} ]$ find / -name cluster-login
find: ‘/proc/tty/driver’: Permission denied
[~ {staging} ]$ 

A recent MR #57 appears to have removed bin/utils/cluster-login, but the bin/utils/sre-login command still relies on that python script: https://github.com/openshift/ocm-container/blob/master/utils/bin/sre-login#L69

Discussion: --entrypoint vs dedicated `exec` command?

I was going through the readme for the new Golang version and I'm wondering if as an alternative to the --entrypoint logic we could add an explicit subcommand: ocm-container exec

Idea for usage would be something like this (using the same command as the README):

ocm-container exec -C $CLUSTER_ID -- ls -lah

Ideally this would function the same as the --entrypoint functionality as written today, but would (IMO) lead to more readable scripts if you are doing something that requires you to run this across multiple clusters and multiple ocm-container instances.

By default, exec would not create an interactive session; it would be a one-shot. For an interactive session we could consider using the -it flags, but I'd personally argue that we should not use those flags, and if you require an interactive session then the root command itself should be used.

I'm opening this as a discussion and hoping to hear thoughts and comments here :D

Thanks!

host directory cannot be empty

With the latest commits, I cannot create a container: whenever I run ocm-container, it keeps saying host directory cannot be empty.

shellcheck found multiple issues

running shellcheck on all binary files returns:

Click to expand!

In ./build.sh line 33:
      CONTAINER_ARGS+=($@)
                       ^-- SC2206 (warning): Quote to prevent word splitting/globbing, or split robustly with mapfile or read -a.


In ./build.sh line 50:
cd $(dirname $0)
^--------------^ SC2164 (warning): Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
   ^-----------^ SC2046 (warning): Quote this to prevent word splitting.
             ^-- SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
cd $(dirname "$0") || exit


In ./build.sh line 57:
if [ ! -f ${OCM_CONTAINER_CONFIG} ]; then
          ^---------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if [ ! -f "${OCM_CONTAINER_CONFIG}" ]; then


In ./build.sh line 62:
source ${OCM_CONTAINER_CONFIG}
       ^---------------------^ SC1090 (warning): ShellCheck can't follow non-constant source. Use a directive to specify location.
       ^---------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
source "${OCM_CONTAINER_CONFIG}"


In ./build.sh line 72:
  $CONTAINER_ARGS \
  ^-------------^ SC2128 (warning): Expanding an array without an index only gives the first element.
  ^-------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  "$CONTAINER_ARGS" \


In ./build.sh line 73:
  -t ocm-container:${BUILD_TAG} .
                   ^----------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  -t ocm-container:"${BUILD_TAG}" .


In ./init.sh line 3:
cd $(dirname $0)
^--------------^ SC2164 (warning): Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
   ^-----------^ SC2046 (warning): Quote this to prevent word splitting.
             ^-- SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
cd $(dirname "$0") || exit


In ./init.sh line 8:
	echo 'aliases_file=$(alias) '"$0 $@"
             ^----------------------^ SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
                                         ^-- SC2145 (error): Argument mixes string and array. Use * or separate argument.


In ./init.sh line 13:
  if [ ! -f ${CONFIG_DIR}/env.source ]
            ^-----------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  if [ ! -f "${CONFIG_DIR}"/env.source ]


In ./init.sh line 16:
    cp env.source.sample ${CONFIG_DIR}/env.source
                         ^-----------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    cp env.source.sample "${CONFIG_DIR}"/env.source


In ./init.sh line 22:
mkdir -p ${CONFIG_DIR}
         ^-----------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
mkdir -p "${CONFIG_DIR}"


In ./init.sh line 31:
  read -n 1 -p "Select 1-4: " prev_config_selection
  ^--^ SC2162 (info): read without -r will mangle backslashes.


In ./init.sh line 34:
  if [ $prev_config_selection == "1" ]
       ^--------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  if [ "$prev_config_selection" == "1" ]


In ./init.sh line 37:
    mv env.source ${CONFIG_DIR}
                  ^-----------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    mv env.source "${CONFIG_DIR}"


In ./init.sh line 38:
  elif [ $prev_config_selection == "2" ]
         ^--------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  elif [ "$prev_config_selection" == "2" ]


In ./init.sh line 41:
    ln -s env.source $CONFIG_DIR/env.source
                     ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    ln -s env.source "$CONFIG_DIR"/env.source


In ./init.sh line 42:
  elif [ $prev_config_selection == "3" ]
         ^--------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  elif [ "$prev_config_selection" == "3" ]


In ./init.sh line 88:
  awk -f <( echo $AWK ) "${CONFIG_DIR}/env.source"
                 ^--^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  awk -f <( echo "$AWK" ) "${CONFIG_DIR}/env.source"


In ./ocm-container.sh line 44:
      ARGS+=($1)
             ^-- SC2206 (warning): Quote to prevent word splitting/globbing, or split robustly with mapfile or read -a.


In ./ocm-container.sh line 61:
if [ ! -f ${OCM_CONTAINER_CONFIGFILE} ]; then
          ^-------------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if [ ! -f "${OCM_CONTAINER_CONFIGFILE}" ]; then


In ./ocm-container.sh line 68:
source ${OCM_CONTAINER_CONFIGFILE}
       ^-------------------------^ SC1090 (warning): ShellCheck can't follow non-constant source. Use a directive to specify location.
       ^-------------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
source "${OCM_CONTAINER_CONFIGFILE}"


In ./ocm-container.sh line 71:
operating_system=`uname`
                 ^-----^ SC2006 (style): Use $(...) notation instead of legacy backticks `...`.

Did you mean:
operating_system=$(uname)


In ./ocm-container.sh line 90:
if [ -f ${HOME}/${PAGERDUTY_TOKEN_FILE} ]
        ^-----^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if [ -f "${HOME}"/${PAGERDUTY_TOKEN_FILE} ]


In ./ocm-container.sh line 96:
if [ -d ${HOME}/.config/gcloud ]; then
        ^-----^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if [ -d "${HOME}"/.config/gcloud ]; then


In ./ocm-container.sh line 106:
if [ -n "$ARGS" ]
         ^---^ SC2128 (warning): Expanding an array without an index only gives the first element.


In ./ocm-container.sh line 118:
${CONTAINER_SUBSYS} run $TTY --rm --privileged \
                        ^--^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
${CONTAINER_SUBSYS} run "$TTY" --rm --privileged \


In ./ocm-container.sh line 123:
${INITIAL_CLUSTER_LOGIN} \
^----------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
"${INITIAL_CLUSTER_LOGIN}" \


In ./ocm-container.sh line 124:
-v ${CONFIG_DIR}:/root/.config/ocm-container:ro \
   ^-----------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
-v "${CONFIG_DIR}":/root/.config/ocm-container:ro \


In ./ocm-container.sh line 125:
-v ${HOME}/.ssh:/root/.ssh:ro \
   ^-----^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
-v "${HOME}"/.ssh:/root/.ssh:ro \


In ./ocm-container.sh line 126:
${GOOGLECLOUDFILEMOUNT} \
^---------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
"${GOOGLECLOUDFILEMOUNT}" \


In ./ocm-container.sh line 127:
${PAGERDUTYFILEMOUNT} \
^-------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
"${PAGERDUTYFILEMOUNT}" \


In ./ocm-container.sh line 128:
${AWSFILEMOUNT} \
^-------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
"${AWSFILEMOUNT}" \


In ./ocm-container.sh line 129:
${SSH_AGENT_MOUNT} \
^----------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
"${SSH_AGENT_MOUNT}" \


In ./ocm-container.sh line 131:
${OCM_CONTAINER_LAUNCH_OPTS} \
^--------------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
"${OCM_CONTAINER_LAUNCH_OPTS}" \


In ./ocm-container.sh line 132:
ocm-container:${BUILD_TAG} ${EXEC_SCRIPT}
              ^----------^ SC2086 (info): Double quote to prevent globbing and word splitting.
                           ^------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
ocm-container:"${BUILD_TAG}" "${EXEC_SCRIPT}"


In ./utils/bin/chgm line 59:
  if ! command -v $cmd > /dev/null
                  ^--^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  if ! command -v "$cmd" > /dev/null


In ./utils/bin/chgm line 81:
  ALERT_JSON="$(pd rest:get -e=/incidents/${PD_ALERT}/alerts 2>/dev/null)"
                                          ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  ALERT_JSON="$(pd rest:get -e=/incidents/"${PD_ALERT}"/alerts 2>/dev/null)"


In ./utils/bin/chgm line 82:
  CLUSTER_UUID="$(jq -r '.alerts[].body.details.notes' <<< $ALERT_JSON |awk '/cluster_id/ {print $2}')"
                                                           ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  CLUSTER_UUID="$(jq -r '.alerts[].body.details.notes' <<< "$ALERT_JSON" |awk '/cluster_id/ {print $2}')"


In ./utils/bin/chgm line 83:
  CLUSTER_NAME="$(jq -r '.alerts[].body.details.name | split(".") | .[0] ' <<< $ALERT_JSON)"
                                                                               ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  CLUSTER_NAME="$(jq -r '.alerts[].body.details.name | split(".") | .[0] ' <<< "$ALERT_JSON")"


In ./utils/bin/chgm line 100:
  trap "rm -fr $tmpd" EXIT
               ^---^ SC2064 (warning): Use single quotes, otherwise this expands now rather than when signalled.


In ./utils/bin/chgm line 102:
  aws-get-creds.sh $CLUSTER_UUID > $tmpd/exports 2>/dev/null
                   ^-----------^ SC2086 (info): Double quote to prevent globbing and word splitting.
                                   ^---^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  aws-get-creds.sh "$CLUSTER_UUID" > "$tmpd"/exports 2>/dev/null


In ./utils/bin/chgm line 103:
  source $tmpd/exports
         ^-----------^ SC1091 (info): Not following: ./exports was not specified as input (see shellcheck -x).
         ^---^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  source "$tmpd"/exports


In ./utils/bin/chgm line 105:
  aws-validate-cluster.sh -r $AWS_DEFAULT_REGION -n $CLUSTER_NAME
                             ^-----------------^ SC2086 (info): Double quote to prevent globbing and word splitting.
                                                    ^-----------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  aws-validate-cluster.sh -r "$AWS_DEFAULT_REGION" -n "$CLUSTER_NAME"


In ./utils/bin/chgm line 119:
pd incident:ack -i ${PD_ALERT}
                   ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
pd incident:ack -i "${PD_ALERT}"


In ./utils/bin/chgm line 120:
osdctl servicelog post ${CLUSTER_UUID} -y -t ${SL_URL}
                       ^-------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
osdctl servicelog post "${CLUSTER_UUID}" -y -t ${SL_URL}


In ./utils/bin/chgm line 121:
pd incident:notes -i ${PD_ALERT} -n "$PD_CHGM_NOTE"
                     ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
pd incident:notes -i "${PD_ALERT}" -n "$PD_CHGM_NOTE"


In ./utils/bin/chgm line 122:
pd incident:assign -i ${PD_ALERT} -u $PD_SILENT_TEST_USER
                      ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
pd incident:assign -i "${PD_ALERT}" -u $PD_SILENT_TEST_USER


In ./utils/bin/chgm line 123:
pd incident:merge -i ${PD_ALERT} -I ${CHGM_PARENT_INCIDENT}
                     ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
pd incident:merge -i "${PD_ALERT}" -I ${CHGM_PARENT_INCIDENT}


In ./utils/bin/create-cluster line 1:
#!/usr/bin/env python3
^-- SC1071 (error): ShellCheck only supports sh/bash/dash/ksh scripts. Sorry!


In ./utils/bin/elevate.sh line 8:
oc adm groups add-users osd-sre-cluster-admins $(oc whoami)
                                               ^----------^ SC2046 (warning): Quote this to prevent word splitting.


In ./utils/bin/get-shards-clusterid line 1:
#!/usr/bin/env python3
^-- SC1071 (error): ShellCheck only supports sh/bash/dash/ksh scripts. Sorry!


In ./utils/bin/list-utils line 4:
UTIL_DIR=$(dirname $0)
                   ^-- SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
UTIL_DIR=$(dirname "$0")


In ./utils/bin/list-utils line 7:
for i in $(ls ${UTIL_DIR}/${1} | sort) ; do
           ^-----------------^ SC2012 (info): Use find instead of ls to better handle non-alphanumeric filenames.
              ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.
                          ^--^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
for i in $(ls "${UTIL_DIR}"/"${1}" | sort) ; do


In ./utils/bin/list-utils line 8:
	DOC=$(awk -F:  '/^# OCM_CONTAINER_DOC/ {print $2}' ${UTIL_DIR}/${i})
                                                           ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.
                                                                       ^--^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
	DOC=$(awk -F:  '/^# OCM_CONTAINER_DOC/ {print $2}' "${UTIL_DIR}"/"${i}")


In ./utils/bin/ocm-login line 14:
if [ "x${OFFLINE_ACCESS_TOKEN}" == "x" ]; then
     ^------------------------^ SC2268 (style): Avoid x-prefix in comparisons as it no longer serves a purpose.

Did you mean:
if [ "${OFFLINE_ACCESS_TOKEN}" == "" ]; then


In ./utils/bin/ocm-login line 24:
"${CLI}" login --token=$OFFLINE_ACCESS_TOKEN ${LOGIN_ENV}=$OCM_URL
                       ^-------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
"${CLI}" login --token="$OFFLINE_ACCESS_TOKEN" ${LOGIN_ENV}=$OCM_URL


In ./utils/bin/ocm-login line 28:
    CLUSTER_DATA=$("${CLI}" describe cluster ${CLUSTERID} --json || exit )
                                             ^----------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    CLUSTER_DATA=$("${CLI}" describe cluster "${CLUSTERID}" --json || exit )


In ./utils/bin/ocm-login line 29:
    CLUSTER_API_TYPE=$( echo ${CLUSTER_DATA} | jq --raw-output .api.listening )
                             ^-------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    CLUSTER_API_TYPE=$( echo "${CLUSTER_DATA}" | jq --raw-output .api.listening )


In ./utils/bin/ocm-login line 30:
    CONSOLE_URL=$(echo ${CLUSTER_DATA} | jq --raw-output .console.url)
                       ^-------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    CONSOLE_URL=$(echo "${CLUSTER_DATA}" | jq --raw-output .console.url)


In ./utils/bin/ocm-login line 51:
    "${CLI}" cluster login ${CLUSTERID} --username ${OCM_USER} --token  \
                           ^----------^ SC2086 (info): Double quote to prevent globbing and word splitting.
                                                   ^---------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    "${CLI}" cluster login "${CLUSTERID}" --username "${OCM_USER}" --token  \


In ./utils/bin/ocm-login line 54:
    "${CLI}" list cluster ${OCM_LIST_ADDITIONAL_ARG}
                          ^------------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    "${CLI}" list cluster "${OCM_LIST_ADDITIONAL_ARG}"


In ./utils/bin/pause-hive-sync line 9:
        echo "Usage: $(basename ${0}) -c CLUSTER_NAME -s (on|off)"
                                ^--^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
        echo "Usage: $(basename "${0}") -c CLUSTER_NAME -s (on|off)"


In ./utils/bin/pause-hive-sync line 37:
    l ) LIST="true"
        ^--^ SC2034 (warning): LIST appears unused. Verify use (or export if used externally).


In ./utils/bin/pause-hive-sync line 49:
if test -z ${CLUSTER_DEPLOYMENT} ; then
           ^-------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if test -z "${CLUSTER_DEPLOYMENT}" ; then


In ./utils/bin/pause-hive-sync line 54:
if test -z ${PAUSE_STATE}; then
           ^------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if test -z "${PAUSE_STATE}"; then


In ./utils/bin/pause-hive-sync line 68:
CLUSTER_DEPLOYMENT_NAMESPACE="$(oc get clusterdeployment --all-namespaces --selector api.openshift.com/name=${CLUSTER_DEPLOYMENT} --output template --template='{{range .items}}{{.metadata.namespace}}{{end}}')"
                                                                                                            ^-------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
CLUSTER_DEPLOYMENT_NAMESPACE="$(oc get clusterdeployment --all-namespaces --selector api.openshift.com/name="${CLUSTER_DEPLOYMENT}" --output template --template='{{range .items}}{{.metadata.namespace}}{{end}}')"


In ./utils/bin/pause-hive-sync line 76:
        oc annotate clusterdeployment ${CLUSTER_DEPLOYMENT} -n ${CLUSTER_DEPLOYMENT_NAMESPACE} ${ANNOTATION}="true"  || _fail_exit "Something failed attempting to annotate clusterdeployment: \"oc annotate clusterdeployment ${CLUSTER_DEPLOYMENT} -n ${CLUSTER_DEPLOYMENT_NAMESPACE} ${ANNOTATION}=\"true\"\""
                                      ^-------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.
                                                               ^-----------------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
        oc annotate clusterdeployment "${CLUSTER_DEPLOYMENT}" -n "${CLUSTER_DEPLOYMENT_NAMESPACE}" ${ANNOTATION}="true"  || _fail_exit "Something failed attempting to annotate clusterdeployment: \"oc annotate clusterdeployment ${CLUSTER_DEPLOYMENT} -n ${CLUSTER_DEPLOYMENT_NAMESPACE} ${ANNOTATION}=\"true\"\""


In ./utils/bin/pause-hive-sync line 78:
        oc annotate clusterdeployment ${CLUSTER_DEPLOYMENT} -n ${CLUSTER_DEPLOYMENT_NAMESPACE} ${ANNOTATION}-  || _fail_exit "Something failed attempting to annotate clusterdeployment: \"oc annotate clusterdeployment ${CLUSTER_DEPLOYMENT} -n ${CLUSTER_DEPLOYMENT_NAMESPACE} ${ANNOTATION}-\""
                                      ^-------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.
                                                               ^-----------------------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
        oc annotate clusterdeployment "${CLUSTER_DEPLOYMENT}" -n "${CLUSTER_DEPLOYMENT_NAMESPACE}" ${ANNOTATION}-  || _fail_exit "Something failed attempting to annotate clusterdeployment: \"oc annotate clusterdeployment ${CLUSTER_DEPLOYMENT} -n ${CLUSTER_DEPLOYMENT_NAMESPACE} ${ANNOTATION}-\""


In ./utils/bin/sre-login line 4:
if [ -z $1 ]
        ^-- SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if [ -z "$1" ]


In ./utils/bin/sre-login line 19:
  if [ $(jq -r ".total" <<< $ocmjson) -eq 1 ]
       ^----------------------------^ SC2046 (warning): Quote this to prevent word splitting.
                            ^------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  if [ $(jq -r ".total" <<< "$ocmjson") -eq 1 ]


In ./utils/bin/sre-login line 21:
    echo $(jq ".items[0]" <<< $ocmjson)
         ^----------------------------^ SC2046 (warning): Quote this to prevent word splitting.
         ^----------------------------^ SC2005 (style): Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
                              ^------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    echo $(jq ".items[0]" <<< "$ocmjson")


In ./utils/bin/sre-login line 23:
  elif [ $(jq -r ".total" <<< $ocmjson) -gt 1 ]
         ^----------------------------^ SC2046 (warning): Quote this to prevent word splitting.
                              ^------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  elif [ $(jq -r ".total" <<< "$ocmjson") -gt 1 ]


In ./utils/bin/sre-login line 28:
    jq -r ".items[] | [.id, .name, .display_name] | @csv" <<< $ocmjson | tr -d "\"" | column -N "ID,NAME,DISPLAY_NAME" -t -s "," >&2
                                                              ^------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    jq -r ".items[] | [.id, .name, .display_name] | @csv" <<< "$ocmjson" | tr -d "\"" | column -N "ID,NAME,DISPLAY_NAME" -t -s "," >&2


In ./utils/bin/sre-login line 34:
  if [ $(jq -r ".total" <<< $ocmjson) -eq 1 ]
       ^----------------------------^ SC2046 (warning): Quote this to prevent word splitting.
                            ^------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  if [ $(jq -r ".total" <<< "$ocmjson") -eq 1 ]


In ./utils/bin/sre-login line 36:
    echo $(jq ".items[0]" <<< $ocmjson)
         ^----------------------------^ SC2046 (warning): Quote this to prevent word splitting.
         ^----------------------------^ SC2005 (style): Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
                              ^------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    echo $(jq ".items[0]" <<< "$ocmjson")


In ./utils/bin/sre-login line 42:
  if [ $(jq -r ".total" <<< $ocmjson) -eq 1 ]
       ^----------------------------^ SC2046 (warning): Quote this to prevent word splitting.
                            ^------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
  if [ $(jq -r ".total" <<< "$ocmjson") -eq 1 ]


In ./utils/bin/sre-login line 44:
    echo $(jq ".items[0]" <<< $ocmjson)
         ^----------------------------^ SC2046 (warning): Quote this to prevent word splitting.
         ^----------------------------^ SC2005 (style): Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
                              ^------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
    echo $(jq ".items[0]" <<< "$ocmjson")


In ./utils/bin/sre-login line 53:
clusterjson=$(get_cluster_json $1)
                               ^-- SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
clusterjson=$(get_cluster_json "$1")


In ./utils/bin/sre-login line 60:
if [ $cluster_listening == "internal" ]
     ^----------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if [ "$cluster_listening" == "internal" ]

For more information:
  https://www.shellcheck.net/wiki/SC1071 -- ShellCheck only supports sh/bash/...
  https://www.shellcheck.net/wiki/SC2145 -- Argument mixes string and array. ...
  https://www.shellcheck.net/wiki/SC1090 -- ShellCheck can't follow non-const...
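Most of the findings above are SC2086 (unquoted variable expansion). A minimal, self-contained demonstration of why the quoting matters — the filename and counts here are illustrative, not taken from the repository scripts:

```shell
#!/usr/bin/env bash
set -euo pipefail

file="my cluster notes.txt"   # a value containing spaces

# Unquoted: $file is split on whitespace into three separate arguments
count_unquoted=$(printf '%s\n' $file | wc -l)

# Quoted: "$file" is passed as a single argument
count_quoted=$(printf '%s\n' "$file" | wc -l)

echo "unquoted args: $count_unquoted, quoted args: $count_quoted"
```

The unquoted form produces three arguments, the quoted form one — which is exactly the class of bug the `Did you mean:` suggestions above are guarding against.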

Add an option to log directly into a manager or service cluster

As an SRE, I'd like to be able to pass a flag such as --manager or --service to ocm-container so that the logic for logging directly into a service or management cluster is baked into the tooling.

Ideally, if I were to run ocm-container -C $CLUSTER_ID --manager, the following environment would be available to me:

  • CLUSTER_ID would be set to the cluster ID that was passed
  • MC_ID would be set to the Manager Cluster ID
  • SC_ID would be set to the Service Cluster ID
  • HCP_NS would be set to ocm-$OCM_ENVIRONMENT-$CLUSTER_ID-$CLUSTER_NAME
  • HC_NS would be set to ocm-$OCM_ENVIRONMENT-$CLUSTER_ID
  • KUBELET_NS would be set to kubelet-$CLUSTER_ID
  • oc project would be auto-set to $HCP_NS

And if this were a classic ROSA cluster:

  • CLUSTER_ID would be set to the CLUSTER_ID that was passed
  • HIVE_NAME would be the display name of the hive
  • HIVE_NS would be uhc-$OCM_ENVIRONMENT-$CLUSTER_ID
  • oc project would be auto-set to $HIVE_NS

Essentially, the environment variables would be configured as if you had run ocm-container -C $CLUSTER_ID and then ocm backplane login $CLUSTER_ID --manager from within the same container: all of your environment-variable context is loaded for $CLUSTER_ID, but you are logged into the management/service cluster.
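The namespace naming described above can be sketched as follows. This is a hypothetical illustration of the proposed environment, not existing ocm-container behavior; the sample values for OCM_ENVIRONMENT, CLUSTER_ID, and CLUSTER_NAME are made up:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative inputs (assumptions, not real cluster data)
OCM_ENVIRONMENT="production"
CLUSTER_ID="1a2b3c"
CLUSTER_NAME="example-cluster"

# Hosted control plane (HyperShift) case:
HCP_NS="ocm-${OCM_ENVIRONMENT}-${CLUSTER_ID}-${CLUSTER_NAME}"
HC_NS="ocm-${OCM_ENVIRONMENT}-${CLUSTER_ID}"
KUBELET_NS="kubelet-${CLUSTER_ID}"

# Classic ROSA case:
HIVE_NS="uhc-${OCM_ENVIRONMENT}-${CLUSTER_ID}"

echo "HCP_NS=${HCP_NS} HC_NS=${HC_NS} KUBELET_NS=${KUBELET_NS} HIVE_NS=${HIVE_NS}"
```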
