
ibm / community-automation


community-automation is meant to be a place where developers can contribute to a library of ansible playbooks related to Red Hat OpenShift Container Platform, from installing OCP to post install activities. It is the intent that these playbooks and roles can be reused by any team that is looking to automate their CI/CD pipeline.

License: Apache License 2.0

Groovy 6.36% Shell 11.88% Python 23.13% Jinja 58.62%
ansible playbook roles

community-automation's Introduction

Red Hat OpenShift Container Platform (OCP) Community Automation

Introduction

This repo represents the Red Hat OpenShift Container Platform (OCP) Community Automation effort, where teams can contribute ansible automation to be shared with other teams. It was decided early in the project to use a combination of Jenkins and Ansible for our implementations; Jenkins and Ansible details are below. The repo is also set up for use as an ansible collection that can be included in playbooks created outside of this repo.

This community is meant to be a place where developers can contribute to a library of ansible playbooks and/or roles related to Red Hat OpenShift Container Platform (OCP), from installing OCP to post install activities. It is the intent that these playbooks and roles can be reused by any team that is looking to automate their CI/CD pipeline.

How to run playbooks

  • Clone this community-automation repository
  • Run options
    • Docker image
    • Importing the ansible collection (a requirements.yml sketch follows this list)
    • Personal workspace (requires the prereq scripts found in the scripts folder)
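
For the collection option, a minimal sketch of a requirements.yml that pulls this repo straight from git (Ansible 2.10+; the branch/ref shown is an assumption):

  # requirements.yml -- install the repo as an ansible collection directly from git
  collections:
    - name: https://github.com/IBM/community-automation.git
      type: git
      version: master
  # then:  ansible-galaxy collection install -r requirements.yml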

Documentation

community-automation's People

Contributors

cdjohnson, domstorey, gkertasef, gmarcy, ibmrob, jasonthink, joshho, markvardy, octravis, prajyot-parab, prattyushmangal, rahtr, rayashworth, rickettmwork, stevenschader, tomaley, tomlinkusa, websterc87, wkrapohl, xcliu-ca


community-automation's Issues

as an automation developer I need to create azure ansible automation playbook using ACM/HIVE to provide azure as a deployment option.

Is your feature request related to a problem? Please describe.
Create a Red Hat Ansible solution to deploy an Azure cluster using Red Hat ACM/HIVE.

Describe the solution you'd like
Deploy OCP Cluster to Azure infrastructure.
Solution will need to (see the vars sketch after this list):

  • follow the current ansible model
  • create an Azure YAML template
  • create parameters for Azure credentials, size of VMs (instances), Red Hat pull secret, size of cluster, cluster name, and OCP version
  • deal with storage
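
A hedged sketch of what that parameter set might look like as an Ansible vars file (every variable name below is hypothetical, for illustration only):

  # azure-vars.yml -- illustrative parameter set; all names are hypothetical
  azure_client_id: "xxxxxxxx"            # Azure credentials
  azure_client_secret: "xxxxxxxx"
  azure_subscription_id: "xxxxxxxx"
  azure_tenant_id: "xxxxxxxx"
  vm_size: "Standard_D8s_v3"             # size of VMs (instances)
  worker_count: 3                        # size of cluster
  cluster_name: "my-azure-ocp"
  ocp_version: "4.6"
  redhat_pull_secret: "{{ lookup('file', '~/pull-secret.json') }}"
  storage_class: "managed-premium"       # storage handling still to be designed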

Describe alternatives you've considered
N/A; this will replace the Terraform solution(s).

Additional context
Contact Ray Ashworth for details about ACM/HIVE.
ACM/HIVE login details available on request.

the image file names have changed for kernel and initramfs

Describe the bug
In the Red Hat image repositories the filenames now include "live".

To Reproduce
Steps to reproduce the behavior:

  1. Try to run the nightly custom option for fyre and OCP 4.6
  2. Observe that the stack fails; working with Charlie we determined that the files could not be found as coded.

Expected behavior
The cluster install starts and runs successfully.

Additional context
I have been testing a modification to the play; note the * change below:

  kernel_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{ rhcos_version_path }}/{{ rhcos_sha256sum.content.split() | select('match', '.*kernel-x86_64.*') | list | first }}"
  initramfs_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{ rhcos_version_path }}/{{ rhcos_sha256sum.content.split() | select('match', '.*initramfs.x86_64.*')  | list | first}}"
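
For context, a minimal sketch of how rhcos_sha256sum could be registered before those URLs are built (the role's actual task may differ; this assumes the mirror's sha256sum.txt listing is fetched with return_content enabled):

  # assumed task: fetch the RHCOS file listing so the filenames can be pattern-matched
  - name: Fetch RHCOS sha256sum.txt from the mirror
    uri:
      url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{ rhcos_version_path }}/sha256sum.txt"
      return_content: true
    register: rhcos_sha256sum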

as an automation developer I need to add jenkins automation for end to end deployment of clusters and add ons to help support community automation.

Is your feature request related to a problem? Please describe.
Create a Jenkins job that offers a list of options to deploy OCP.

  • OCP version
  • Storage (csi cephfs)
  • Common Services
  • Infrastructure should be considered when creating options (Fyre, Google, AWS, Azure, and vSphere)

Describe the solution you'd like
A Jenkins job that uses the existing community automation, noting needed updates with issues where necessary.

Describe alternatives you've considered
N/A; the current solution is command-line based.

Additional context
na

"error" not a status from deployment_status, change to "failed"

Describe the bug
The Ansible script does not stop if the deployment fails: the check for "error" never matches, so the until loop just continues.

Expected behavior
The playbook/role ends when the cluster deployment fails.

Additional context
Change the status string checked in the until loop from "error" to "failed".
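
A minimal sketch of the corrected retry loop (task names, the status variable, and the endpoint are illustrative, not the role's actual code):

  # illustrative: poll deployment status and stop when it reports "failed" (not "error")
  - name: Check cluster deployment status
    uri:
      url: "{{ deployment_status_url }}"   # hypothetical status endpoint
      return_content: true
    register: deployment_status
    until: deployment_status.json.status in ["deployed", "failed"]
    retries: 60
    delay: 60

  - name: Stop the play when the deployment reports failed
    fail:
      msg: "Cluster deployment failed"
    when: deployment_status.json.status == "failed"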

Unable to install new rook-ceph v1.5 releases using the csi-cephfs-fyre-play playbook

Describe the bug
I attempted to pass in the new rook-ceph v1.5.3 release and it consistently resulted in an error. On investigation, this is due to refactoring in rook-ceph in how the custom resource definitions (CRDs) are applied: in prior versions all CRDs were in the common.yaml file, but from the v1.5.0 release onward there is a separate file, crds.yaml, that must be applied so that all the necessary CRDs exist prior to the Ceph install.

Attached below is a link to a file from their v1.5.0 tagged release
https://github.com/rook/rook/blob/6b566f192cb814814719babeed5ff8c31ea02e22/cluster/examples/kubernetes/ceph/cluster.yaml#L7
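
A hedged sketch of the kind of ordering change needed (paths and the version test are illustrative; the role actually drives a shell script, so the real fix may belong there instead):

  # illustrative: rook-ceph >= v1.5.0 ships its CRDs in a separate crds.yaml that
  # must be applied before common.yaml
  - name: Apply rook-ceph CRDs (v1.5.0 and later only)
    command: oc apply -f rook/cluster/examples/kubernetes/ceph/crds.yaml
    when: rook_cephfs_release | regex_replace('^v', '') is version('1.5.0', '>=')

  - name: Apply rook-ceph common resources
    command: oc apply -f rook/cluster/examples/kubernetes/ceph/common.yaml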

To Reproduce
Steps to reproduce the behavior (code taken verbatim from the Jenkins Groovy script):

dir('ansible/csi-cephfs-fyre-play'){
    sh 'cp examples/inventory .'
    sh 'sed -i -e "s/fyre.root.pw/Ibm123ibm123!/g" inventory'
    sh 'sed -i -e "s/fyre.inf.node.9dot.ip/"'+DEPLOY_CLUSTER_IP+'"/g" inventory'
    sh 'cat inventory'
    sh 'ansible-playbook  -i inventory csi-cephfs.yml --extra-vars "rook_cephfs_release=v1.5.3"'
}

Expected behavior
For the installation of csi-cephfs to pass successfully and all rook-ceph pods to be in a good state.

Screenshots
Logs of the pods showed behaviour similar to the below
no matches for kind "CephCluster" in version "ceph.rook.io/v1"

Additional context
Attempting to install on a Quick Burn OpenShift 4.6.6 cluster with the large configuration.

Logs from Jenkins job
TASK [csi-cephfs-fyre : Install csi-cephfs] ************************************ fatal: [9.30.14.148]: FAILED! => {"changed": true, "cmd": "~/setup-files/ceph-setup/csi-ceph.sh v1.5.1 vdb", "delta": "0:00:07.949826", "end": "2020-12-12 08:48:14.306149", "msg": "non-zero return code", "rc": 1, "start": "2020-12-12 08:48:06.356323", "stderr": "Cloning into 'rook'...\nNote: switching to '364989a98645d47e73eddedefcbb55c5a8a2ee82'.\n\nYou are in 'detached HEAD' state. You can look around, make experimental\nchanges and commit them, and you can discard any commits you make in this\nstate without impacting any branches by switching back to a branch.\n\nIf you want to create a new branch to retain commits you create, you may\ndo so (now or later) by using -c with the switch command. Example:\n\n git switch -c <new-branch-name>\n\nOr undo this operation with:\n\n git switch -\n\nTurn off this advice by setting config variable advice.detachedHead to false\n\nerror: unable to recognize \"rook/cluster/examples/kubernetes/ceph/cluster.yaml\": no matches for kind \"CephCluster\" in version \"ceph.rook.io/v1\"\nerror: unable to recognize \"rook/cluster/examples/kubernetes/ceph/filesystem-test.yaml\": no matches for kind \"CephFilesystem\" in version \"ceph.rook.io/v1\"\nerror: unable to recognize \"rook/cluster/examples/kubernetes/ceph/csi/rbd/storageclass-test.yaml\": no matches for kind \"CephBlockPool\" in version \"ceph.rook.io/v1\"", "stderr_lines": ["Cloning into 'rook'...", "Note: switching to '364989a98645d47e73eddedefcbb55c5a8a2ee82'.", "", "You are in 'detached HEAD' state. You can look around, make experimental", "changes and commit them, and you can discard any commits you make in this", "state without impacting any branches by switching back to a branch.", "", "If you want to create a new branch to retain commits you create, you may", "do so (now or later) by using -c with the switch command. Example:", "", " git switch -c <new-branch-name>", "", "Or undo this operation with:", "", " git switch -", "", "Turn off this advice by setting config variable advice.detachedHead to false", "", "error: unable to recognize \"rook/cluster/examples/kubernetes/ceph/cluster.yaml\": no matches for kind \"CephCluster\" in version \"ceph.rook.io/v1\"", "error: unable to recognize \"rook/cluster/examples/kubernetes/ceph/filesystem-test.yaml\": no matches for kind \"CephFilesystem\" in version \"ceph.rook.io/v1\"", "error: unable to recognize \"rook/cluster/examples/kubernetes/ceph/csi/rbd/storageclass-test.yaml\": no matches for kind \"CephBlockPool\" in version \"ceph.rook.io/v1\""], "stdout": "Login successful.\n\nYou have access to 58 projects, the list has been suppressed. You can list all projects with ' projects'\n\nUsing project \"default\".\nWelcome! 
See 'oc help' to get started.\nDoing clone of rook release v1.5.1\nDoing common.yaml\nnamespace/rook-ceph created\nclusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created\nserviceaccount/rook-ceph-admission-controller created\nclusterrole.rbac.authorization.k8s.io/rook-ceph-admission-controller-role created\nclusterrolebinding.rbac.authorization.k8s.io/rook-ceph-admission-controller-rolebinding created\nclusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created\nrole.rbac.authorization.k8s.io/rook-ceph-system created\nclusterrole.rbac.authorization.k8s.io/rook-ceph-global created\nclusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created\nclusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created\nserviceaccount/rook-ceph-system created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-system created\nclusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created\nserviceaccount/rook-ceph-osd created\nserviceaccount/rook-ceph-mgr created\nserviceaccount/rook-ceph-cmd-reporter created\nrole.rbac.authorization.k8s.io/rook-ceph-osd created\nclusterrole.rbac.authorization.k8s.io/rook-ceph-osd created\nclusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created\nrole.rbac.authorization.k8s.io/rook-ceph-mgr created\nrole.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created\nclusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created\nclusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created\npodsecuritypolicy.policy/00-rook-privileged created\nclusterrole.rbac.authorization.k8s.io/psp:rook created\nclusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created\nrolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created\nserviceaccount/rook-csi-cephfs-plugin-sa created\nserviceaccount/rook-csi-cephfs-provisioner-sa created\nrole.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created\nrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created\nclusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created\nclusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created\nclusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created\nclusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created\nclusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created\nclusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created\nserviceaccount/rook-csi-rbd-plugin-sa created\nserviceaccount/rook-csi-rbd-provisioner-sa created\nrole.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created\nrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created\nclusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created\nclusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created\nclusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp 
created\nclusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created\nclusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created\nclusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created\ncommon.yaml exit 0\nDoing operator-openshift.yaml\nsecuritycontextconstraints.security.openshift.io/rook-ceph created\nsecuritycontextconstraints.security.openshift.io/rook-ceph-csi created\nconfigmap/rook-ceph-operator-config created\ndeployment.apps/rook-ceph-operator created\noperator-openshift.yaml exit 0\nDoing sed of useAllDevices false\nExit from useAllDevice 0\nDoing sed of deviceFilter\nExit from deviceFilter 0\nDoing cluster.yaml create\nExit from cluster.yaml 1\nDoing filessystem-test.yaml\nExit from filesystem-test.yaml 1\nstorageclass.storage.k8s.io/rook-cephfs created\nstorageclass.storage.k8s.io/csi-cephfs created\ndefault_storage_class is \nNo default storage class defined\nSet default storageclass to csi-cephfs\nstorageclass.storage.k8s.io/csi-cephfs patched\nstorageclass.storage.k8s.io/rook-ceph-block created", "stdout_lines": ["Login successful.", "", "You have access to 58 projects, the list has been suppressed. You can list all projects with ' projects'", "", "Using project \"default\".", "Welcome! See 'oc help' to get started.", "Doing clone of rook release v1.5.1", "Doing common.yaml", "namespace/rook-ceph created", "clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created", "serviceaccount/rook-ceph-admission-controller created", "clusterrole.rbac.authorization.k8s.io/rook-ceph-admission-controller-role created", "clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-admission-controller-rolebinding created", "clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created", "role.rbac.authorization.k8s.io/rook-ceph-system created", "clusterrole.rbac.authorization.k8s.io/rook-ceph-global created", "clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created", "clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created", "serviceaccount/rook-ceph-system created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-system created", "clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created", "serviceaccount/rook-ceph-osd created", "serviceaccount/rook-ceph-mgr created", "serviceaccount/rook-ceph-cmd-reporter created", "role.rbac.authorization.k8s.io/rook-ceph-osd created", "clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created", "clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created", "role.rbac.authorization.k8s.io/rook-ceph-mgr created", "role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created", "clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created", "clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created", "podsecuritypolicy.policy/00-rook-privileged created", "clusterrole.rbac.authorization.k8s.io/psp:rook created", "clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created", 
"rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created", "rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created", "serviceaccount/rook-csi-cephfs-plugin-sa created", "serviceaccount/rook-csi-cephfs-provisioner-sa created", "role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created", "rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created", "clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created", "clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created", "clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created", "clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created", "clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created", "clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created", "serviceaccount/rook-csi-rbd-plugin-sa created", "serviceaccount/rook-csi-rbd-provisioner-sa created", "role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created", "rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created", "clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created", "clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created", "clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created", "clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created", "clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created", "clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created", "common.yaml exit 0", "Doing operator-openshift.yaml", "securitycontextconstraints.security.openshift.io/rook-ceph created", "securitycontextconstraints.security.openshift.io/rook-ceph-csi created", "configmap/rook-ceph-operator-config created", "deployment.apps/rook-ceph-operator created", "operator-openshift.yaml exit 0", "Doing sed of useAllDevices false", "Exit from useAllDevice 0", "Doing sed of deviceFilter", "Exit from deviceFilter 0", "Doing cluster.yaml create", "Exit from cluster.yaml 1", "Doing filessystem-test.yaml", "Exit from filesystem-test.yaml 1", "storageclass.storage.k8s.io/rook-cephfs created", "storageclass.storage.k8s.io/csi-cephfs created", "default_storage_class is ", "No default storage class defined", "Set default storageclass to csi-cephfs", "storageclass.storage.k8s.io/csi-cephfs patched", "storageclass.storage.k8s.io/rook-ceph-block created"]}

BUG: vnc role is broken on RHEL 8.3

Describe the bug
08:58:35 TASK [vnc user config] *********************************************************

08:58:36 fatal: [jmeter-schader-1.fyre.ibm.com]: FAILED! => {"changed": true, "cmd": ["cp", "/usr/lib/systemd/user/[email protected]", "~/.config/systemd/user/"], "delta": "0:00:00.008595", "end": "2020-11-11 06:58:36.094595", "msg": "non-zero return code", "rc": 1, "start": "2020-11-11 06:58:36.086000", "stderr": "cp: cannot stat '/usr/lib/systemd/user/[email protected]': No such file or directory", "stderr_lines": ["cp: cannot stat '/usr/lib/systemd/user/[email protected]': No such file or directory"], "stdout": "", "stdout_lines": []}
08:58:36

failing job -> https://hyc-ibm-automation-guild-team-jenkins.swg-devops.com/job/cluster-ops/job/Jmeter-Fyre/job/JmeterFyreVM/17/console

To Reproduce
Steps to reproduce the behavior:

  1. run this job -> https://hyc-ibm-automation-guild-team-jenkins.swg-devops.com/job/cluster-ops/job/Jmeter-Fyre/job/JmeterFyreVM/, enable vnc

Expected behavior
Fyre host with jmeter and vnc enabled.


Error running the request-ocp-roks-play playbook to deploy a ROKS cluster

Describe the bug
I am using the request-ocp-roks play to deploy a ROKS cluster. I have followed the instructions in the playbook's README and populated roks-vars.yaml with the details.

I am seeing this error:

TASK [request-ocp-roks : get-resource-group] *********************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "", "rc": 1, "resource": {"_name": "cloudpaksvt-user-ci-cd", "_type": "ibm_resource_group", "target": "ibm_resource_group.cloudpaksvt-user-ci-cd"}, "stderr": "\nError: Error occured while fetching account user details: \"token contains an invalid number of segments\"\n\n  on ibm_resource_group_cloudpaksvt-user-ci-cd.tf line 1, in data \"ibm_resource_group\" \"cloudpaksvt-user-ci-cd\":\n   1: data ibm_resource_group \"cloudpaksvt-user-ci-cd\" {\n\n\n", "stderr_lines": ["", "Error: Error occured while fetching account user details: \"token contains an invalid number of segments\"", "", "  on ibm_resource_group_cloudpaksvt-user-ci-cd.tf line 1, in data \"ibm_resource_group\" \"cloudpaksvt-user-ci-cd\":", "   1: data ibm_resource_group \"cloudpaksvt-user-ci-cd\" {", "", ""], "stdout": "data.ibm_resource_group.cloudpaksvt-user-ci-cd: Refreshing state...\n", "stdout_lines": ["data.ibm_resource_group.cloudpaksvt-user-ci-cd: Refreshing state..."]}

PLAY RECAP *******************************************************************************************************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

This would suggest that the IBM Cloud API key I am using to log in is wrong, but when I run ibmcloud login --apikey <apikey> the login succeeds and it shows me the required resource group.

Expected behavior
Provided the same login details, the playbook should be able to run the task to get the given resource group.

Desktop (please complete the following information):

  • OS: Linux

EPIC - rename ansible roles in order to create a galaxy collection

List of roles that need to be renamed with '_' instead of '-' in the role name (a rename sketch follows the list).

Check off as completed.

  • aws-cli-install
  • aws-route53
  • azure-cli-install
  • common-services
  • common-services-cat-src-inst
  • csi-cephfs-fyre
  • deploy-ova-vmware
  • git-install-fyre
  • google-cli-install
  • oc-client-install
  • ocp-cluster-tag
  • ocp-login
  • provision-ocp-cluster
  • python-install-fyre
  • recover-epxired-certificates
  • recover-machine-config
  • request-ocp-aws
  • request-ocp-fyre
  • request-ocp-roks
  • request-ocp4-logging
  • request-ocpplus-cluster-transfer-fyre
  • request-ocs
  • request-ocs-local-storage
  • start-aws-cluster
  • stop-aws-cluster
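
A throwaway sketch of how the renames could be scripted from the repo root (illustrative only; git mv is used so history is preserved):

  # illustrative one-off play: rename ansible/roles/* directories from '-' to '_'
  - hosts: localhost
    gather_facts: false
    tasks:
      - name: Find role directories containing '-'
        find:
          paths: ansible/roles
          file_type: directory
          patterns: "*-*"
        register: dashed_roles

      - name: Rename each role with git mv
        command: git mv "{{ item.path }}" "ansible/roles/{{ item.path | basename | replace('-', '_') }}"
        loop: "{{ dashed_roles.files }}"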

As a test engineer I need to add a Role & Playbook for provisioning a StorageClass (SC) using nfs-subdir-external-provisioner

Is your feature request related to a problem? Please describe.
Currently we have a RookCeph playbook which deploys a Rook cluster on an OCP environment. This takes time and at times is unstable.

nfs-subdir-external-provisioner is a Kubernetes SIG project (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner) which uses an NFS server to provide automatic provisioning of storage classes on OCP private clusters. This would enable quick onboarding/use of storage classes for deploying Cloud Paks.

Describe the solution you'd like
An NFS role & playbook that deploys the NFS auto-provisioner:
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
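
A hedged sketch of what the role's main task could look like if the upstream Helm chart is used (the chart repo URL and value names come from the kubernetes-sigs project docs and should be verified; nfs_server and nfs_export_path are assumed variables):

  # illustrative: deploy nfs-subdir-external-provisioner from its upstream Helm chart
  - name: Add the nfs-subdir-external-provisioner chart repository
    kubernetes.core.helm_repository:
      name: nfs-subdir-external-provisioner
      repo_url: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

  - name: Install the provisioner and create a default StorageClass
    kubernetes.core.helm:
      name: nfs-subdir-external-provisioner
      chart_ref: nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
      release_namespace: nfs-provisioner
      create_namespace: true
      values:
        nfs:
          server: "{{ nfs_server }}"         # assumed variable
          path: "{{ nfs_export_path }}"      # assumed variable
        storageClass:
          name: managed-nfs-storage
          defaultClass: true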

rook storage class issue on ocp 4.5

Hello,
We are using the guidance here to create storage classes on fresh OCP 4.x clusters for use with the IBM Redis operator.
https://playbook.cloudpaklab.ibm.com/configure-rook-cephfs-on-openshift-4/

I am not sure exactly how it works, but it seems the YAML files mentioned in the link above create different storage classes depending on the OCP cluster version. For example, on 4.5 it created these:

csi-cephfs (default)
rook-ceph-block
rook-cephfs

But on a 4.3 cluster it created these:

rook-ceph-block-internal
rook-ceph-cephfs-internal
rook-ceph-delete-bucket-internal

IBM Redis has been working fine with rook-ceph-cephfs-internal on OCP 4.3.
But none of the new storage classes seem to work for IBM Redis instances on OCP 4.5.
Whenever I try to use any of the storage classes on a 4.5 cluster, I keep getting volume mount errors:
(Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[conf data rqa-rqa-redis-token-pzbrv]: timed out waiting for the condition)

Could someone kindly provide more clarity on why there are different storage classes on OCP 4.3 and 4.5, and what the difference is among them that is failing volume mounts for Redis instances?

as an automation developer I want to create FYRE ansible automation to install Openshift Container Storage (OCS) so that developers can have OCS storage on fyre.

Is your feature request related to a problem? Please describe.
We currently have csi-cephfs automation for Fyre. We need to create automation to install Red Hat OpenShift Container Storage (OCS) on Fyre.

Describe the solution you'd like
OCP+ installs come with an extra drive available. The solution should install OCS using these extra drives.

Describe alternatives you've considered
csi-cephfs

Additional context
Some investigation is required to ensure Red Hat will support this install.

csi-cephfs-fyre-play appears to be broken

Describe the bug
After pulling the latest master, the csi-cephfs-fyre-play playbook no longer completes successfully.

It appears there was a widespread rename/refactor recently.

To Reproduce
Follow https://github.com/IBM/community-automation/tree/master/ansible/csi-cephfs-fyre-play#run-playbook, and the following error is seen:

TASK [csi_cephfs_fyre] ************************************************************************************************************************************************************************************************************

TASK [csi_cephfs_fyre : include_tasks] ********************************************************************************************************************************************************************************************
fatal: [9.30.227.81]: FAILED! => {"reason": "Could not find or access '/Users/cgiroua/git/github.com/IBM/community-automation/ansible/csi-cephfs-fyre-play/csi-cephfs-fyre.yaml' on the Ansible Controller."}

PLAY RECAP ************************************************************************************************************************************************************************************************************************
9.30.227.81                : ok=7    changed=2    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

Expected behavior
Playbook/ceph installation to complete successfully.

Desktop (please complete the following information):

  • OS: iOS

Ansible script fails with Bad configuration option: usekeychain\r\n/root/.ssh/config:

Describe the bug
Ansible script fails with "Failed to connect to the host via ssh: /root/.ssh/config: line 3: Bad configuration option: usekeychain\r\n/root/.ssh/config: terminating, 1 bad configuration options", "unreachable":

To Reproduce
Steps to reproduce the behavior:

  1. I ran the request-ocp-fyre-play Ansible playbook and the cluster was created successfully, but the script fails with the error above.
    ansible-playbook -i inventory request-ocp-fyre-play.yml -e "clusterName=pacp4d-4" -e "ocpVersion=4.6.9" -e "fyre_ocptype=ocpplus" -e @ocp_vars.yml

  2. I ran the Ansible script request-ocp-ceph-fyre-play against the cluster I provisioned in step 1, and it failed with the same error.

ansible-playbook  -i inventory request-ocp-ceph.yml -e "clusterName=pacp4d-4" -e "ocpVersion=4.6.9"

PLAY [Install OCP 4.x onto OCP+Beta Fyre cluster] *************************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [request_ocp_fyre : import tasks] ************************************************************************************************************************************************************************
included: /community-automation/ansible/roles/request_ocp_fyre/tasks/request-ocpplus-fyre.yml for localhost

TASK [request_ocp_fyre : Check OCP Existance] *****************************************************************************************************************************************************************
ok: [localhost]

TASK [request_ocp_fyre : debug] *******************************************************************************************************************************************************************************
skipping: [localhost]

TASK [request_ocp_fyre : Create OCPPlus in Fyre (Custom URLs)] ************************************************************************************************************************************************
skipping: [localhost]

TASK [request_ocp_fyre : Create OCPPlus in Fyre] **************************************************************************************************************************************************************
skipping: [localhost]

TASK [request_ocp_fyre : check fyre status] *******************************************************************************************************************************************************************
skipping: [localhost]

TASK [request_ocp_fyre : pause] *******************************************************************************************************************************************************************************
skipping: [localhost]

TASK [request_ocp_fyre : check fyrestatus for error] **********************************************************************************************************************************************************
ok: [localhost]

TASK [request_ocp_fyre : check for error status] **************************************************************************************************************************************************************
skipping: [localhost]

TASK [request_ocp_fyre : check that all fyre nodes have a deployed status] ************************************************************************************************************************************
ok: [localhost]

TASK [request_ocp_fyre : check for error status after loop check] *********************************************************************************************************************************************
skipping: [localhost]

TASK [request_ocp_fyre : Derive Info from Fyre Api] ***********************************************************************************************************************************************************
ok: [localhost]

TASK [request_ocp_fyre : remove new host from localhost known_hosts ip] ***************************************************************************************************************************************
ok: [localhost]

TASK [request_ocp_fyre : remove new host from localhost known_hosts] ******************************************************************************************************************************************
ok: [localhost]

TASK [request_ocp_fyre : remove new host from localhost known_hosts fqdn] *************************************************************************************************************************************
ok: [localhost]

TASK [request_ocp_fyre : Add inf host to group] ***************************************************************************************************************************************************************
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
ok: [localhost]

PLAY [Install csi-cephfs onto OCP+Beta Fyre cluster] **********************************************************************************************************************************************************

TASK [git_install_fyre : include_tasks] ***********************************************************************************************************************************************************************
included: /community-automation/ansible/roles/git_install_fyre/tasks/git_install_fyre.yaml for 9.30.227.24

TASK [git_install_fyre : Install git on fyre inf node] ********************************************************************************************************************************************************
fatal: [9.30.227.24]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: /root/.ssh/config: line 3: Bad configuration option: usekeychain\r\n/root/.ssh/config: terminating, 1 bad configuration options", "unreachable": true}

PLAY RECAP ****************************************************************************************************************************************************************************************************
9.30.227.24                : ok=1    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0   
localhost                  : ok=10   changed=0    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0   

Expected behavior
Script completes with no failures.

Desktop (please complete the following information):

  • OS: macOS Catalina 10.15.5
  • Browser: n/a


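A possible workaround, not something this repo documents: OpenSSH on Linux does not recognize the macOS-only UseKeychain keyword, so the controller's ssh config can be told to ignore it rather than abort, e.g. with a task like this sketch:

  # illustrative: make OpenSSH ignore the macOS-only "UseKeychain" keyword
  - name: Ignore UseKeychain in the controller's ssh config
    lineinfile:
      path: /root/.ssh/config
      insertbefore: BOF
      line: "IgnoreUnknown UseKeychain"
    delegate_to: localhost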

as an automation engineer I need to update the request-ocp-fyre-play ansible playbook to include new URL so fyre deployments can continue.

Describe the bug
Fyre deployments were broken due to a new URL required for deploying OCP on bare metal.

To Reproduce
Steps to reproduce the behavior:

  1. attempt fyre ocp deployment
  2. Check the Fyre stack and you will see it failed.

Expected behavior
Successful deployment of an OCP cluster.

Additional context
Here is a sample of the new URL to be included in the ansible role

  "rootfs_url": "https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.6/46.82.202010011740-0/x86_64/rhcos-46.82.202010011740-0-live-rootfs.x86_64.img",

As an automation developer I need to create AWS ansible playbook automation to deploy AWS clusters using ACM/HIVE to offer an AWS solution via community automation.

Is your feature request related to a problem? Please describe.
Create a Red Hat Ansible solution to deploy an AWS cluster using Red Hat ACM/HIVE.

Describe the solution you'd like
Deploy OCP Cluster to AWS infrastructure.
Solution will need to:

  • follow the current ansible model
  • create an AWS YAML template (sample template to be provided by Ray Ashworth)
  • create parameters for AWS_ACCESS_ID, AWS_SECRET_ACCESS_KEY, size of VMs (instances), Red Hat pull secret, size of cluster, cluster name, and OCP version
  • deal with storage (gp2, OCS?) as a post-install activity

Describe alternatives you've considered
N/A; this will replace the Terraform solution(s).

Additional context
Contact Ray Ashworth for details about ACM/HIVE.
ACM/HIVE login details available on request.

Missing script in request-ocs-local-storage

I'm looking at trying to use request-ocs-local-storage

The request-ocs-local-storage.yml contains this stanza:

- name: Install local storage operator
  shell: bash -lc "{{ ocs_bastion_setup_dir }}/02.install-local-storage-operator.sh"
  args:
      warn: false
  register: localstore

However the referenced script 02.install-local-storage-operator.sh does not seem to exist:

[root@kawkawlin-inf files]# ls
00.check-cpus.sh   01.install-ocs-operator.sh   04.install-storage-cluster.sh  46.local-volumes-discovery.yaml  46.local-volume-set.yaml
00.label-nodes.sh  03.install-local-volumes.sh  46.local-volumes-discovery.sh  46.local-volume-set.sh 
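
Until the script lands in the repo, a small guard ahead of that stanza would at least fail with a clear message (a sketch, not part of the current role):

  # illustrative guard: verify the script exists before shelling out to it
  - name: Check that the local-storage operator install script is present
    stat:
      path: "{{ ocs_bastion_setup_dir }}/02.install-local-storage-operator.sh"
    register: local_storage_script

  - name: Fail early with a clear message when the script is missing
    fail:
      msg: "02.install-local-storage-operator.sh was not found in {{ ocs_bastion_setup_dir }}"
    when: not local_storage_script.stat.exists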


As an automation developer I need to create "vSphere" ansible playbook automation to deploy clusters using ACM/HIVE to offer a "vSphere" solution via community automation

Is your feature request related to a problem? Please describe.
Create a Red Hat Ansible solution to deploy a vSphere cluster using Red Hat ACM/HIVE.

Describe the solution you'd like
Deploy OCP Cluster to vSphere infrastructure.
A vSphere IPI install solution is currently being worked on by Walter Krapohl and Ray Ashworth.

Describe alternatives you've considered
N/A; this will replace the Terraform solution(s).

Additional context
Contact Ray Ashworth for details about ACM/HIVE.
ACM/HIVE login details available on request.

As an automation developer I would like to automatically deploy cloud databases (like PostgreSQL)

Describe the solution you'd like
Databases and LDAP servers are needed as prerequisites for CP4A deployment.
Automatic deployment of cloud databases, such as PostgreSQL on Google Cloud, would be beneficial for E2E automation.

Describe alternatives you've considered
It might be that down the road the Base Pak deployment will allow the selection of a database to be incorporated.


request-ocs-fyre-play fails when run against a Linux/Z fyre openshift cluster (4.6.16)

I provisioned a cluster through Fyre and it came with no default storage provider, so I tried to install one using the request-ocs-fyre-play playbook. It fails to complete with "noobaa storageclass never became available". This is on Linux/Z; I need to try Linux/P sometime too. The same instructions work fine on Linux/X.

To Reproduce

  # Change to the playbook to install ocs
  cd community-automation/ansible/request-ocs-fyre-play
  # We are on the inf node so oc is local, set up simple inventory
  cp examples/inventory_local inventory
  sudo /usr/local/bin/ansible-playbook  -i inventory request-ocs-fyre.yml 2>&1 | tee ${WORKSPACE}/ocs_install.log

Expected behavior
Expected OCS storage classes to become available (they do when the same playbook is run on Linux/X).

Additional context
playbook.log

Brew install sshpass on Mac not working

Describe the bug
Running brew install http://git.io/sshpass.rb from the Installing Ansible README produces the error:

Error: Calling Non-checksummed download of sshpass formula file from an arbitrary URL is disabled! Use 'brew extract' or 'brew create' and 'brew tap-new' to create a formula file in a tap on GitHub instead.

To Reproduce
Steps to reproduce the behavior:

  1. Go to your Mac terminal
  2. Run brew install http://git.io/sshpass.rb

Expected behavior
Installs sshpass

Desktop (please complete the following information):

  • OS: macOS Catalina 10.15.6
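
A possible workaround, expressed as an Ansible task since the repo's prereqs are Ansible-driven (the tap shown is a community tap, not something this repo documents, so treat it as an assumption):

  # illustrative macOS workaround: install sshpass from a community tap instead of
  # the now-blocked arbitrary-URL formula install
  - name: Tap a community repository that carries an sshpass formula
    community.general.homebrew_tap:
      name: hudochenkov/sshpass          # assumed tap; verify before relying on it

  - name: Install sshpass from the tapped formula
    community.general.homebrew:
      name: hudochenkov/sshpass/sshpass
      state: present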

csi-cephfs-fyre installation fails based on two different conditions

The installation fails if the Python executable is not found or the host fingerprint is not in known_hosts, both of which are issues for new clusters.
To Reproduce

  1. Create a new cluster through Fyre
  2. Attempt the installation through ansible-playbook

Expected behavior
Clearly stated prerequisites that the infrastructure node has Python installed and is added to known_hosts; a sketch of such pre-tasks follows.
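
A sketch of the kind of pre-tasks that would make those prerequisites explicit (module choices are illustrative, not what the play currently does):

  # illustrative pre-tasks for a freshly created Fyre cluster
  - name: Ensure python3 exists on the inf node (raw works before Python is present)
    raw: test -e /usr/bin/python3 || yum install -y python3
    changed_when: false

  - name: Add the inf node's host key to the controller's known_hosts
    known_hosts:
      name: "{{ inventory_hostname }}"
      key: "{{ lookup('pipe', 'ssh-keyscan -t rsa ' + inventory_hostname) }}"
    delegate_to: localhost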

as an openshift automation developer I need to be able to uninstall common services to be able to clean up my cluster installation

Is your feature request related to a problem? Please describe.
This automation currently allows for installing common services; it would be nice if it could also uninstall common services to set the cluster back to a state without common services.

Describe the solution you'd like
Create a role for uninstalling common services

Describe alternatives you've considered
none

Additional context
n/a

ceph storage playbook failed to configure due to "unreachable=1" failure

Describe the bug
We are trying to configure rook-ceph storage and it is failing with the following error. It looks like the infrastructure node is not reachable, which causes the failure.

TASK [csi-cephfs-fyre : Wait for ceph to finish pod bring-up] ***************************************************************************************************************************************************************************************************************************
fatal: [9.30.91.186]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 9.30.91.186 closed.", "unreachable": true}
PLAY RECAP ******************************************************************************************************************************************************************************************************************************************************************************
9.30.91.186                : ok=13   changed=6    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0   
+ '[' 1 -eq 0 ']'
+ log 'ERROR: Ceph storage configuration failed.'
+ local 'msg=ERROR: Ceph storage configuration failed.'
++ date +%Y-%m-%dT%H:%M:%S%z
+ echo '2021-01-11T13:35:12+0000: ERROR: Ceph storage configuration failed.'
2021-01-11T13:35:12+0000: ERROR: Ceph storage configuration failed.

To Reproduce
Steps to reproduce the behavior:

  1. Create OCP 4.6.6 cluster
  2. Invoke the ceph storage playbook and it fails with the above error.

Expected behavior
rook-ceph storage should configure successfully.

Desktop (please complete the following information):

  • OS: 10.15.7

Problems running csi-cephfs-fyre playbook against a new Fyre cluster (fails with not having a Red Hat subscription)

Describe the bug
I've been using playbook https://github.com/IBM/community-automation/tree/master/ansible/csi-cephfs-fyre-play to add storage to temporary Fyre clusters we use for testing.
This has been working ok for me on my workstation using a git commit from 1 October (which I haven't updated for a while).
A colleague has just recently cloned this repo and picked up some updates. He ran the playbook against a new cluster and got an error earlier today...

TASK [python-install-fyre : Install python on fyre inf node] *************************************************************************************************************************************
fatal: [9.30.209.102]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 1, "stderr": "Warning: Permanently added '9.30.209.102' (ECDSA) to the list of known hosts.\r\nShared connection to 9.30.209.102 closed.\r\n", "stderr_lines": ["Warning: Permanently added '9.30.209.102' (ECDSA) to the list of known hosts.", "Shared connection to 9.30.209.102 closed."], "stdout": "Updating Subscription Management repositories.\r\nUnable to read consumer identity\r\n\r\nThis system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.\r\n\r\nLast metadata expiration check: 0:00:18 ago on Fri 13 Nov 2020 02:07:02 AM PST.\r\nError: \r\n Problem: cannot install the best update candidate for package libidn2-2.2.0-1.el8.x86_64\r\n  - nothing provides libunistring.so.0()(64bit) needed by libidn2-2.3.0-1.el6.x86_64\r\n(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)\r\n", "stdout_lines": ["Updating Subscription Management repositories.", "Unable to read consumer identity", "", "This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.", "", "Last metadata expiration check: 0:00:18 ago on Fri 13 Nov 2020 02:07:02 AM PST.", "Error: ", " Problem: cannot install the best update candidate for package libidn2-2.2.0-1.el8.x86_64", "  - nothing provides libunistring.so.0()(64bit) needed by libidn2-2.3.0-1.el6.x86_64", "(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)"]}

I believe the playbook behaviour was changed recently to install python3 on the infra node before doing other work... which only works if you have an active Red Hat subscription, which our Fyre clusters typically don't have (I think). I'm guessing that is what went wrong here?
Confusingly, when I tried this myself against a quickburn cluster, it did not seem to show the problem. I don't understand why.


As an automation developer I need to add a Role & Playbook for OpenLdap Deployment.

Is your feature request related to a problem? Please describe.
As most of the CP4BA products use LDAP, it would be good to have an OpenLDAP role. Also, moving forward with the common login via IAM, this LDAP would be useful for test setups.

Describe the solution you'd like
An OpenLDAP role & playbook that deploys OpenLDAP on OCP clusters.

As an automation developer I would like to automatically install Foundation layer

Describe the solution you'd like
With the new cloud pak strategy, the cloud paks will become cartridges to be deployed on top of a foundation layer.
Automatic deployment of the foundation layer would be very beneficial for all the cloud paks.

Describe alternatives you've considered
It could be that the cloud pak operator will automatically call the foundation operator.


"toomanyrequests" docker pull rate limit causes playbook to fail

Is your feature request related to a problem? Please describe.
Sometime late last year Docker imposed a pull rate limit; once it is hit, the ceph image cannot be pulled and the playbook fails.

Describe the solution you'd like
I was able to get past this by adding my Docker credentials to the global pull secret of my cluster, though I'm sure there is a better way to do this. The "adding" of the Docker creds to the existing global pull secret is important: the OCP docs describe creating a global pull secret, and if you do that you overwrite the entire thing, leaving the cluster unable to pull from the Red Hat registry.
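
A hedged sketch of that "adding" approach as Ansible tasks (workdir, docker_username and docker_password are assumed variables, and whether docker.io or registry-1.docker.io is the right auth key should be verified):

  # illustrative: append docker.io credentials to the existing global pull secret
  # instead of replacing it (replacing wipes the Red Hat registry auths)
  - name: Extract the current global pull secret
    shell: oc extract secret/pull-secret -n openshift-config --to={{ workdir }} --confirm

  - name: Merge docker.io credentials into the pull secret
    shell: >
      jq --arg auth "{{ (docker_username + ':' + docker_password) | b64encode }}"
      '.auths["docker.io"] = {auth: $auth}'
      {{ workdir }}/.dockerconfigjson > {{ workdir }}/.dockerconfigjson.new

  - name: Apply the merged pull secret back to the cluster
    shell: >
      oc set data secret/pull-secret -n openshift-config
      --from-file=.dockerconfigjson={{ workdir }}/.dockerconfigjson.new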

Describe alternatives you've considered
I'm sure there's a better alternative that doesn't touch the global pull secret, though. Perhaps a flag or something that can optionally add the Docker creds to a pull secret.

Additional context
This is a somewhat weird failure, as it doesn't show up as such when the playbook exits; if you're not paying close attention you might think the run completed successfully.

TASK [csi-cephfs-fyre : Viewing wait-for-csi-cephfs pods to go to Running Log] ***********************************************
ok: [9.30.13.90] => {
    "msg": [
        "Waiting for rook-ceph-mds-myfs pods to start",
        "... (same message repeated ~30 times in the original output) ...",
        "Waiting for rook-ceph-mds-myfs pods to start"
    ]
}

PLAY RECAP *******************************************************************************************************************
9.30.13.90                 : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

I only noticed because all of these "Waiting for rook-ceph-mds-myfs pods to start" lines show up in green text.

common-service-play: Automate db2u (db2-warehouse) operator install for ICS 3.7.1

Is your feature request related to a problem? Please describe.
It would be nice to have the option of installing the db2u (db2-warehouse) operator alongside the common services 3.7.1 install. This is a new capability of ICS 3.7.1 that wasn't possible in past versions of ICS.

Describe the solution you'd like
Add another option to cs_action in the common-service-play folder that lets the user specify that they want ICS 3.7.1 (the latest as of this moment) installed with db2u. The steps in the link below show that another catalog source must be created for db2u before applying the operand request, in which we can then include ibm-db2u-operator.

Additional context
https://www.ibm.com/support/knowledgecenter/SSHKN6/installer/3.x.x/install_cs_cli.html
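
A hedged sketch of the extra CatalogSource that has to exist before the operand request (the image reference is a placeholder, not the real db2u catalog image; take the actual value from the linked steps):

  # illustrative CatalogSource for the db2u operator; the image is a placeholder
  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    name: ibm-db2uoperator-catalog
    namespace: openshift-marketplace
  spec:
    sourceType: grpc
    image: icr.io/REPLACE_WITH_DB2U_CATALOG_IMAGE   # placeholder
    displayName: IBM Db2U Catalog
    publisher: IBM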

Create OCS play/role for installation based on the Local Storage Operator for Fyre and VMware.

  1. Create an ansible role that installs OCS using the local storage operator.
     • Supports installing OCS 4.5 and Local Storage operator 4.5 on OCP 4.4 and 4.5 clusters.
     • Supports installing OCS 4.6 and Local Storage operator 4.6 using the new device discovery functions on OCP 4.6 or newer clusters.
  2. Create a play that uses this role to install OCS 4.5/4.6 on Fyre OCP 4.4, 4.5 and 4.6 clusters.
  3. Create a play that uses this role to install OCS 4.5/4.6 on VMware OCP 4.4, 4.5 and 4.6 clusters that have workers with attached devices (i.e. /dev/sdb).

Add oc command check before downloading and installing oc client

Is your feature request related to a problem? Please describe.
Update the oc client install role to check for the existence of the oc command before downloading and installing.

Describe the solution you'd like
Add a check for the oc command; a minimal sketch follows.
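
A minimal sketch of such a check (illustrative; the role's real task names and file layout will differ):

  # illustrative: only download/install the oc client when it is not already present
  - name: Check whether the oc command is already available
    command: which oc
    register: oc_check
    failed_when: false
    changed_when: false

  - name: Download and install the oc client
    include_tasks: install-oc-client.yml   # hypothetical existing install tasks
    when: oc_check.rc != 0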

Describe alternatives you've considered
na

Additional context
na

as an automation developer I need to create google ansible playbook automation using ACM/HIVE to provide google as a deployment option.

Is your feature request related to a problem? Please describe.
Create a Red Hat Ansible solution to deploy a Google cluster using Red Hat ACM/HIVE.

Describe the solution you'd like
Deploy OCP Cluster to Google infrastructure.
Solution will need to:

  • follow the current ansible model
  • create a Google YAML template
  • create parameters for Google credentials, size of VMs (instances), Red Hat pull secret, size of cluster, cluster name, and OCP version
  • deal with storage

Describe alternatives you've considered
N/A; this will replace the Terraform solution(s).

Additional context
Contact Ray Ashworth for details about ACM/HIVE.
ACM/HIVE login details available on request.

As an automation developer I would like to consume the shared ansible roles from a private repo using ansible collections

Is your feature request related to a problem? Please describe.
Currently I maintain separate, identical copies of a number of roles, including the Fyre OCP spin-ups. Any time a change is made it needs to be manually copied from one git repo to the other. Changes aren't made often, but having multiple copies feels smelly.

Describe the solution you'd like
I would recommend investigating https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#installing-a-collection-from-a-git-repository. Ansible 2.10 added support for consuming a collection straight from a git repo.

Ideally this would let us consume the roles in another repo by referencing the collection at a given level.
I suspect it might just be a matter of adding a galaxy.yml to this repo and documenting how it can be consumed from other ansible; a sketch follows.
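
A minimal sketch of that galaxy.yml (namespace, name and version are placeholders to be agreed on):

  # illustrative galaxy.yml -- collection metadata so the repo can be consumed
  # via ansible-galaxy / requirements.yml; values are placeholders
  namespace: ibm
  name: community_automation
  version: 1.0.0
  readme: README.md
  authors:
    - IBM community-automation contributors
  description: Shared ansible roles and playbooks for OCP install and post-install automation
  license:
    - Apache-2.0
  repository: https://github.com/IBM/community-automation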

New play that just installs a Fyre OCP+Beta cluster with cephfs installed.

Is your feature request related to a problem? Please describe.
During a development team meeting, one developer said the reason for not moving to the Fyre OCP+Beta cluster was that cephfs is not automatically installed on these clusters, unlike the Ember way of installing on Fyre.

Create a play that installs a Fyre OCP+Beta cluster and cephfs; a sketch follows.
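
A hedged sketch of what the combined play could look like (playbook paths follow the existing plays referenced elsewhere in this repo and may need adjusting):

  # illustrative wrapper play, relative to the ansible/ directory
  - import_playbook: request-ocp-fyre-play/request-ocp-fyre-play.yml
  - import_playbook: csi-cephfs-fyre-play/csi-cephfs.yml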
