
Red Hat OpenShift Service on AWS (ROSA) Command Line Tool

This project contains the rosa command line tool that simplifies the use of Red Hat OpenShift Service on AWS, also known as ROSA.

Quickstart guide

Refer to the official ROSA documentation: https://access.redhat.com/products/red-hat-openshift-service-aws

  1. Follow the AWS Command Line Interface documentation to install and configure the AWS CLI for your operating system.
  2. Download the latest release of rosa and add it to your PATH.
  3. Initialize your AWS account by running rosa init and following the instructions.
  4. Create your first ROSA cluster by running rosa create cluster --interactive
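Steps 3 and 4 above look like this in practice (an illustrative session; output omitted):

```
$ rosa init
$ rosa create cluster --interactive
```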

Build from source

If you'd like to build this project from source, use the following steps:

  1. Clone the repository:
git clone https://github.com/openshift/rosa.git
  2. cd into the checked-out source directory:
cd rosa
  3. Install the binary:
make install

NOTE: If you don't have $GOPATH/bin in your $PATH, you need to add it or move rosa to a standard system directory, e.g. for Linux/macOS:

sudo mv $GOPATH/bin/rosa /usr/local/bin
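To check whether $GOPATH/bin is already on your PATH before moving the binary, a minimal POSIX sketch (assumes the Go default of ~/go when GOPATH is unset):

```shell
# Determine GOPATH/bin, falling back to the Go default of ~/go
gopath_bin="${GOPATH:-$HOME/go}/bin"

# PATH entries are colon-separated; wrap both sides in colons so we
# only match whole entries
case ":$PATH:" in
  *":$gopath_bin:"*) on_path=yes ;;
  *)                 on_path=no  ;;
esac
echo "GOPATH bin on PATH: $on_path"
```

If it prints no, either add the directory to your PATH or use the sudo mv command above.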

Try the ROSA CLI from a binary

If you don't want to build from source, you can retrieve the rosa binary from the latest image.

You can copy it to your local machine with this command:

podman run --pull=always --rm registry.ci.openshift.org/ci/rosa-aws-cli:latest cat /usr/bin/rosa > ~/rosa && chmod +x ~/rosa

You can also test a binary built from a specific merged commit by using the commit hash as the image tag:

podman run --pull=always --rm registry.ci.openshift.org/ci/rosa-aws-cli:f7925249718111e3e9b61e2df608a6ea9cf5b6ce cat /usr/bin/rosa > ~/rosa && chmod +x ~/rosa

NOTE: A side effect of container image registry authentication is that you may get an auth error when your token has expired, even though the image itself requires no authentication. In that case, all you need to do is authenticate again:

$ oc registry login
info: Using registry public hostname registry.ci.openshift.org
Saved credentials for registry.ci.openshift.org

$ cat ~/.docker/config.json | jq '.auths["registry.ci.openshift.org"]'
{
  "auth": "token"
}

Secure Credentials Storage

The OCM_KEYRING environment variable provides the ability to store the ROSA configuration, which contains your authentication tokens, in your OS keyring. This is provided as an alternative to storing the configuration in plain text on your system. OCM_KEYRING overrides all other token- or configuration-related flags.

OCM_KEYRING supports the following keyrings:

          wincred   keychain   secret-service   pass
Windows     ✔️
macOS                 ✔️                          ✔️
Linux                            ✔️               ✔️

To ensure OCM_KEYRING is provided to all rosa commands, it is recommended to set it in your ~/.bashrc file or equivalent.
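For example, on Linux you could opt into the secret-service backend (illustrative; pick the backend that matches your OS from the table above):

```shell
# Store the ROSA/OCM configuration in the OS keyring instead of a
# plain-text file; subsequent rosa commands in this shell will use it
export OCM_KEYRING=secret-service
echo "ROSA config keyring backend: $OCM_KEYRING"
```

Putting the export in ~/.bashrc makes it apply to every rosa invocation.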

Have you got feedback?

We want to hear it. Open an issue against the repo and someone from the team will be in touch.

Contributors

aaraj7, andreadecorte, chenz4027, ciaranroche, ckandag, davidleerh, gdbranco, hunterkepley, jeremyeder, jharrington22, leseb, lucasponce, marcolan018, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, oriadler, osherdp, pvasant, renan-campos, robpblake, stevekuznetsov, tbrisker, tirthct, tsorya, vkareh, xueli181114, yasun1, zgalor, zvikorn


Issues

Unable to find the cluster post creation

I installed a new cluster using ROSA CLI:

  1. Ran rosa init:
[pranav@dragonfly mig-controller]$ rosa init
I: Logged in as '<redacted>' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok
I: Ensuring cluster administrator user '<redacted>'...
I: Admin user '<redacted>' already exists!
I: Validating SCP policies for '<redacted>'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.3.3 
  2. Ran rosa create cluster:
[pranav@dragonfly mig-controller]$ rosa create cluster --cluster-name pgaikwad-o4
I: Creating cluster 'pgaikwad-o4'
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'pgaikwad-o4' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c pgaikwad-o4'.
I: To watch your cluster installation logs, run 'rosa logs install -c pgaikwad-o4 --watch'.
E: Failed to get cluster 'pgaikwad-o4': There is no cluster with identifier or name 'pgaikwad-o4'

However, when I try to list clusters, using rosa list clusters, I don't see this cluster in the output:

[pranav@dragonfly mig-controller]$ rosa list clusters
I: No clusters available

Also, none of the other operations work on this cluster, such as deletion or getting progress of installation:

[pranav@dragonfly mig-controller]$ rosa delete cluster --cluster=pgaikwad-o4 --region=us-east-1
? Are you sure you want to delete cluster pgaikwad-o4? Yes
E: Failed to delete cluster 'pgaikwad-o4': There is no cluster with identifier or name 'pgaikwad-o4'

I confirmed that the EC2 instances for this cluster were created, and I can see them in the EC2 console of my AWS account.

I am unsure whether this is an issue with the CLI or a misconfiguration on my side. Any help debugging this would be appreciated.

Add iam:CreateServiceLinkedRole to all IAM policies

We recently had cluster provisioning fail because the installer role generated by the rosa cli was missing the iam:CreateServiceLinkedRole permission.

This can happen in brand new accounts where the AWS service linked roles have not been created by other AWS activity.

I believe we should include this in all of our IAM roles, since it can be needed whenever a service-linked role has not already been created in that account. At a minimum we should include it in our installer role.
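A sketch of the kind of statement that could be appended to the installer role's policy (illustrative only; the wildcard Resource is an assumption, and it is often scoped more tightly in practice):

```json
{
  "Effect": "Allow",
  "Action": "iam:CreateServiceLinkedRole",
  "Resource": "*"
}
```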

AccessKey credentials are reset on every cluster creation

Access keys are being reset on every rosa create cluster ... call, such that subsequent calls (rosa list clusters) fail due to invalid credentials. These credentials are also used when working with the ocm CLI, so this is causing issues for my team since we store them. Would it be possible not to upsert the access keys on cluster creation, or perhaps to suggest an alternative workflow? cc'ing @jharrington22 since they wrote the code and explanation.

rosa/pkg/aws/client.go

Lines 471 to 494 in 7b3efee

// GetAWSAccessKeys uses UpsertAccessKey to delete and create new access keys
// for `osdCcsAdmin` each time we use the client to create a cluster.
// There is no need to permanently store these credentials since they are only used
// on create, the cluster uses a completely different set of IAM credentials
// provisioned by this user.
func (c *awsClient) GetAWSAccessKeys() (*AccessKey, error) {
	if c.awsAccessKeys != nil {
		return c.awsAccessKeys, nil
	}
	accessKey, err := c.UpsertAccessKey(AdminUserName)
	if err != nil {
		return nil, err
	}
	err = c.ValidateAccessKeys(accessKey)
	if err != nil {
		return nil, err
	}
	c.awsAccessKeys = accessKey
	return c.awsAccessKeys, nil
}

❯ rosa list clusters
E: Failed to create AWS client: InvalidClientTokenId: The security token included in the request is invalid.
	status code: 403, request id: 77591dc5-8d54-4282-9ec9-b20xyzabc658eb

`rosa install addon ...` non-interactive install

Hi! Is there currently a way to perform add-on installation in a non-interactive way? I'm being prompted for the parameters required:

❯ rosa install addon cluster-logging-operator -c test --yes
? Use AWS CloudWatch: No
? Collect Applications logs: No
? Collect Infrastructure logs: No
? Collect Audit logs (optional): No
? CloudWatch region (optional):

Add code of conduct

Literally every open-source project that wants more than a single contributor should have a code of conduct.

SCP region restriction on AWS account is not allowing to validate SCP policies

Hi,

We have SCP policies set up on our AWS account which allow us to deploy resources, restricted to a specific region (say we are allowed to deploy resources only to ap-southeast-2, the Sydney region). I see in initialise/cmd.go that a check using simulate-principal-policy is called at the time of validating SCP policies. Even though there are no restrictions on the Sydney region around what AWS resources we can create, running rosa init --region=ap-southeast-2 gives an error saying Unable to validate SCP policies. Actions not allowed with tested credentials [ec2:* autoscaling:* elasticloadbalancing:* ..]

I see similar issue was resolved on ISI and UPI installer in the past https://bugzilla.redhat.com/show_bug.cgi?id=1757244

Could you please advise on this issue?

Add OWNERS file and bot integration

In order for people's PRs and issues to be noticed easily, there needs to be an OWNERS file to integrate with the Prow bot.
That way the Prow bot can automatically assign someone from the OWNERS file for review.

`Make install` fails hard

rosa/README.md

Line 24 in 7796741

2. `cd` to the checkout out source directory

Expected

make install succeeds and I get a nice rosa binary

Observed

[email protected] $ make install
go install ./cmd/rosa
pkg/interactive/interactive.go:25:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/AlecAivazis/survey/v2
pkg/interactive/interactive.go:26:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/AlecAivazis/survey/v2/core
pkg/interactive/interactive.go:27:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/AlecAivazis/survey/v2/terminal
pkg/aws/client.go:26:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/aws
cmd/create/accountroles/cmd.go:25:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/aws/arn
pkg/aws/client.go:28:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/aws/awserr
pkg/aws/client.go:29:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/aws/client
pkg/aws/client.go:30:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/aws/credentials
pkg/aws/client.go:31:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/aws/request
pkg/aws/client.go:32:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/aws/session
pkg/aws/client.go:33:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/cloudformation
pkg/aws/client.go:34:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface
pkg/aws/client.go:35:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/ec2
pkg/aws/client.go:36:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/ec2/ec2iface
pkg/aws/client.go:37:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/iam
pkg/aws/client.go:38:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/iam/iamiface
pkg/aws/client.go:39:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/organizations
pkg/aws/client.go:40:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/organizations/organizationsiface
pkg/aws/client.go:41:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/servicequotas
pkg/aws/client.go:42:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/servicequotas/servicequotasiface
pkg/aws/client.go:43:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/sts
pkg/aws/client.go:44:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/aws/aws-sdk-go/service/sts/stsiface
cmd/logs/install/cmd.go:25:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/briandowns/spinner
cmd/download/oc/cmd.go:27:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/dustin/go-humanize
pkg/output/output.go:28:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/ghodss/yaml
cmd/login/cmd.go:24:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/golang-jwt/jwt
pkg/ocm/config.go:31:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/golang/glog
pkg/ocm/versions.go:24:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/hashicorp/go-version
cmd/login/cmd.go:25:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/openshift-online/ocm-sdk-go
pkg/ocm/addons.go:20:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/openshift-online/ocm-sdk-go/accountsmgmt/v1
pkg/ocm/addons.go:21:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/openshift-online/ocm-sdk-go/clustersmgmt/v1
pkg/ocm/helpers.go:36:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/openshift-online/ocm-sdk-go/errors
pkg/logging/aws_logger.go:25:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/sirupsen/logrus
cmd/completion/cmd.go:22:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/spf13/cobra
cmd/docs/cmd.go:24:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/spf13/cobra/doc
pkg/aws/profile/flag.go:24:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/spf13/pflag
pkg/aws/policies.go:32:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/github.com/zgalor/weberr
pkg/logging/round_tripper.go:34:2: cannot find package "." in:
        /Users/rackow/go/pkg/mod/github.com/openshift/[email protected]/vendor/gitlab.com/c0b/go-ordered-json
make: *** [Makefile:37: install] Error 1

Environment


$ go version
go version go1.17.3 darwin/amd64
$ uname -a
Darwin MAC-FVFGH12JQ05P 21.2.0 Darwin Kernel Version 21.2.0: Sun Nov 28 20:29:10 PST 2021; root:xnu-8019.61.5~1/RELEASE_ARM64_T8101 arm64

Further information

This is most likely caused by #580

A clean checkout works:

$ git clone git@github.com:openshift/rosa.git
Cloning into 'rosa'...
remote: Enumerating objects: 14570, done.
remote: Counting objects: 100% (1355/1355), done.
remote: Compressing objects: 100% (326/326), done.
remote: Total 14570 (delta 1152), reused 1050 (delta 1017), pack-reused 13215
Receiving objects: 100% (14570/14570), 11.09 MiB | 8.66 MiB/s, done.
Resolving deltas: 100% (9220/9220), done.
$ cd rosa/
(master)$ make install
go install ./cmd/rosa

`rosa init` should report all quota errors it encounters so a single service request can be opened.

I'm testing rosa on a limited AWS subscription. When rosa init encounters a quota limitation, it stops and notifies the user, but after increasing the quota to rosa's expectation, it only reports the next quota limitation.

It would be great if rosa init could report all the limitations at once so the user only needs to open a single service request.

I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
E: Insufficient AWS quotas
E: Service ebs quota code L-FD252861 Provisioned IOPS SSD (io1) volume storage not valid, expected quota of at least 300, but got 50

Add contributing guide

In order for folks to contribute, assuming there will be adoption, there needs to be a "how to contribute"

New privatelink functionality not sending private_link argument to openshift API

Hi there,

The new PrivateLink functionality is not sending the "private_link": true option to the /api/clusters_mgmt/v1/clusters OpenShift API endpoint, resulting in the following errors for SingleAZ and MultiAZ deployments:

ERR: Failed to create cluster: 2 subnet ids should be specified for a Single AZ cluster, instead found: 1.
ERR: Failed to create cluster: 6 subnet ids should be specified for a Multi AZ cluster, instead found: 3.

An example command I am running for MultiAZ is below:

rosa --profile rosa --region ap-southeast-2 create cluster --cluster-name=test --disable-scp-checks --multi-az --private --subnet-ids subnet-xxxxxxx,subnet-yyyyyyy,subnet-zzzzzzz --machine-cidr 192.168.4.0/24 --service-cidr 10.141.0.0/16 --pod-cidr 10.142.0.0/16
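For reference, this is roughly the field I would expect the CLI to send in the cluster creation request body (a sketch based on the error messages; the exact payload shape and field placement are assumptions):

```json
{
  "name": "test",
  "multi_az": true,
  "aws": {
    "private_link": true,
    "subnet_ids": ["subnet-xxxxxxx", "subnet-yyyyyyy", "subnet-zzzzzzz"]
  }
}
```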

rosa delete [operator-roles|oidc-provider] does not have a `--auto` option

version 1.1.5

example

❯ rosa delete operator-roles -c $ROSA_CLUSTER_NAME  --auto
Error: unknown flag: --auto
Usage:
  rosa delete operator-roles [flags]

Aliases:
  operator-roles, operatorrole

Examples:
  # Delete Operator roles for cluster named "mycluster"
  rosa delete operator-roles --cluster=mycluster

Flags:
  -c, --cluster string   ID of the cluster (deleted/archived) to delete the operator roles from (required).
  -h, --help             help for operator-roles
      --mode string      How to perform the operation. Valid options are:
                         auto: Operator roles will be deleted automatically using the current AWS account
                         manual: Command to delete the operator roles will be output which can be used to delete manually (default "auto")

Global Flags:
      --debug            Enable debug mode.
      --profile string   Use a specific AWS profile from your credential file.
      --region string    Use a specific AWS region, overriding the AWS_REGION environment variable.
  -y, --yes              Automatically answer yes to confirm operation.

❯ rosa delete operator-roles -c $ROSA_CLUSTER_NAME  -y
? Operator roles deletion mode:  [Use arrows to move, type to filter, ? for more help]
> auto
  manual
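For what it's worth, the help text above shows the mode is selected with --mode rather than a --auto flag, so the non-interactive form should be (assuming an explicit --mode skips the prompt):

```
❯ rosa delete operator-roles -c $ROSA_CLUSTER_NAME --mode auto -y
❯ rosa delete oidc-provider -c $ROSA_CLUSTER_NAME --mode auto -y
```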

Cannot create a cluster in Tokyo region (ap-northeast-1)

I'd run into the issue with the following error message.

$ rosa version
1.0.4

$ rosa create cluster --cluster-name rosa-ex             
I: Creating cluster 'rosa-ex'
I: To view a list of clusters and their status, run 'rosa list clusters'
E: Failed to create cluster: Failed to create instance using AMI 'ami-0510c45ae7b5be1e6'

The AMI may not be compatible with the instance type or other configuration of the bootstrap node.

Note: I could create a cluster in us-east-1.

rosa describe cluster missing Network Type information

Now that the rosa cli supports --network-type since this got merged, we really need rosa describe cluster to show the Network Type information under Network, since it is absolutely crucial to be able to easily see on the CLI what the network back end is for a given cluster.

Failed to retrieve AWS regions

rosa create cluster --cluster-name=test-os --debug

rosa 1.1.7 getting the following error when trying to create cluster:

time="2022-01-14T11:19:08-07:00" level=debug msg="Bearer token expires in 14m56.980236s"
time="2022-01-14T11:19:08-07:00" level=debug msg="Got tokens on first attempt"
time="2022-01-14T11:19:08-07:00" level=debug msg="Request method is POST"
time="2022-01-14T11:19:08-07:00" level=debug msg="Request URL is 'https://api.openshift.com/api/clusters_mgmt/v1/cloud_providers/aws/available_regions?page=1&size=100'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Request header 'Accept' is 'application/json'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Request header 'Authorization' is omitted"
time="2022-01-14T11:19:08-07:00" level=debug msg="Request header 'Content-Type' is 'application/json'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Request header 'User-Agent' is 'OCM-SDK/0.1.219'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Request body follows"
time="2022-01-14T11:19:08-07:00" level=debug msg="{\n  \"access_key_id\": \"redacted\",\n  \"secret_access_key\": \"redacted\"\n}"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response protocol is 'HTTP/2.0'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response status is '400 Bad Request'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response header 'Content-Type' is 'application/json'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response header 'Date' is 'Fri, 14 Jan 2022 18:19:08 GMT'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response header 'Server' is 'envoy'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response header 'Vary' is 'Accept-Encoding'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response header 'X-Envoy-Upstream-Service-Time' is '74'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response header 'X-Operation-Id' is '9a11a760-0b41-4109-a905-d41e41c867c3'"
time="2022-01-14T11:19:08-07:00" level=debug msg="Response body follows"
time="2022-01-14T11:19:08-07:00" level=debug msg="{\n  \"kind\": \"Error\",\n  \"id\": \"400\",\n  \"href\": \"/api/clusters_mgmt/v1/errors/400\",\n  \"code\": \"CLUSTERS-MGMT-400\",\n  \"reason\": \"Failed to get list of available aws cloud regions.\",\n  \"operation_id\": \"9a11a760-0b41-4109-a905-d41e41c867c3\"\n}"
E: Failed to retrieve AWS regions: Failed to get list of available aws cloud regions.

It seems to be a problem with the https://api.openshift.com/api/clusters_mgmt/v1/cloud_providers/aws/available_regions?page=1&size=100 endpoint. I can use the AWS CLI and describe-regions just fine with the IAM role I am using.

Not able to get authentication token

I see:

rosa list clusters
E: Failed to create OCM connection: error creating connection. Not able to get authentication token

This repeats, but sometimes succeeds. It would be nice to have clearer troubleshooting for this flakiness.

go-install fails

$ go version 
go version go1.14.9 linux/amd64
$ go get -v -u github.com/openshift/moactl
$ go install github.com/openshift/moactl
# github.com/openshift/moactl/cmd/describe/cluster
/root/go/src/github.com/openshift/moactl/cmd/describe/cluster/cmd.go:147:22: cluster.Status().ProvisionErrorReason undefined (type *"github.com/openshift-online/ocm-sdk-go/clustersmgmt/v1".ClusterStatus has no field or method ProvisionErrorReason)
/root/go/src/github.com/openshift/moactl/cmd/describe/cluster/cmd.go:147:71: cluster.Status().ProvisionErrorType undefined (type *"github.com/openshift-online/ocm-sdk-go/clustersmgmt/v1".ClusterStatus has no field or method ProvisionErrorType)
/root/go/src/github.com/openshift/moactl/cmd/describe/cluster/cmd.go:182:20: cluster.Status().ProvisionErrorType undefined (type *"github.com/openshift-online/ocm-sdk-go/clustersmgmt/v1".ClusterStatus has no field or method ProvisionErrorType)
/root/go/src/github.com/openshift/moactl/cmd/describe/cluster/cmd.go:183:20: cluster.Status().ProvisionErrorReason undefined (type *"github.com/openshift-online/ocm-sdk-go/clustersmgmt/v1".ClusterStatus has no field or method ProvisionErrorReason)

Help text of `rosa describe cluster` implies that '-c/--cluster' is optional yet it looks to be required

The help text of rosa describe cluster implies that -c/--cluster is optional and that we could execute rosa describe cluster mycluster, yet when I try this it looks like -c/--cluster is required.

$ ./rosa version
1.0.1

$ ./rosa describe cluster myclusterreplaced
Error: required flag(s) "cluster" not set
Usage:
  rosa describe cluster [flags]

Examples:
  # Describe a cluster named "mycluster"
  rosa describe cluster mycluster

  # Describe a cluster using the --cluster flag
  rosa describe cluster --cluster=mycluster

Flags:
  -c, --cluster string   Name or ID of the cluster to describe.
  -h, --help             help for cluster
      --region string    Use a specific AWS region, overriding the AWS_REGION environment variable.

Global Flags:
      --debug            Enable debug mode.
      --profile string   Use a specific AWS profile from your credential file.

Failed to execute root command: required flag(s) "cluster" not set

Cluster creation failed - 'create_moa_clusters' capability is not set for this account

Hello guys.

I'm trying to "init" my AWS environment using rosa client, but I'm getting this error:

❯ rosa init --region us-east-1
I: Logged in as 'petr_ruzicka@<email>.com' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' already exists!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
W: Cluster creation failed. If you create a cluster, it should fail with the following error:
'create_moa_clusters' capability is not set for this account
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.6.16

The error 'create_moa_clusters' capability is not set for this account could be more descriptive, because it doesn't say whether the problem is with the AWS account or the Red Hat account (or somewhere else).

I'm not sure how may I fix this.

Thank you...

Cluster init failure is marked as a Warning

Due to a personal misconfiguration rosa init failed to init a cluster. However, the failure was reported as a W: (warning). I am using rosa v1.0.5.

❯ rosa init
I: Logged in as 'pratis.openshift' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' created successfully!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
W: Cluster creation failed. If you create a cluster, it should fail with the following error:
In order to run Red Hat OpenShift Service on AWS you need to enable the service in the AWS Console. Please visit https://console.aws.amazon.com/rosa/
I: Verifying whether OpenShift command-line tool is available...
W: Current OpenShift Client Version: v4.7.5
W: Your version of the OpenShift command-line tool is not supported.
Run 'rosa download oc' to download the latest version, then add it to your PATH.

I would assume an E: to be returned once cluster init fails since it is a hard requirement. What do you think?

Automatic resource tagging

Currently, ROSA doesn't support AWS resource tagging. Are there any plans to include support for tagging? There are two "layers" of tagging needed.

  1. Resources created during install: VPC, subnets, *LB, etc.
  2. Resources created when using ROSA: EBS volumes, EC2 instances, new LBs.

inconsistent flag for cluster name between create and other commands.

The command to create a cluster is rosa create cluster --cluster-name=foo

The command to delete a cluster is rosa delete cluster --cluster=foo

Similarly the --cluster flag is used for every command I can find other than create.

Rosa should have consistency in its command flags; i.e., I would suggest updating create to use --cluster (and aliasing --cluster-name to it for backwards compatibility).

rosa list clusters show clusters ready when they are still installing

If you run rosa list clusters during an installation, it shows that the cluster is ready:

$ rosa list clusters
ID                                NAME           STATE
1jtkm9ivtbdobgcmgog9lsn30ghb7a4u  mrnd-5gx-0001  ready

Even though the state is ready (as also shown in the web UI), it could additionally show the status, which is installing.

No --basedomain parameter as exists in the openshift-install CLI

I'm trying to create a cluster using the ROSA CLI, but I can't find any way to set up OpenShift using my own domain in AWS Route 53, as I can with the openshift-install CLI tool.

$ rosa create cluster --cluster-name testing --multi-az --region eu-west-1 --version 4.7.16 --enable-autoscaling --min-replicas 3 --max-replicas 6 --compute-machine-type c5.2xlarge

VS

$ openshift-install create cluster --dir=. --log-level=info
? SSH Public Key /Users/marcelobarbosa/.ssh/id_ed25519.pub
? Platform aws
INFO Credentials loaded from the "default" profile in file "/Users/marcelobarbosa/.aws/credentials"
? Region eu-west-1
? Base Domain play.getchange.us
X Sorry, your reply was invalid: Value is required
? Cluster Name dev-cluster
? Pull Secret [? for help] ****
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s for the Kubernetes API at https://api.dev-cluster.play.getchange.us:6443...
INFO API v1.20.0+87cc9a4 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 40m0s for the cluster at https://api.dev-cluster.play.getchange.us:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/marcelobarbosa/Documents/infrastructure-as-code/tmp/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.dev-cluster.play.getchange.us
INFO Login to the console with user: "kubeadmin", and password: "tQVCY-z7vRD-xqqw2-g6npm"
INFO Time elapsed: 37m12s

It would be nice if the rosa CLI offered a parameter such as --basedomain to deploy the cluster with a custom base domain.

Ability to use AMD-based worker instance types? (`m5a.xlarge` for example)

Hi all,

It doesn't seem possible either to create a cluster with m5a instance types or to create a machine pool after the fact with those instance types. It would be handy to have this, if only to realise the slight cost advantage of AMD-based instance types.

Do you know if this is on the roadmap?

Cheers
Dan

'--compute-nodes' Parameter Creates Extra Infra Nodes in a ROSA Cluster

Our team noticed that extra infra nodes are created when we create a ROSA cluster.

Here is the command which is used in our terraform script.

$rosa create cluster --cluster-name='${self.triggers.cluster_name}' --compute-machine-type='${var.worker_machine_type}' --compute-nodes ${var.worker_machine_count} --region ${var.region} \
    --machine-cidr='${var.machine_network_cidr}' --service-cidr='${var.service_network_cidr}' --pod-cidr='${var.cluster_network_cidr}' --host-prefix='${var.cluster_network_host_prefix}' --private=${var.private_cluster} \
    --multi-az=${var.multi_zone} --version='${var.openshift_version}' --subnet-ids='${local.subnet_ids}' --watch

Values used:
worker_machine_type = "m5.4xlarge"
worker_machine_count = "6"

Reference link: https://docs.openshift.com/rosa/rosa_cli/rosa-manage-objects-cli.html#rosa-create-cluster_rosa-managing-objects-cli

Expected behavior: only the requested number of worker nodes should be created; no extra infra nodes.

Thanks.

Cluster admin account created via "rosa create admin" sometimes reports bad credentials

This problem does not happen all the time, but when it happens, it is immediately after we create a new cluster with the rosa CLI.

We have automation that creates the new ROSA cluster via CLI, then waits for it to become ready, and finally requests the creation of the cluster-admin account.

The automation code uses the command rosa describe cluster -c ${cluster_name} -o json ... to wait for the cluster to be ready, polling until the state field reports ready.

After that point, the automation issues the command to create the administrator account:

rosa create admin --cluster="${cluster_name}"

That command returns the oc login command for the new cluster-admin account, as expected:

W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information.
I: Admin account has been added to cluster '...'.
I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user.
I: To login, run the following command:

   oc login https://api....openshiftapps.com:6443 --username cluster-admin --password ...

I: It may take up to a minute for the account to become active.

Most of the time (80%, perhaps) that oc login command works as expected, but the other 20% of the time it returns an error:

oc login https://api....openshiftapps.com:6443 --username cluster-admin --password ...
Login failed (401 Unauthorized)
Verify you have provided correct credentials.

When the problem happens, we often run the oc login... command 20-60 minutes later and still get the "401" error, so we don't think the problem is simply not waiting long enough.

Deleting the account and recreating it fixes the problem without fail, and it takes only a few seconds for the cluster-admin account to become available.

Referenced gopath doesn't exist

rosa/README.md

Line 24 in 7796741

2. `cd` to the checkout out source directory

Expected

Following the documentation, the cd command takes me to the checked-out source directory.

Observed

$ cd $GOPATH/src/github.com/openshift/rosa
-bash: cd: /Users/rackow/go/src/github.com/openshift/rosa: No such file or directory

Environment

$ go version
go version go1.17.3 darwin/amd64
$ uname -a
Darwin MAC-FVFGH12JQ05P 21.2.0 Darwin Kernel Version 21.2.0: Sun Nov 28 20:29:10 PST 2021; root:xnu-8019.61.5~1/RELEASE_ARM64_T8101 arm64

Further information

 go $ tree -L 1
.
├── bin
└── pkg

Scope SRE access to just ROSA resources where possible

We should further scope down the *-Support-Role
https://github.com/openshift/rosa/blob/master/templates/policies/4.7/sts_support_permission_policy.json
https://github.com/openshift/rosa/blob/master/templates/policies/4.8/sts_support_permission_policy.json
https://github.com/openshift/rosa/blob/master/templates/policies/4.9/sts_support_permission_policy.json

The following permissions can be scoped to just resources with the red-hat-managed: true AWS tag.

- Effect: Allow
  Action:
  - ec2:CreateSnapshots
  - ec2:GetAssociatedIpv6PoolCidrs
  - ec2:GetTransitGatewayAttachmentPropagations
  - ec2:GetTransitGatewayMulticastDomainAssociations
  - ec2:GetTransitGatewayPrefixListReferences
  - ec2:GetTransitGatewayRouteTableAssociations
  - ec2:GetTransitGatewayRouteTablePropagations
  - ec2:RebootInstances
  - ec2:SearchLocalGatewayRoutes
  - ec2:SearchTransitGatewayMulticastGroups
  - ec2:SearchTransitGatewayRoutes
  - ec2:StartInstances
  - ec2:TerminateInstances
  Condition:
    StringEquals:
      aws:ResourceTag/red-hat-managed: 'true'
  Resource: '*'

This will help assure Red Hat customers that the Red Hat SRE team cannot cause issues with infrastructure that is not related to ROSA.

ROSA doesn't allow oc label nodes

The auth webhook prevents node labeling, which is needed to dedicate nodes to specific workloads. The error is:

admission webhook "regular-user-validation.managed.openshift.io" denied the request: Prevented from accessing Red Hat managed resources. This is in an effort to prevent harmful actions that may cause unintended consequences or affect the stability of the cluster. If you have any questions about this, please reach out to Red Hat support at https://access.redhat.com/support

The user does have cluster-admin rights

we have also filed Red Hat Support case: https://access.redhat.com/support/cases/#/case/02878398

Failure: https://access.redhat.com/support/cases/#/case/02878398/discussion?attachmentId=a092K00002Py7GoQAJ

Problem: we cannot install the OCS storage class without labeling nodes in the ROSA cluster.

ROSA cli should retry when aws throttling

When running rosa create account-roles|operator-roles, the CLI will occasionally hit a throttling error:

There was an error creating the account roles: Throttling: Rate exceeded
            status code: 400, request id: 52f9f17f-a788-48c1-875f-84620bbcbebe

ROSA should catch this, wait, and try again rather than hard-erroring out.

rosa 1.1.5 no longer supports `--version` for the `create account-roles` command

In #482 the --version argument was removed from rosa create account-roles, which is a breaking change: --version is in the documentation and possibly in end-user automation scripts.

ROSA should deprecate it smoothly with a warning and a no-op rather than hard-erroring when it is used. I'm not sure if the ROSA CLI follows semver, but its semver-esque versioning implies that it does, and this change broke the semver contract around breaking changes.
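The warn-and-no-op behavior suggested above can be sketched with the standard library flag package. This is an illustrative sketch only — the function name deprecatedVersionSupplied is hypothetical and rosa's real flag handling (via its own CLI framework) differs.

```go
package main

import (
	"flag"
	"fmt"
	"io"
)

// deprecatedVersionSupplied parses args for the removed --version flag
// and reports whether it was supplied, so the caller can print a
// warning and ignore it instead of hard-erroring.
func deprecatedVersionSupplied(args []string) (bool, error) {
	fs := flag.NewFlagSet("create account-roles", flag.ContinueOnError)
	fs.SetOutput(io.Discard) // suppress the default usage printout
	version := fs.String("version", "", "DEPRECATED: ignored")
	if err := fs.Parse(args); err != nil {
		return false, err
	}
	return *version != "", nil
}

func main() {
	supplied, _ := deprecatedVersionSupplied([]string{"--version", "4.8"})
	if supplied {
		fmt.Println("W: '--version' is deprecated and will be ignored")
	}
	// ... continue creating account roles as if the flag were absent
}
```

Existing automation that still passes --version keeps working and merely sees a warning, preserving the semver contract.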

Not enough Quota on AWS Account I set up to test

Hi,

I am a Red Hat SA for EMEA CCSP on AWS and want to join the Thursday hack session using these tools today.

I have set everything up, and it works up to the point of "moactl init", because AWS refuses to raise the quota on "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z)" instances beyond 64 (they say they might give me more in the future, but this is it for now).

I assume you will have an account I can use for hacking today, but wanted to raise here that this could be an issue for folks.

[RFE] Add json output to rosa commands

Parsing rosa output in scripts is difficult, as some fields can have empty values:

$ rosa list machinepool -c mrnd-snafu
ID        AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS                               TAINTS                      AVAILABILITY ZONES
Default   No           27        m5.2xlarge                                                                      us-west-2a, us-west-2b, us-west-2c
workload  No           3         m5.2xlarge     node-role.kubernetes.io/workload=    role=workload:NoSchedule    us-west-2a, us-west-2b, us-west-2c

Having the option to show results as JSON (like oc get -o json) would make integration easier.

Missing Requirements on building

$ make 
go-bindata -nometadata -nocompress -pkg assets -o ./assets/bindata.go ./templates/...
make: go-bindata: Command not found
make: *** [Makefile:52: generate] Error 127

The README should have a building section, including requirements and the correct commands (make, make install, make moactl?).

Many cluster-name-tagged t2.micro instances generated during cluster creation - AWS Anomaly triggered

Hi all,

Despite the convoluted title, I wanted to summarise that AWS Customer Support contacted me at about the same time that a ROSA cluster was being created yesterday. I had enabled multiple availability zones during the interactive installer. Around 15-20 mins into cluster install, I got the call, mentioning they think the account had been compromised due to many (10) t2.micro instances being spawned in quick succession, in different regions.

Many t2.micro instances (short-lived by the looks of it, although I did authorise customer support to terminate them - sadly I didn't get a chance to take a snapshot for post-mortem) were spawned, in many worldwide regions (so, not just multi AZ it seems). The instance tags did include the clustername and ID, so I guess this was just ROSA (or CloudFormation?) ensuring I had validation in other regions, hence why it put in for a t2.micro and nothing larger (as I would expect of, say, a cryptominer).

Have confirmed the account was not compromised, and have also dug through the rosa source code, but can't find any mention of this being needed.

Any light you could shed would be great - maybe AWS Support didn't receive the ROSA memo yet?

Cheers
Dan

`go install` moactl fails

$ go get -u github.com/openshift/moactl
$ ls ${GOROOT}/bin/moactl //does not exist
$ go install github.com/openshift/moactl

github.com/openshift/moactl/pkg/interactive
/root/go/src/github.com/openshift/moactl/pkg/interactive/interactive.go:224:11: assignment mismatch: 2 variables but
core.RunTemplate returns 3 values

https://github.com/openshift/moactl/blob/master/pkg/interactive/interactive.go#L224
is the line in question.

I would guess that go get doesn't check the versions correctly and takes the latest version of https://github.com/AlecAivazis/survey/blob/master/core/template.go#L31, which returns coloredString, nonColorString, err.
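The compile error above is the classic symptom of an unpinned dependency growing a return value. A minimal stand-alone reproduction (runTemplate here is a stand-in for survey's core.RunTemplate; the names are illustrative, not the library's actual code):

```go
package main

import "fmt"

// runTemplate mimics a function that used to return (string, error)
// but now returns three values, like the newer survey/core.RunTemplate.
func runTemplate(tmpl string) (string, string, error) {
	return "\x1b[1m" + tmpl + "\x1b[0m", tmpl, nil
}

func main() {
	// Old two-value call site — fails to compile against the new API:
	//   out, err := runTemplate("hi")
	//   ^ assignment mismatch: 2 variables but runTemplate returns 3 values

	// Updated three-value call site compiles and runs:
	colored, plain, err := runTemplate("hi")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(colored) > len(plain))
}
```

Pinning the dependency to a compatible version (e.g. with Go modules) avoids the mismatch without touching the call sites; GOPATH-mode go get always fetched master, which is why the build broke.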

moactl init failed without errors

moactl init is failing without any error; I am trying to recreate the cluster.
moactl version is 0.0.15.

AMRO % ./moactl init
I: Logged in as 'xxx' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' already exists!
I: Validating cluster creation...
W: Cluster creation failed. If you create a cluster, it should fail with the following error:

I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.5.2

ROSA CLI 1.1.6 & 1.1.7 error "create_cluster_proxy"

Description
What problem/issue/behavior are you having trouble with? What do you expect to see?
Downloaded ROSA CLI 1.1.6 to run on my mac (rosa-darwin-amd64 from https://github.com/openshift/rosa/releases)

I then tried to create a ROSA cluster using the new HTTP / HTTPS PROXY flags. This failed almost immediately with an error (see below):

Command:

./rosa-darwin-amd64 create cluster --private-link --multi-az --cluster-name=mydevrosa --machine-cidr=172.25.246.0/24 --subnet-ids=subnet-034f368cf97XXXXXX,subnet-0084b9e30c8XXXXXX,subnet-086ea2deeb2XXXXXX --enable-autoscaling --min-replicas 3 --max-replicas 6 --http-proxy http://proxy.ic.xxx.xxx:3128/ --https-proxy http://proxy.ic.xxx.xxx:3128/ --profile=my-dev-ocp

Output:

W: You are choosing to use AWS PrivateLink for your cluster. Once the cluster is created, this option cannot be changed.
? Are you sure you want to use AWS PrivateLink for cluster 'mydevrosa'? Yes
I: Creating cluster 'mydevrosa'
I: To view a list of clusters and their status, run 'rosa list clusters'

E: Failed to create cluster: 'create_cluster_proxy' capability is not set for this organization

I can't find any documentation on these new flags, so I don't know what organization it is talking about. Also, I am a bit worried that this new functionality is trying to create a new proxy rather than using the one specified on the command line.

Please can you explain how these are intended to work and whether there are any additional environment or AWS settings required to be set.

These are previous issues I've raised relating to the use of proxies that I was hoping these new flags were added to fix:

Red Hat - Case 03011961 - Can't configure the "rosa" CLI tool to use a proxy
Red Hat - Case 03031217 - Cluster Wide noProxy setting is being ignored

Where are you experiencing the behavior? What environment?
Running 1.1.6 ROSA CLI from my MacOS laptop (OS 12.0.1)
[Update: running 1.1.7 ROSA CLI from my macOS laptop (OS 12.1)]

When does the behavior occur? Frequency? Repeatedly? At certain times?
Always

Wait for desired state or events with "wait-for" command

When provisioning a cluster in a CI/CD pipeline, it is necessary to wait for the cluster to be ready before executing deployment commands. Currently users need to poll the status and parse the output in their scripts or actions. It would be valuable to have wait-for commands, similar to the one available in openshift-installer.
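The polling users do today can be sketched as follows. This assumes only what other issues in this page show: that `rosa describe cluster -c <name> -o json` emits JSON with a top-level `state` field that becomes "ready"; the helper name clusterReady is hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterReady reports whether the JSON emitted by
// `rosa describe cluster -c <name> -o json` shows state "ready".
func clusterReady(raw []byte) (bool, error) {
	var c struct {
		State string `json:"state"`
	}
	if err := json.Unmarshal(raw, &c); err != nil {
		return false, err
	}
	return c.State == "ready", nil
}

func main() {
	// A wait-for loop would run the describe command, feed its stdout
	// to clusterReady, and sleep between attempts, e.g.:
	//
	//   for { out := <run rosa describe ... -o json>
	//         if ok, _ := clusterReady(out); ok { break }
	//         time.Sleep(30 * time.Second) }
	ok, _ := clusterReady([]byte(`{"state":"installing"}`))
	fmt.Println(ok) // still installing, keep waiting
}
```

A built-in `rosa wait-for` command would replace exactly this boilerplate in every pipeline.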

unable to login to ROSA

I've just updated my rosa binary to version 0.1.4. When I attempt to log in to the cli, I get the following error message:

$ rosa login
time="2021-01-14T09:19:37-05:00" level=error msg="OCM auth: failed to get tokens, got http code 400, will not attempt to retry. err: invalid_grant: Offline user session not found"
E: Failed to get token: invalid_grant: Offline user session not found

`--output` cli arg only implemented on some commands

I can use rosa describe cluster --output json but not rosa create cluster --output json. It would be great if this were a universal switch so my automation doesn't have to parse rosa output differently based on what action is being performed.

OIDC provider uses fixed region eu-east-1

Hi,

please enhance rosa so that the OIDC provider for STS mode uses the default region, or add the possibility to pass a region to rosa create oidc-provider.

In some companies not all AWS regions are allowed, and if you provide a --region when creating a cluster it should be a consistent experience.

In my case I am trying to deploy ROSA to region eu-central-1, but the OIDC provider seems to be hardcoded to region eu-east-1.

I think there is no technical necessity for this, because we are using the same configuration with EKS and there it works.

Thanks

ERR: Unable to validate SCP policies.

rosa init
gives following error:
ERR: Unable to validate SCP policies. Make sure that an organizational SCP is not preventing this account from performing the required checks
ERR: Error simulating policy: InvalidClientTokenId: The security token included in the request is invalid
status code: 403, request id: XXX

The error seems to be because the organization SCPs are unable to validate the checks, but our team has set the SCPs correctly.

[question] using service account annotation to get role/permission to interact with AWS services

hello,

I'm starting my journey with OpenShift ROSA. I have experience with EKS, where I used an annotation on a service account to allow certain pods to authenticate with the OIDC provider and delegate permission to interact with some native AWS services (like Route53, S3, ...). This was convenient to avoid putting secret/access keys into a Kubernetes Secret. Do you know if such a feature is possible with ROSA?
Below is an example of the annotation I was using with EKS on the SA.

annotations:
    eks.amazonaws.com/role-arn: 'arn:aws:iam::account:role/tf-rosa-myservice-irsa'

Slack #openshift-users discussion

JSON ouput for create admin

The rosa create admin command should support the --output flag so that login can be automated without parsing the produced standard output.
