
pvsadm's Introduction

Overview

pvsadm is a tool that helps manage resources in IBM Power Systems Virtual Server.

❗ There is no formal support for this repo. For any problems, please open a GitHub issue.

Installation

Binary

  1. Go to the releases page
  2. Select the latest release and download the relevant binary under the Assets section.
  3. Run the pvsadm --help command to list the available subcommands and options.

Homebrew

Note: Currently supported only on macOS (x86 and arm) and Linux (x86)

brew install ppc64le-cloud/pvsadm/pvsadm

or

brew tap ppc64le-cloud/pvsadm
brew install pvsadm
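
After installing via either method, a quick way to verify that the binary is on your PATH is to check the version (the exact output depends on the installed release):

pvsadm version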

Image Management

The image subcommand of the pvsadm tool performs image-related tasks such as converting, uploading, and importing images into IBM Power Systems Virtual Server instances. For more information, refer to the pvsadm image --help command.

The typical image workflow comprises the following steps (example commands follow the list):

  1. Download the qcow2 image.
  2. Convert the downloaded qcow2 image to OVA using the pvsadm image qcow2ova command.
  3. Upload the OVA image to an IBM Cloud Object Storage bucket using the pvsadm image upload command.
  4. Import the OVA image into the IBM Power Systems Virtual Server instance using the pvsadm image import command.
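
A minimal end-to-end sketch of this workflow is shown below. The qcow2 URL is the CentOS example used elsewhere in this document; the bucket, resource group, PowerVS instance, and region values are placeholders, the generated OVA is assumed to be named <image-name>.ova.gz, and the exact flag names can differ between releases (check pvsadm image <command> --help for your version):

pvsadm image qcow2ova --image-name my-centos --image-dist centos --image-url https://cloud.centos.org/centos/8/ppc64le/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.ppc64le.qcow2
pvsadm image upload --bucket <bucket-name> -o my-centos.ova.gz --resource-group <resource-group>
pvsadm image import -n <powervs-instance-name> -b <bucket-name> --object-name my-centos.ova.gz --image-name my-centos -r <region>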

'How To' Guides

  • How to convert CentOS qcow2 to ova image format - guide
  • How to convert RHEL qcow2 to ova image format - guide
  • How to convert RHCOS(Red Hat CoreOS) qcow2 to ova image format - guide
  • Advanced scenarios for Qcow2 to ova image conversion - guide
  • How to import image to PowerVS instance from COS - guide
  • How to upload image to COS bucket using pvsadm - guide
  • How to build DHCP supported centos image - guide

Samples

Please take a look at the samples folder for end-to-end examples.

For bugs, enhancement requests, etc., please open a GitHub issue.

pvsadm's People

Contributors

amulyam24, arcprabh, bkhadars, bpradipt, dependabot[bot], dharaneeshvrd, karthik-k-n, keerthanaap, kishen-v, ltccci, madhan-swe, mkumatag, poorna-gottimukkula1, ppc64le-cloud-bot, prajyot-parab, prb112, rajalakshmi-girish, rcmadhankumar, shilpi-das1, smatzek, sudeeshjohn, varad-ahirwadkar, yussufsh


pvsadm's Issues

AccessKeyID not available for downloading the image from S3 to PowerVS

Image import fails because the access key ID is not available.

Does this occur consistently? No.

The access key is getting deleted from the records before the image download happens, hence the issue arises.

Events:

[root@chaos-arc46-bastion pvsadm]# ./bin/pvsadm get events -n upstream-core-lon04
2020/12/07 01:51:08 the apiendpoint url for power is lon.power-iaas.cloud.ibm.com
2020/12/07 01:51:08 Calling the New Auth Method in the IBMPower Session Code
2020/12/07 01:51:08 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/07 01:51:08 the region is lon and the zone is  lon04
2020/12/07 01:51:08 the crndata is ... crn:v1:bluemix:public:power-iaas:lon04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:7845d372-d4e1-46b8-91fc-41051c984601::
+----------+--------------------------------------+-------+--------------------------------+---------------------------------------------------+----------+--------------------------+
|  ACTION  |               EVENTID                | LEVEL |            MESSAGE             |                     METADATA                      | RESOURCE |           TIME           |
+----------+--------------------------------------+-------+--------------------------------+---------------------------------------------------+----------+--------------------------+
| import   | eb9e2f32-b6d6-43a4-b96b-88ef180a5f14 | info  | Importing image file           | map[imageID:23e522cc-5446-4436-9c53-69603c599499] | image    | 2020-12-07T06:30:53.681Z |
|          |                                      |       | 'rhel-83-10032020.ova.gz'      |                                                   |          |                          |
|          |                                      |       | from cloud storage             |                                                   |          |                          |
|          |                                      |       | 'basheerbucket1320...'         |                                                   |          |                          |
| download | ff5a67c5-a540-4c9a-8cda-aea650f0cbca | info  | Downloading image file         | map[imageID:23e522cc-5446-4436-9c53-69603c599499] | image    | 2020-12-07T06:30:53.851Z |
|          |                                      |       | 'rhel-83-10032020.ova.gz'      |                                                   |          |                          |
|          |                                      |       | from cloud storage             |                                                   |          |                          |
|          |                                      |       | 'basheerbucket1320'...         |                                                   |          |                          |
| download | 37488fdc-0a6e-496c-a0ec-cb5776e2ca49 | error | Download of image file         | map[Error:download of file                        | image    | 2020-12-07T06:30:56.657Z |
|          |                                      |       | 'rhel-83-10032020.ova.gz'      | basheerbucket1320/rhel-83-10032020.ova.gz         |          |                          |
|          |                                      |       | from cloud storage             | failed, error:ERROR: S3 error: 403                |          |                          |
|          |                                      |       | 'basheerbucket1320' has        | (InvalidAccessKeyId): The AWS Access Key ID       |          |                          |
|          |                                      |       | failed.                        | you provided does not exist in our records.       |          |                          |
|          |                                      |       |                                | , image 23e522cc-5446-4436-9c53-69603c599499      |          |                          |
|          |                                      |       |                                | will be deleted                                   |          |                          |
|          |                                      |       |                                | imageID:23e522cc-5446-4436-9c53-69603c599499]     |          |                          |
+----------+--------------------------------------+-------+--------------------------------+---------------------------------------------------+----------+--------------------------+

Unable to reuse the bucket name even after deleting the bucket

The pvsadm tool throws an error when reusing a bucket name that was deleted manually from the UI.

To reproduce:

  1. Create a COS instance and bucket via pvsadm:
🍏 Thursday December 10 2020 10:04:15 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶ ./pvsadm image upload --bucket new-test-bucket-test-test -o ./CentOS_83.ova.gz --resource-group ocp-cicd-resource-group
I1210 22:05:18.088904   92630 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
I1210 22:05:46.994887   92630 upload.go:87] bucket new-test-bucket-test-test not found in the account provided
Would You Like to use Available COS Instance for creating bucket? [y/n]: n
Would you like to create new COS Instance? [y/n]: y
Provide Name of the cos-instance:my-cos
I1210 22:05:56.595064   92630 upload.go:111] Creating a new cos my-cos instance
I1210 22:06:10.186429   92630 resource.go:116] Resource service Instance Details :{0xc0001845a0 my-cos global 65b64c1f1c29460e8c2e4bbfbd893c2c 744bfc56-d12c-4866-88d5-dac9139e0e5d  8f77c5b8f9344ff39b25352c889ef612  crn:v1:bluemix:public:cloud-object-storage:global:a/65b64c1f1c29460e8c2e4bbfbd893c2c:3ef8c38c-03ed-494e-b55e-624e50a4c9f8:: [] map[] map[] 0 active service_instance dff97f5c-bc5e-4455-b470-411c3edbe49c  0xc0005085a0 0xc000ad8ba0   /v1/resource_instances/crn:v1:bluemix:public:cloud-object-storage:global:a%2F65b64c1f1c29460e8c2e4bbfbd893c2c:3ef8c38c-03ed-494e-b55e-624e50a4c9f8::/resource_bindings /v1/resource_instances/crn:v1:bluemix:public:cloud-object-storage:global:a%2F65b64c1f1c29460e8c2e4bbfbd893c2c:3ef8c38c-03ed-494e-b55e-624e50a4c9f8::/resource_aliases  crn:v1:bluemix:public:globalcatalog::::deployment:744bfc56-d12c-4866-88d5-dac9139e0e5d%3Aglobal}
I1210 22:06:10.855812   92630 upload.go:142] Creating a new bucket new-test-bucket-test-test
I1210 22:06:13.383289   92630 s3client.go:120] Waiting for bucket "new-test-bucket-test-test" to be created...
I1210 22:06:13.822397   92630 s3client.go:130] uploading the object ./pvsadm-darwin-amd64.tar.gz
I1210 22:06:18.320264   92630 s3client.go:156] Upload completed successfully in 4.497190 seconds to location https://s3.us-south.cloud-object-storage.appdomain.cloud/new-test-bucket-test-test/pvsadm-darwin-amd64.tar.gz
🍏 Thursday December 10 2020 10:06:18 PM 🍏
╭─github.com/ocp-power-automation/full-flow
╰─▶
  2. Remove the image, bucket and COS instance from the UI.

  3. Redo the 1st step:

🍏 Thursday December 10 2020 10:06:18 PM 🍏
╭─github.com/ocp-power-automation/full-flow
╰─▶ ./pvsadm image upload --bucket new-test-bucket-test-test -o ./pvsadm-darwin-amd64.tar.gz --resource-group ocp-cicd-resource-group
I1210 22:07:08.089474   92688 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
I1210 22:07:27.046652   92688 upload.go:87] bucket new-test-bucket-test-test not found in the account provided
Would You Like to use Available COS Instance for creating bucket? [y/n]: n
Would you like to create new COS Instance? [y/n]: y
Provide Name of the cos-instance:my-cos
I1210 22:07:33.239654   92688 upload.go:111] Creating a new cos my-cos instance
I1210 22:07:47.429923   92688 resource.go:116] Resource service Instance Details :{0xc0001c1540 my-cos global 65b64c1f1c29460e8c2e4bbfbd893c2c 744bfc56-d12c-4866-88d5-dac9139e0e5d  8f77c5b8f9344ff39b25352c889ef612  crn:v1:bluemix:public:cloud-object-storage:global:a/65b64c1f1c29460e8c2e4bbfbd893c2c:35a06926-0f86-49be-80ab-4019bacc55c9:: [] map[] map[] 0 active service_instance dff97f5c-bc5e-4455-b470-411c3edbe49c  0xc00034f590 0xc000479d10   /v1/resource_instances/crn:v1:bluemix:public:cloud-object-storage:global:a%2F65b64c1f1c29460e8c2e4bbfbd893c2c:35a06926-0f86-49be-80ab-4019bacc55c9::/resource_bindings /v1/resource_instances/crn:v1:bluemix:public:cloud-object-storage:global:a%2F65b64c1f1c29460e8c2e4bbfbd893c2c:35a06926-0f86-49be-80ab-4019bacc55c9::/resource_aliases  crn:v1:bluemix:public:globalcatalog::::deployment:744bfc56-d12c-4866-88d5-dac9139e0e5d%3Aglobal}
I1210 22:07:48.125569   92688 upload.go:142] Creating a new bucket new-test-bucket-test-test
Error: Unable to create bucket "new-test-bucket-test-test", BucketAlreadyExists: Container new-test-bucket-test-test exists
	status code: 409, request id: 49a135fd-06e6-4309-a10b-8ec544c77dca, host id:
Usage:
  pvsadm image upload [flags]

Flags:
      --resource-group string   Provide Resource-Group (default "default")
      --service-plan string     Provide serviceplan type (default "standard")
  -n, --instance-name string    Instance Name of the COS to be used
  -b, --bucket string           Region of the COS instance
  -o, --object-name string      S3 object name to be uploaded to the COS
  -r, --region string           Region of the COS instance (default "us-south")
  -h, --help                    help for upload

Global Flags:
      --add_dir_header                   If true, adds the file directory to the header of the log messages
      --alsologtostderr                  log to standard error as well as files
  -k, --api-key string                   IBMCLOUD API Key(env name: IBMCLOUD_API_KEY)
      --audit-file string                Audit logs for the tool (default "pvsadm.log")
      --debug                            Enable PowerVS debug option(ATTENTION: dev only option, may print sensitive data from APIs)
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --log_file string                  If non-empty, use this log file
      --log_file_max_size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
      --skip_headers                     If true, avoid header prefixes in the log messages
      --skip_log_headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

E1210 22:07:50.073511   92688 root.go:68] Unable to create bucket "new-test-bucket-test-test", BucketAlreadyExists: Container new-test-bucket-test-test exists
	status code: 409, request id: 49a135fd-06e6-4309-a10b-8ec544c77dca, host id:
🍏 Thursday December 10 2020 10:07:50 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶

OVA file overwrite should not happen when a conversion is attempted with the same image file name.

Test Case:

  1. Identify a separate file system or directory for saving all converted OVA images.
  2. Convert a qcow2 to OVA for image1.
  3. When another conversion is attempted for image2 with the same image name, the older OVA image file gets overwritten, i.e. the previously converted OVA image is lost.

Command used:

[root@rhel83-arc-pvsadm image]# ./pvsadm-linux-ppc64le image qcow2ova --image-name arc-centos-82 --image-dist centos --image-url https://cloud.centos.org/centos/8/ppc64le/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.ppc64le.qcow2

Expected results:

The tool should check for the presence of a file with the same name before saving the current conversion.
If an OVA with the same file/image name already exists in that folder, the tool should append a number or timestamp to the filename before saving it, so that the older OVA file does not get overwritten.

Suggested solution:
Appending the date and timestamp will make every converted OVA file/image name unique (see the example below).
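
For example, a timestamp suffix of the following style would keep every converted file distinct; this naming scheme is purely illustrative and is not what the tool currently produces:

arc-centos-82-$(date +%Y%m%d-%H%M%S).ova.gz   ->   arc-centos-82-20201204-120218.ova.gz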

Got access denied while importing a new image with the alpha 9 release

Command used
pvsadm image import -n ocp-cicd-sydney-04 -b shilpibucket1 --object-name rhel-83-121020.ova.gz --image-name rhel-11-test-121020 -r us-south
Output:

I1210 12:21:43.638937   46017 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
I1210 12:21:59.254089   46017 import.go:107] shilpibucket1 bucket found in the shilpi-test[ID:crn:v1:bluemix:public:cloud-object-storage:global:a/65b64c1f1c29460e8c2e4bbfbd893c2c:0e06e4d3-a962-4d28-8393-87b99c42acdf::] COS instance
I1210 12:21:59.646406   46017 import.go:114] rhel-83-121020.ova.gz object found in the shilpibucket1 bucket
I1210 12:22:00.092207   46017 import.go:140] Reading the existing service credential: pvsadm-service-cred
2020/12/10 12:22:00 the apiendpoint url for power is syd.power-iaas.cloud.ibm.com
2020/12/10 12:22:00 Calling the New Auth Method in the IBMPower Session Code
2020/12/10 12:22:00 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/10 12:22:00 the region is syd and the zone is  syd04
2020/12/10 12:22:00 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:6d030c4b-64a3-494d-aeed-8c453dd98903::
Error: [POST /pcloud/v1/cloud-instances/{cloud_instance_id}/images][400] pcloudCloudinstancesImagesPostBadRequest  &{Code:0 Description:bad request: the cloud storage access validation failed: ERROR: Access to bucket 'shilpibucket1' was denied
ERROR: S3 error: 403 (AccessDenied): Access Denied

When run with a new service credential, it passes:
pvsadm image import -n ocp-cicd-sydney-04 -b shilpibucket1 --object-name rhel-83-121020.ova.gz --image-name rhel-181-test-121020 -r us-south --service-credential-name newcred

pvsadm version
I1210 12:39:58.349995 46382 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
Version: v0.1-alpha.9, GoVersion: go1.15.5

Expectation: the import should succeed without passing --service-credential-name newcred.

Add an image upload flow via access-key and secret-key

Thread from #101 (comment)

In production, it is very likely that only a service credential is created for the COS bucket, whereas the existing code needs an IAM API key with permissions such as listing resources.

This task is to add the support for the image upload operation with access-key and secret-key options.

Assumptions:
  • The user will create the service credentials for COS via some other means, such as the UI or CLI.
  • The user will provide the access-key, secret-key, bucket, and file options.

Flow:
If the access-key and secret-key are provided, the tool should simply establish the S3 connection to the bucket and upload the image (a hypothetical invocation is sketched below).
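
A hypothetical invocation for this flow could look like the following; --accesskey and --secretkey already exist as flags on pvsadm image import, but adding them to upload is exactly what this task proposes, so the flags below are illustrative only:

pvsadm image upload --bucket <bucket-name> -o my-image.ova.gz --accesskey <hmac-access-key> --secretkey <hmac-secret-key>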

/kind feature

Fix the dsIdentify file

This task is to fix the ds-identify file extension from /etc/cloud/ds-identify.conf to /etc/cloud/ds-identify.cfg.

Remove the advanced toolchain repo post installation

Lately we have seen a lot of issues talking to the Advance Toolchain (AT) repository with both CentOS and RHEL images. Since this repo is not needed post-installation, it should be removed towards the end of the prep script.

Errors:

module.prepare.null_resource.bastion_packages[0] (remote-exec): Errors during downloading metadata for repository 'Advance_Toolchain':
module.prepare.null_resource.bastion_packages[0] (remote-exec):   - Curl error (28): Timeout was reached for ftp://public.dhe.ibm.com/software/server/POWER/Linux/toolchain/at//redhat/RHEL8/repodata/repomd.xml [Connection time-out]
module.prepare.null_resource.bastion_packages[0] (remote-exec): Error: Failed to download metadata for repo 'Advance_Toolchain': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried


Error: error executing "/tmp/terraform_1200923779.sh": Process exited with status 1


[retry_terraform] Encountered below errors:
module.prepare.null_resource.bastion_packages[0] (remote-exec): Error: Failed to download metadata for repo 'Advance_Toolchain': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Error: error executing "/tmp/terraform_1200923779.sh": Process exited with status 1
[retry_terraform] ERROR: Terraform command failed after 3 attempts! Please check the log files
/home/jenkins/workspace/daily-ocp4.5-powervs-script-tokyo-p9-min/deploy/build-harness/modules/deploy/Makefile.openshift4_pvs_script:11: recipe for target 'deploy:openshift4:powervs:script' failed
make[1]: *** [deploy:openshift4:powervs:script] Error 255
Makefile:319: recipe for target 'deploy-openshift4-powervs-script' failed
make: *** [deploy-openshift4-powervs-script] Error 2
[Pipeline] echo

/kind bug
/assign
/priority critical-urgent

Warning message during qemu-img resize

The following warning message is displayed while resizing the raw disk:

I1207 12:58:25.009589    4408 qcow2ova.go:169] Resizing the image /tmp/qcow2ova218515135/ova-img-dir/disk.raw to 20G
WARNING: Image format was not specified for '/tmp/qcow2ova218515135/ova-img-dir/disk.raw' and probing guessed raw.
         Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
         Specify the 'raw' format explicitly to remove the restrictions.
Image resized.
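
As the warning itself suggests, passing the format explicitly removes the restriction; with a reasonably recent qemu-img, the resize step could specify the raw format like this (path and size taken from the log above):

qemu-img resize -f raw /tmp/qcow2ova218515135/ova-img-dir/disk.raw 20G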

/kind bug
/priority important-soon
/assign

Unable to import image from newly created bucket - Access denied

🍎 Friday December 11 2020 08:54:51 AM 🍎
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶ ./pvsadm image upload --bucket new-test-bucket-3 -o ./pvsadm-darwin-amd64.tar.gz --resource-group ocp-cicd-resource-group
I1211 08:55:28.280074   65131 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
I1211 08:55:40.199134   65131 upload.go:87] bucket new-test-bucket-3 not found in the account provided
Would You Like to use Available COS Instance for creating bucket? [y/n]: n
Would you like to create new COS Instance? [y/n]: y
Provide Name of the cos-instance:my-cos
I1211 08:56:03.446725   65131 upload.go:111] Creating a new cos my-cos instance
I1211 08:56:10.505971   65131 resource.go:116] Resource service Instance Details :{0xc0008d8690 my-cos global 65b64c1f1c29460e8c2e4bbfbd893c2c 744bfc56-d12c-4866-88d5-dac9139e0e5d  8f77c5b8f9344ff39b25352c889ef612  crn:v1:bluemix:public:cloud-object-storage:global:a/65b64c1f1c29460e8c2e4bbfbd893c2c:a32bdd00-297a-40f7-b356-a49bdadac7f9:: [] map[] map[] 0 active service_instance dff97f5c-bc5e-4455-b470-411c3edbe49c  0xc000864a10 0xc0004e4cf0   /v1/resource_instances/crn:v1:bluemix:public:cloud-object-storage:global:a%2F65b64c1f1c29460e8c2e4bbfbd893c2c:a32bdd00-297a-40f7-b356-a49bdadac7f9::/resource_bindings /v1/resource_instances/crn:v1:bluemix:public:cloud-object-storage:global:a%2F65b64c1f1c29460e8c2e4bbfbd893c2c:a32bdd00-297a-40f7-b356-a49bdadac7f9::/resource_aliases  crn:v1:bluemix:public:globalcatalog::::deployment:744bfc56-d12c-4866-88d5-dac9139e0e5d%3Aglobal}
I1211 08:56:11.141201   65131 upload.go:142] Creating a new bucket new-test-bucket-3
I1211 08:56:13.023447   65131 s3client.go:120] Waiting for bucket "new-test-bucket-3" to be created...
I1211 08:56:13.562694   65131 s3client.go:130] uploading the object ./pvsadm-darwin-amd64.tar.gz
I1211 08:56:19.801698   65131 s3client.go:156] Upload completed successfully in 6.238141 seconds to location https://s3.us-south.cloud-object-storage.appdomain.cloud/new-test-bucket-3/pvsadm-darwin-amd64.tar.gz
🍎 Friday December 11 2020 08:56:19 AM 🍎
╭─github.com/ocp-power-automation/full-flow
╰─▶ ./pvsadm image import -n ocp-cicd-tokyo-04  -b new-test-bucket-3  -r us-south --object-name pvsadm-darwin-amd64.tar.gz --image-name pvsadm-darwin
I1211 08:57:03.972799   65234 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
I1211 08:57:18.203640   65234 import.go:107] new-test-bucket-3 bucket found in the my-cos[ID:crn:v1:bluemix:public:cloud-object-storage:global:a/65b64c1f1c29460e8c2e4bbfbd893c2c:a32bdd00-297a-40f7-b356-a49bdadac7f9::] COS instance
I1211 08:57:18.729575   65234 import.go:114] pvsadm-darwin-amd64.tar.gz object found in the new-test-bucket-3 bucket
I1211 08:57:19.175791   65234 import.go:140] Reading the existing service credential: pvsadm-service-cred
2020/12/11 08:57:20 the apiendpoint url for power is tok.power-iaas.cloud.ibm.com
2020/12/11 08:57:20 Calling the New Auth Method in the IBMPower Session Code
2020/12/11 08:57:20 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/11 08:57:20 the region is tok and the zone is  tok04
2020/12/11 08:57:20 the crndata is ... crn:v1:bluemix:public:power-iaas:tok04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:e4bb3d9d-a37c-4b1f-a923-4537c0c8beb3::
Error: [POST /pcloud/v1/cloud-instances/{cloud_instance_id}/images][400] pcloudCloudinstancesImagesPostBadRequest  &{Code:0 Description:bad request: the cloud storage access validation failed: ERROR: Access to bucket 'new-test-bucket-3' was denied
ERROR: S3 error: 403 (AccessDenied): Access Denied
 Error:bad request Message:}
Usage:
  pvsadm image import [flags]

Flags:
      --accesskey string                 Cloud Storage access key
  -b, --bucket string                    Cloud Storage bucket name
  -h, --help                             help for import
      --image-name string                Name to give imported image
  -i, --instance-id string               Instance ID of the PowerVS instance
  -n, --instance-name string             Instance name of the PowerVS
  -o, --object-name string               Cloud Storage image filename
      --ostype string                    Image OS Type, accepted values are[aix, ibmi, redhat, sles] (default "redhat")
  -r, --region string                    COS bucket location
      --secretkey string                 Cloud Storage secret key
      --service-credential-name string   Service Credential name to be auto generated (default "pvsadm-service-cred")
      --storagetype string               Storage type, accepted values are [tier1, tier3] (default "tier3")

Global Flags:
      --add_dir_header                   If true, adds the file directory to the header of the log messages
      --alsologtostderr                  log to standard error as well as files
  -k, --api-key string                   IBMCLOUD API Key(env name: IBMCLOUD_API_KEY)
      --audit-file string                Audit logs for the tool (default "pvsadm.log")
      --debug                            Enable PowerVS debug option(ATTENTION: dev only option, may print sensitive data from APIs)
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --log_file string                  If non-empty, use this log file
      --log_file_max_size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
      --skip_headers                     If true, avoid header prefixes in the log messages
      --skip_log_headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

E1211 08:57:23.544376   65234 root.go:68] [POST /pcloud/v1/cloud-instances/{cloud_instance_id}/images][400] pcloudCloudinstancesImagesPostBadRequest  &{Code:0 Description:bad request: the cloud storage access validation failed: ERROR: Access to bucket 'new-test-bucket-3' was denied
ERROR: S3 error: 403 (AccessDenied): Access Denied
 Error:bad request Message:}
🍎 Friday December 11 2020 08:57:23 AM 🍎
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶

Terminate pvsadm commands when a non-root user attempts to invoke the tool for any of the image tasks

Test Case:

  1. As root, give full permissions to the temp dir that will be used for storing the converted OVA, so that a non-root user can save the converted files in that directory.
  2. Create a non-root user called arc1 on a RHEL 8.3 system and log in.
  3. Download the pvsadm binary as the non-root user and invoke it to do the OVA conversion.

Error:

[root@rhel83-arc-pvsadm ~]# su arc1

[arc1@rhel83-arc-pvsadm root]$ whoami
arc1

[arc1@rhel83-arc-pvsadm root]$ !export
export IBMCLOUD_API_KEY=<xxx>

[arc1@rhel83-arc-pvsadm root]$ cd ~

[arc1@rhel83-arc-pvsadm ~]$  ./pvsadm-linux-ppc64le image qcow2ova --image-name arc-centos-82 --image-dist centos --image-url https://cloud.centos.org/centos/8/ppc64le/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.ppc64le.qcow2 --temp-dir /data/arc1
I1204 12:00:58.903129  175194 root.go:26] Using an API key from IBMCLOUD_API_KEY environment variable
I1204 12:00:58.905358  175194 qcow2ova.go:89] Autogenerated OS root password is: wUubrf-Jjp0qPeri
I1204 12:00:58.905371  175194 validate.go:26] Checking: platform
I1204 12:00:58.905379  175194 validate.go:26] Checking: tools
I1204 12:00:58.905421  175194 tools.go:29] qemu-img found at /usr/bin/qemu-img
I1204 12:00:58.905453  175194 tools.go:29] growpart found at /usr/bin/growpart
I1204 12:00:58.905464  175194 validate.go:26] Checking: diskspace
I1204 12:00:58.905483  175194 diskspace.go:33] free: 269G, need: 170G
I1204 12:00:58.906215  175194 get-image.go:40] Downloading https://cloud.centos.org/centos/8/ppc64le/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.ppc64le.qcow2 into /data/arc1/qcow2ova779358912/CentOS-8-GenericCloud-8.2.2004-20200611.2.ppc64le.qcow2
I1204 12:02:05.893976  175194 get-image.go:58] Download Completed!
I1204 12:02:05.894015  175194 qcow2ova.go:134] downloaded/copied the file at: /data/arc1/qcow2ova779358912/CentOS-8-GenericCloud-8.2.2004-20200611.2.ppc64le.qcow2
I1204 12:02:05.894179  175194 qcow2ova.go:162] Converting Qcow2(/data/arc1/qcow2ova779358912/CentOS-8-GenericCloud-8.2.2004-20200611.2.ppc64le.qcow2) image to raw(/data/arc1/qcow2ova779358912/ova-img-dir/disk.raw) format
I1204 12:02:12.183191  175194 qcow2ova.go:167] Conversion completed
I1204 12:02:12.183253  175194 qcow2ova.go:169] Resizing the image /data/arc1/qcow2ova779358912/ova-img-dir/disk.raw to 120G
I1204 12:02:18.430947  175194 qcow2ova.go:174] Resize completed
I1204 12:02:18.430977  175194 qcow2ova.go:176] Preparing the image
Error: failed while preparing the image for centos distro, err: failed to setup a loop device for file: /data/arc1/qcow2ova779358912/ova-img-dir/disk.raw, exitcode: 1, stdout: , err: losetup: cannot find an unused loop device

E1204 12:02:18.765736  175194 root.go:49] failed while preparing the image for centos distro, err: failed to setup a loop device for file: /data/arc1/qcow2ova779358912/ova-img-dir/disk.raw, exitcode: 1, stdout: , err: losetup: cannot find an unused loop device

Expected result:

Don't allow a non-root user to run the pvsadm binary. Print a meaningful message saying that only the root user can run this tool (a minimal check is sketched below).
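
A minimal sketch of the intended guard, shown here as the shell equivalent of a check the tool could run at the start of the image subcommands (the message text is illustrative):

if [ "$(id -u)" -ne 0 ]; then
    echo "pvsadm image commands must be run as the root user" >&2
    exit 1
fi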

Global flag audit-file should not show up in the pvsadm image help text since it is not implemented.

Test Case:

Use the global flag --audit-file in pvsadm image CLI commands.

[root@rhel83-arc-pvsadm image]# ./pvsadm-linux-ppc64le image -h
PowerVS Image management

Usage:
  pvsadm image [flags]
  pvsadm image [command]

Available Commands:
  import      Import the image into PowerVS instances
  qcow2ova    Convert the qcow2 image to ova format
  upload      Upload the image to the IBM COS

Flags:
  -h, --help   help for image

Global Flags:
  -k, --api-key string      IBMCLOUD API Key(env name: IBMCLOUD_API_KEY)
      --audit-file string   Audit logs for the tool (default "pvsadm.log")
      --debug               Enable PowerVS debug option

Use "pvsadm image [command] --help" for more information about a command.

Issue:

The command below allows the use of the global flag --audit-file but doesn't generate the audit file.

./pvsadm-linux-ppc64le image qcow2ova --image-name arc-centos-82 --image-dist centos --image-url https://cloud.centos.org/centos/8/ppc64le/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.ppc64le.qcow2 --temp-dir /data/ --audit-file /tmp/arc-audit.log --debug

Expected result:

Remove the global flag from the pvsadm image help text if it is not implemented yet.

Log levels for pvsadm

This task is to define the different log levels for the tool; #26 will give us the flexibility to use different verbosity levels.

Here is the proposal for defining the log levels for the tool.

Level | Purpose
------|-------------------------
1     | Info
2     | Debug
3     | Trace level 1
4     | Trace level 2
5     | Function entry and exit
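
Assuming this proposal, a user would pick a level at run time with the existing klog verbosity flag, for example (bucket and object names are placeholders):

pvsadm image upload --bucket <bucket-name> -o <object-name> -v 2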

/kind feature

Update Readme doc

Update the README doc with more information about image management.

/kind feature
/assign @arcprabh
/priority important-soon

pvsadm image import is not picking up the IBMCLOUD_API_KEY from the environment

🍏 Friday December 11 2020 07:28:32 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶ export IBMCLOUD_API_KEY=******************
🍏 Friday December 11 2020 07:28:44 PM 🍏
╭─github.com/ocp-power-automation/full-flow
╰─▶ ./pvsadm image import --pvs-instance-name ocp-cicd-tokyo-04 --bucket my-test-bucket-10 --bucket-region us-south --object CentOS-83-11122020.ova.gz --pvs-image-name CentOS-83-11122020
Error: The ApiKey property is required but was not specified.
Usage:
  pvsadm image import [flags]

Flags:
  -n, --pvs-instance-name string   PowerVS Instance name.
  -i, --pvs-instance-id string     PowerVS Instance ID.
  -b, --bucket string              Cloud Object Storage bucket name.
  -r, --bucket-region string       Cloud Object Storage bucket location.
  -o, --object string              Cloud Object Storage object name.
      --accesskey string           Cloud Object Storage HMAC access key.
      --secretkey string           Cloud Object Storage HMAC secret key.
      --pvs-image-name string      Name to PowerVS imported image.
      --ostype string              Image OS Type, accepted values are[aix, ibmi, redhat, sles]. (default "redhat")
      --pvs-storagetype string     PowerVS Storage type, accepted values are [tier1, tier3]. (default "tier3")
      --pvs-service-cred string    Service Credential name to be auto generated. (default "pvsadm-service-cred")
  -h, --help                       help for import

Global Flags:
      --add_dir_header                   If true, adds the file directory to the header of the log messages
      --alsologtostderr                  log to standard error as well as files
  -k, --api-key string                   IBMCLOUD API Key(env name: IBMCLOUD_API_KEY)
      --audit-file string                Audit logs for the tool (default "pvsadm.log")
      --debug                            Enable PowerVS debug option(ATTENTION: dev only option, may print sensitive data from APIs)
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --log_file string                  If non-empty, use this log file
      --log_file_max_size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
      --one_output                       If true, only write logs to their native severity level (vs also writing to each lower severity level
      --skip_headers                     If true, avoid header prefixes in the log messages
      --skip_log_headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

E1211 19:28:50.723929   50173 root.go:68] The ApiKey property is required but was not specified.
🍏 Friday December 11 2020 07:28:50 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶ 
🍏 Friday December 11 2020 07:28:57 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶ ./pvsadm version
I1211 19:29:00.830006   50244 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
Version: v0.1-alpha.10-4-g57c47c5-dirty, GoVersion: go1.15.5
🍏 Friday December 11 2020 07:29:00 PM 🍏
╭─github.com/ocp-power-automation/full-flow
╰─▶

RHEL image conversion failed with the v0.1-alpha.8 release

Command used
pvsadm image qcow2ova --image-name rhel-83-12082020 --image-url ./rhel-8.3-ppc64le-kvm.qcow2 --image-dist rhel --image-size 120 --rhn-user --rhn-password -t /shilpi

Error:

Generating initramfs for kernel version: 4.18.0-240.1.1.el8_3.ppc64le
Changing password for user root.
passwd: all authentication tokens updated successfully.
Unregistering from: subscription.rhsm.redhat.com:443/subscription
System has been unregistered.
All local data removed
, stderr: warning: /var/cache/dnf/rhel-8-for-ppc64le-baseos-rpms-6c9bd4f15855d59d/packages/freetype-2.9.1-4.el8_3.1.ppc64le.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Importing GPG key 0xFD431D51:
 Userid     : "Red Hat, Inc. (release key 2) <[email protected]>"
 Fingerprint: 567E 347A D004 4ADE 55BA 8A5F 199E 2F91 FD43 1D51
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Importing GPG key 0xD4082792:
 Userid     : "Red Hat, Inc. (auxiliary key) <[email protected]>"
 Fingerprint: 6A6A A7C9 7C88 90AE C6AE BFE2 F76F 66C3 D408 2792
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
dracut: Disabling early microcode, because kernel does not support it. CONFIG_MICROCODE_[AMD|INTEL]!=y
dracut: Disabling early microcode, because kernel does not support it. CONFIG_MICROCODE_[AMD|INTEL]!=y
Generating grub configuration file ...
Generating boot entries from BLS files...
done
error: package ibm-power-repo-latest.noarch is not installed

output.log

504 unknown error reported due to gateway timeouts while importing image from IBM COS to the PowerVS instance.

Test case:

  1. Download a qcow2 image and convert it to OVA image using pvsadm image qcow2ova image tool ==> Completed
  2. Upload the OVA image to IBM COS using pvsadm image upload tool ==> Completed
  3. Import the image from IBM COS to PowerVS instance using pvsadm image import tool ==> Failed with 504 unknown error

Failing command with debug enabled :

[root@chaos-arc46-bastion ~]# ./pvsadm-linux-ppc64le image import -n ocp-validation-toronto-01 -b bucket-validation-team --object-name /root/rhel83-arc-dec3.ova.gz --image-name rhel83-arc-dec3 -r tor01 --debug
I1204 01:45:14.015869  881720 root.go:26] Using an API key from IBMCLOUD_API_KEY environment variable
I1204 01:45:14.016170  881720 import.go:64] Auto Generating the COS Service credential for importing the image
I1204 01:45:19.839502  881720 import.go:112] bucket-validation-team bucket found in the cos-validation-team[ID:crn:v1:bluemix:public:cloud-object-storage:global:a/65b64c1f1c29460e8c2e4bbfbd893c2c:80104e1e-987e-4c53-9660-71783a347052::] COS instance
2020/12/04 01:45:23 the apiendpoint url for power is tor.power-iaas.cloud.ibm.com
2020/12/04 01:45:23 Calling the New Auth Method in the IBMPower Session Code
2020/12/04 01:45:23 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/04 01:45:23 the region is tor and the zone is  tor01
2020/12/04 01:45:23 the crndata is ... crn:v1:bluemix:public:power-iaas:tor01:a/65b64c1f1c29460e8c2e4bbfbd893c2c:e8f845fa-8caf-4228-bf1c-83dd62c57666:: 
POST /pcloud/v1/cloud-instances/e8f845fa-8caf-4228-bf1c-83dd62c57666/images HTTP/1.1
Host: tor.power-iaas.cloud.ibm.com
User-Agent: Go-http-client/1.1
Content-Length: 296
Accept: application/json
Authorization: Bearer xxxxxxx
Content-Type: application/json
Crn: crn:v1:bluemix:public:power-iaas:tor01:a/65b64c1f1c29460e8c2e4bbfbd893c2c:e8f845fa-8caf-4228-bf1c-83dd62c57666::
Accept-Encoding: gzip

{"accessKey":"xxxxx","bucketName":"bucket-validation-team","diskType":"tier3","imageFilename":"/root/rhel83-arc-dec3.ova.gz","imageName":"rhel83-arc-dec3","osType":"redhat","region":"tor01","secretKey":"xxxx","source":"url"}

HTTP/1.1 504 Gateway Time-out
Transfer-Encoding: chunked
Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Cf-Ray: 5fc390774839ec00-BOS
Connection: keep-alive
Content-Type: text/html; charset=UTF-8
Date: Fri, 04 Dec 2020 06:49:23 GMT
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Server: cloudflare
Set-Cookie: cf_ob_info=504:5fc390774839ec00:BOS; path=/; expires=Fri, 04-Dec-20 06:49:53 GMT
Set-Cookie: cf_use_ob=443; path=/; expires=Fri, 04-Dec-20 06:49:53 GMT
X-Frame-Options: SAMEORIGIN

[Cloudflare 504 "Gateway time-out" HTML error page body omitted. Recoverable details: Browser: Working; Cloudflare (Boston): Working; Host tor.power-iaas.cloud.ibm.com: Error; Ray ID 5fc390774839ec00; 2020-12-04 06:49:23 UTC; "Please try again in a few minutes."]


Error: unknown error (status 504): {resp:0xc0008e2000} 
E1204 01:49:28.331523  881720 root.go:49] unknown error (status 504): {resp:0xc0008e2000} 

Document the release process

This task is for documenting the release process, which involves tagging and publishing the release.

/kind feature
/priority important-longterm

Document the "The system cannot find the path specified" error on Windows

While running the following command on the Windows platform, the tool failed to find the file:

 ./pvsadm image upload --bucket bucket-validation-team --file /tmp/tmp.2iM2Ph53Ls/test --bucket-region us-south --resource-group ocp-validation-resource-group
I1215 07:55:48.582771   14036 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
I1215 07:55:59.301186   14036 upload.go:81] Found bucket bucket-validation-team in the cos-validation-team instance
I1215 07:55:59.301186   14036 s3client.go:130] uploading the object /tmp/tmp.2iM2Ph53Ls/test
Error: err opening file /tmp/tmp.2iM2Ph53Ls/test: open /tmp/tmp.2iM2Ph53Ls/test: The system cannot find the path specified.
Usage:
  pvsadm image upload [flags]
Flags:
      --resource-group string      Name of user resource group. (default "default")
      --cos-storageclass string    Cloud Object Storage Class type, available values are [standard, smart, cold, vault]. (default "standard")
  -n, --cos-instance-name string   Cloud Object Storage instance name.
  -b, --bucket string              Cloud Object Storage bucket name.
  -f, --file string                The PATH to the file to upload.
  -r, --bucket-region string       Cloud Object Storage bucket region. (default "us-south")
  -h, --help                       help for upload
Global Flags:
      --add_dir_header                   If true, adds the file directory to the header of the log messages
      --alsologtostderr                  log to standard error as well as files
  -k, --api-key string                   IBMCLOUD API Key(env name: IBMCLOUD_API_KEY)
      --audit-file string                Audit logs for the tool (default "pvsadm.log")
      --debug                            Enable PowerVS debug option(ATTENTION: dev only option, may print sensitive data from APIs)
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --log_file string                  If non-empty, use this log file
      --log_file_max_size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
      --one_output                       If true, only write logs to their native severity level (vs also writing to each lower severity level
      --skip_headers                     If true, avoid header prefixes in the log messages
      --skip_log_headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
E1215 07:55:59.311261   14036 root.go:68] err opening file /tmp/tmp.2iM2Ph53Ls/test: open /tmp/tmp.2iM2Ph53Ls/test: The system cannot find the path specified.
Administrator@frenulum1 /cygdrive/c/users/Julie
$ ls -l  /tmp/tmp.2iM2Ph53Ls/test
-rw-r--r-- 1 Administrator None 0 Dec 15 06:54 /tmp/tmp.2iM2Ph53Ls/test

This is because the command is running in the Cygwin environment and the actual path of the file is located under <cygwin_path>/tmp/XXXXX, hence we need documentation about how to use this on Windows (a possible workaround is sketched below).
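
A possible workaround to document, assuming the Cygwin cygpath utility is available, is to convert the POSIX path to a Windows path before passing it to pvsadm:

./pvsadm image upload --bucket bucket-validation-team --file "$(cygpath -w /tmp/tmp.2iM2Ph53Ls/test)" --bucket-region us-south --resource-group ocp-validation-resource-group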

/kind documentation
/priority important-soon

Expose cloud-config template

Expose the cloud-config template as an option so that the user can override it.

/kind feature
/priority important-longterm

Not able to import an image even after specifying the region

I used the command:
pvsadm image import -n ocp-cicd-tokyo-04 -b ocp4-images-bucket-retired --object-name rhel-8.3-test-200-12042020.ova.gz --image-name rhel-8.3-test-200-12042020 --storagetype tier1 -r us-south

I am getting the below error:

[root@shilpi-test shilpi]# pvsadm image import -n ocp-cicd-tokyo-04 -b ocp4-images-bucket-retired --object-name rhel-8.3-test-200-12042020.ova.gz  --image-name rhel-8.3-test-200-12042020 --storagetype tier1 -r us-south
I1204 13:18:42.116446   86021 root.go:26] Using an API key from IBMCLOUD_API_KEY environment variable
I1204 13:18:42.116493   86021 import.go:64] Auto Generating the COS Service credential for importing the image
I1204 13:18:46.649937   86021 import.go:112] ocp4-images-bucket-retired bucket found in the ocp4-on-power[ID:crn:v1:bluemix:public:cloud-object-storage:global:a/65b64c1f1c29460e8c2e4bbfbd893c2c:8aeefa98-c07b-4d22-aa7a-1694374ae275::] COS instance
Error: region not found for the zone, talk to the developer to add the support into the tool: tok04
E1204 13:18:55.517609   86021 root.go:49] region not found for the zone, talk to the developer to add the support into the tool: tok04

The image is present at that path, and us-south is a valid region.

Expected Result
The image should get imported with the us-south region.

pvsadm image upload crashes on invalid input

A character was entered instead of a number when selecting a COS instance. The output should be an error message rather than a crash.

🍏 Thursday December 10 2020 09:08:13 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶ ./pvsadm image upload --bucket new-test-bucket -o ./CentOS_83.ova.gz
I1210 21:08:24.740026   88981 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
I1210 21:08:46.143190   88981 upload.go:87] bucket new-test-bucket not found in the account provided
Would You Like to use Available COS Instance for creating bucket? [y/n]: y
I1210 21:08:50.116350   88981 upload.go:91] Select a COS Instance
0. cos-ocp-powervs (5832edee-fa06-453c-9191-a992d3da09eb)
1. docker-on-power (c39e4b56-1f89-44d0-a4e2-3291bddddae7)
2. pvsadm-cos-instance (675e3934-630f-4e64-8de9-3ae15353d63e)
3. cos-validation-team (80104e1e-987e-4c53-9660-71783a347052)
4. ocp4-on-power (8aeefa98-c07b-4d22-aa7a-1694374ae275)
5. prow-object-store (7b6e2743-3955-489c-b435-44bbf6db6ef8)
Enter a number:n
F1210 21:08:57.062340   88981 utils.go:70] strconv.ParseInt: parsing "n": invalid syntax
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc00000e001, 0xc000148000, 0x59, 0xbb)
	/home/runner/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x207b180, 0xc000000003, 0x0, 0x0, 0xc000392070, 0x1ff2502, 0x8, 0x46, 0x0)
	/home/runner/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printDepth(0x207b180, 0x3, 0x0, 0x0, 0x1, 0xc0008bfa50, 0x1, 0x1)
	/home/runner/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:718 +0x165
k8s.io/klog/v2.(*loggingT).print(...)
	/home/runner/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:703
k8s.io/klog/v2.Fatal(...)
	/home/runner/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1443
github.com/ppc64le-cloud/pvsadm/pkg/client/s3utils.SelectCosInstance(0x6, 0x3, 0x1a2099f)
	/home/runner/work/pvsadm/pvsadm/pkg/client/s3utils/utils.go:70 +0x385
github.com/ppc64le-cloud/pvsadm/cmd/image/upload.glob..func1(0x2069bc0, 0xc0000ac580, 0x0, 0x4, 0x0, 0x0)
	/home/runner/work/pvsadm/pvsadm/cmd/image/upload/upload.go:101 +0x79b
github.com/spf13/cobra.(*Command).execute(0x2069bc0, 0xc0000ac540, 0x4, 0x4, 0x2069bc0, 0xc0000ac540)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:842 +0x47c
github.com/spf13/cobra.(*Command).ExecuteC(0x2068c00, 0x20197e0, 0xc000070778, 0xc000541f78)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
github.com/ppc64le-cloud/pvsadm/cmd.Execute()
	/home/runner/work/pvsadm/pvsadm/cmd/root.go:67 +0x31
main.main()
	/home/runner/work/pvsadm/pvsadm/main.go:8 +0x25

goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x207b180)
	/home/runner/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/home/runner/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0xd8

goroutine 65 [IO wait]:
internal/poll.runtime_pollWait(0x279a758, 0x72, 0x1b7ee20)
	/opt/hostedtoolcache/go/1.15.5/x64/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000042e18, 0x72, 0x1b7ee00, 0x2014850, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000042e00, 0xc000777a80, 0x1a0c, 0x1a0c, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_unix.go:159 +0x1a5
net.(*netFD).Read(0xc000042e00, 0xc000777a80, 0x1a0c, 0x1a0c, 0x203000, 0x128a41b, 0xc0000a2be0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc00000e028, 0xc000777a80, 0x1a0c, 0x1a0c, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc00010e280, 0xc000777a80, 0x1a0c, 0x1a0c, 0x917, 0x1a07, 0xc00082a668)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc0000a2d00, 0x1b78600, 0xc00010e280, 0x100b0c5, 0x19322e0, 0x19f4680)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc0000a2a80, 0x9c97e58, 0xc00000e028, 0x5, 0xc00000e028, 0x906)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc0000a2a80, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc0000a2a80, 0xc000775000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:1252 +0x15f
net/http.(*persistConn).Read(0xc000754360, 0xc000775000, 0x1000, 0x1000, 0xc000014780, 0xc00082ac58, 0x10054b5)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1887 +0x77
bufio.(*Reader).fill(0xc00010dd40)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bufio/bufio.go:101 +0x105
bufio.(*Reader).Peek(0xc00010dd40, 0x1, 0x0, 0x0, 0x1, 0x0, 0xc000014a20)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bufio/bufio.go:139 +0x4f
net/http.(*persistConn).readLoop(0xc000754360)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:2040 +0x1a8
created by net/http.(*Transport).dialConn
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1708 +0xcb7

goroutine 66 [select]:
net/http.(*persistConn).writeLoop(0xc000754360)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:2340 +0x11c
created by net/http.(*Transport).dialConn
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1709 +0xcdc

goroutine 41 [select]:
net/http.(*persistConn).writeLoop(0xc0003605a0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:2340 +0x11c
created by net/http.(*Transport).dialConn
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1709 +0xcdc

goroutine 37 [IO wait]:
internal/poll.runtime_pollWait(0x279a670, 0x72, 0x1b7ee20)
	/opt/hostedtoolcache/go/1.15.5/x64/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000042a98, 0x72, 0x1b7ee00, 0x2014850, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000042a80, 0xc000b80000, 0x11d58, 0x11d58, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_unix.go:159 +0x1a5
net.(*netFD).Read(0xc000042a80, 0xc000b80000, 0x11d58, 0x11d58, 0x203000, 0x128a41b, 0xc000289660)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc00000e050, 0xc000b80000, 0x11d58, 0x11d58, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc00054aa60, 0xc000b80000, 0x11d58, 0x11d58, 0x320, 0x11d53, 0xc000829668)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc000289780, 0x1b78600, 0xc00054aa60, 0x100b0c5, 0x19322e0, 0x19f4680)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000289500, 0x9c97e58, 0xc00000e050, 0x5, 0xc00000e050, 0x30f)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc000289500, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc000289500, 0xc000154000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:1252 +0x15f
net/http.(*persistConn).Read(0xc000360360, 0xc000154000, 0x1000, 0x1000, 0xc000564480, 0xc000829c58, 0x10054b5)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1887 +0x77
bufio.(*Reader).fill(0xc00010cb40)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bufio/bufio.go:101 +0x105
bufio.(*Reader).Peek(0xc00010cb40, 0x1, 0x0, 0x0, 0x1, 0x0, 0xc0000146c0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bufio/bufio.go:139 +0x4f
net/http.(*persistConn).readLoop(0xc000360360)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:2040 +0x1a8
created by net/http.(*Transport).dialConn
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1708 +0xcb7

goroutine 38 [select]:
net/http.(*persistConn).writeLoop(0xc000360360)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:2340 +0x11c
created by net/http.(*Transport).dialConn
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1709 +0xcdc

goroutine 40 [IO wait]:
internal/poll.runtime_pollWait(0x279a4a0, 0x72, 0x1b7ee20)
	/opt/hostedtoolcache/go/1.15.5/x64/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc00081e898, 0x72, 0x1b7ee00, 0x2014850, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00081e880, 0xc000762000, 0x1452, 0x1452, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/internal/poll/fd_unix.go:159 +0x1a5
net.(*netFD).Read(0xc00081e880, 0xc000762000, 0x1452, 0x1452, 0x203000, 0x128a41b, 0xc000554160)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000112008, 0xc000762000, 0x1452, 0x1452, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc00054bc60, 0xc000762000, 0x1452, 0x1452, 0x57c, 0x1445, 0xc000c0c668)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc000554280, 0x1b78600, 0xc00054bc60, 0x100b0c5, 0x19322e0, 0x19f4680)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000554000, 0x9c97e58, 0xc000112008, 0x5, 0xc000112008, 0x56c)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc000554000, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc000554000, 0xc00074d000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/crypto/tls/conn.go:1252 +0x15f
net/http.(*persistConn).Read(0xc0003605a0, 0xc00074d000, 0x1000, 0x1000, 0xc000040600, 0xc000c0cc58, 0x10054b5)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1887 +0x77
bufio.(*Reader).fill(0xc000621020)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bufio/bufio.go:101 +0x105
bufio.(*Reader).Peek(0xc000621020, 0x1, 0x0, 0x0, 0x1, 0x0, 0xc000040360)
	/opt/hostedtoolcache/go/1.15.5/x64/src/bufio/bufio.go:139 +0x4f
net/http.(*persistConn).readLoop(0xc0003605a0)
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:2040 +0x1a8
created by net/http.(*Transport).dialConn
	/opt/hostedtoolcache/go/1.15.5/x64/src/net/http/transport.go:1708 +0xcb7
🍏 Thursday December 10 2020 09:08:57 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶

Create an Issue template

Create an Issue template that captures all the details required to debug an issue

/assign
/kind documentation
/priority important-longterm

Default region is not working for the image import command

e.g: the following command is not working:

# importing image from default region and using default storage type (service credential will be autogenerated)
pvsadm image import -n upstream-core-lon04 -b <BUCKETNAME> --object-name rhel-83-10032020.ova.gz --image-name test-image

error:

╰─$     go run . image import -v=9 --bucket ocp4-images-bucket --image-name mkumatag-image --instance-name upstream-core-lon04 --object-name rhel-83-11242020.ova.gz
I1203 20:03:47.849039    7088 root.go:30] Using an API key from IBMCLOUD_API_KEY environment variable
Error: required flag(s) "region" not set
E1203 20:03:47.849169    7088 root.go:57] required flag(s) "region" not set
exit status 1
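
One possible direction (a minimal sketch only, not the tool's actual fix): make --region optional and, when it is empty, derive it from the bucket's location via the COS S3 API. The bucketRegion helper name and the assumption about the location-constraint format are mine, not the project's.

package upload

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// bucketRegion (hypothetical helper) derives a default region from the
// bucket's location so that --region could be made optional.
func bucketRegion(svc *s3.S3, bucket string) (string, error) {
	out, err := svc.GetBucketLocation(&s3.GetBucketLocationInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		return "", fmt.Errorf("failed to look up location for bucket %s: %v", bucket, err)
	}
	if out.LocationConstraint == nil {
		return "", fmt.Errorf("no location constraint returned for bucket %s", bucket)
	}
	// Assumption: IBM COS location constraints look like "eu-gb-standard";
	// strip the storage class suffix to get the region.
	loc := *out.LocationConstraint
	if i := strings.LastIndex(loc, "-"); i > 0 {
		loc = loc[:i]
	}
	return loc, nil
}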

/kind bug
/assign @bkhadars

Centos image conversion failed with release v0.1-alpha.8

command used
pvsadm image qcow2ova --image-name centos-83-12082020 --image-url ./CentOS-8-GenericCloud-8.3.2011-20201204.2.ppc64le.qcow2 --image-dist centos --image-size 120 --debug

Error

, stderr: warning: /var/cache/dnf/appstream-91c8503df9a23bb1/packages/ksh-20120801-254.el8.ppc64le.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
Importing GPG key 0x8483C65D:
 Userid     : "CentOS (CentOS Official Signing Key) <[email protected]>"
 Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
/usr/bin/dracut: line 947: rm: command not found
/usr/bin/dracut: line 3: rm: command not found

Output:

output.log

Consistent flag names across image upload and import

upload flags:

Flags:
      --instance-name string    Instance Name of the COS to be used
      --resource-group string   Provide Resource-Group (default "default")
      --service-plan string     Provide serviceplan type (default "standard")
  -b, --bucket-name string      Region of the COS instance
  -i, --image-name string       S3 object name to be uploaded to the COS
  -r, --region string           Region of the COS instance (default "us-south")
  -h, --help                    help for upload

import flags:

Flags:
      --accesskey string                 Cloud Storage access key
  -b, --bucket string                    Cloud Storage bucket name
  -h, --help                             help for import
      --image-name string                Name to give imported image
  -i, --instance-id string               Instance ID of the PowerVS instance
  -n, --instance-name string             Instance name of the PowerVS
      --object-name string               Cloud Storage image filename
      --ostype string                    Image OS Type, accepted values are[aix, ibmi, redhat, sles] (default "redhat")
  -r, --region string                    Cloud Storage Region
      --secretkey string                 Cloud Storage secret key
      --service-credential-name string   Service Credential name to be auto generated (default "pvsadm-service-cred")
      --storagetype string               Storage type, accepted values are [tier1, tier3] (default "tier3")

  • bucket-name (upload) vs bucket (import) - we can use bucket in both places
  • image-name (upload) vs object-name (import) - we can use object-name in both places
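
To keep the two subcommands from drifting again, one option (a minimal sketch assuming the cobra setup the tool already uses; the helper and variable names are illustrative) is to register the shared flags from a single helper so upload and import always expose the same --bucket and --object-name names:

package main

import "github.com/spf13/cobra"

// Shared flag values; illustrative only, not the tool's actual option struct.
var bucket, objectName string

// addCommonObjectFlags registers the flags that upload and import should
// share, so the names cannot diverge between subcommands.
func addCommonObjectFlags(cmd *cobra.Command) {
	cmd.Flags().StringVarP(&bucket, "bucket", "b", "", "Cloud Object Storage bucket name")
	cmd.Flags().StringVar(&objectName, "object-name", "", "Object (image file) name in the bucket")
}

func main() {
	uploadCmd := &cobra.Command{Use: "upload"}
	importCmd := &cobra.Command{Use: "import"}
	addCommonObjectFlags(uploadCmd)
	addCommonObjectFlags(importCmd)
}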

/kind bug
/priority important-soon
/assign @bkhadars

qcow2ova: Improve the image preparation time

During image preparation we have seen that the Advance Toolchain (AT) repo takes a lot of time to download the initial repository metadata from the IBM site, and this increases the overall time taken for image preparation.

Updating Subscription Management repositories.
IBM_Power_Tools                                  24 kB/s |  22 kB     00:00    
Advance Toolchain                               2.9 kB/s | 1.7 MB     10:19    <==== pulling the AT repo; depending on the network speed this step often takes a long time
Package powerpc-utils-1.3.6-11.el8.ppc64le is already installed.

Though this repository gets installed with the ibm-power-repo rpm, it is not used anywhere during image prep, so we can disable the repository right after installation.

Feed the pvsadm tool version into ova image

Right now it is very difficult to make out which version of the pvsadm command created a given ova image; we need to figure out a way of embedding the tool version into the ova bundle (for example at build time, as sketched below).
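
One common pattern for this (a sketch only; the package path and ldflags wiring are assumptions, not existing project code) is a version variable stamped at build time, which the OVA-creation step can then record in the bundle's metadata:

// pkg/version/version.go (hypothetical package)
package version

// Version is stamped at build time, for example with:
//   go build -ldflags "-X github.com/ppc64le-cloud/pvsadm/pkg/version.Version=$(git describe --tags)"
var Version = "unknown"

The qcow2ova step could then write version.Version into the OVA metadata (or a file inside the image) so that any imported image can be traced back to the pvsadm release that produced it.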

Add logic to prevent existing image overwrite in IBM cloud object store

Test Case:

1. Upload an OVA image named rhel83-arc-dec3.ova.gz to IBM COS.
2. Attempt to upload another image with the same name to IBM COS.

[root@arc-centos-pvsadm imagetest]# ./pvsadm image upload --bucket-name bucket-validation-team -i rhel83-arc-dec3.ova.gz --instance-name cos-validation-team
I1208 07:43:27.630776    5186 root.go:30] Using an API key from IBMCLOUD_API_KEY environment variable
I1208 07:43:33.661925    5186 s3client.go:130] uploading the object rhel83-arc-dec3.ova.gz
I1208 07:44:00.363677    5186 s3client.go:156] Upload completed successfully in 26.701701 seconds to location https://s3.us-south.cloud-object-storage.appdomain.cloud/bucket-validation-team/rhel83-arc-dec3.ova.gz

Issue:

Older image in IBM COS is getting overwritten.

Expected results:

Image overwrite should not happen. Either the image should get saved with a random suffix or a date/timestamp appended to the name, or the upload should fail with a message saying that an object with the same name already exists (a sketch of a pre-upload existence check is shown below).
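
A minimal sketch of such a guard, assuming the aws-sdk-go S3 client that the tool already uses for COS (the helper name is illustrative): issue a HeadObject call before starting the upload and refuse to overwrite an existing object.

package upload

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

// checkObjectExists reports whether the object is already in the bucket,
// so the caller can refuse to overwrite it (or append a timestamp).
func checkObjectExists(svc *s3.S3, bucket, key string) (bool, error) {
	_, err := svc.HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err == nil {
		return true, nil
	}
	// HeadObject reports a missing key as a "NotFound" error code.
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "NotFound" {
		return false, nil
	}
	return false, fmt.Errorf("failed to check object %s/%s: %v", bucket, key, err)
}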

Improve the help messages for the pvsadm image upload steps

There is a mismatch between what the flags do and how they are described in the image upload help text (for example, --bucket-name is described as "Region of the COS instance").

Flags:
--instance-name string Instance Name of the COS to be used
--resource-group string Provide Resource-Group (default "default")
--service-plan string Provide service plan type (default "standard")
-b, --bucket-name string Region of the COS instance
-i, --image-name string S3 object name to be uploaded to the COS
-r, --region string Region of the COS instance (default "us-south")
-h, --help help for upload

pvsadm image upload --bucket-name travis-powervs-objstorage-cos-standard-oay -^Cmage-name CentOS_83.ova.gz --instance-name CentOS_83.ova.gz

Mismatch between the actual binary name pvsadm-linux-ppc64le used to run the command and the pvsadm command in the help text.

Test - Validate the pvsadm cli tool.

The pvsadm tool is downloaded and saved as pvsadm-linux-ppc64le. The commands are also run using the same binary name.

Issue - The help text for all commands refers to pvsadm as the command, not pvsadm-linux-ppc64le.

[root@chaos-arc46-bastion ~]# ./pvsadm-linux-ppc64le -h
Power Systems Virtual Server projects deliver flexible compute capacity for Power Systems workloads.
Integrated with the IBM Cloud platform for on-demand provisioning.

This is a tool built for the Power Systems Virtual Server helps managing and maintaining the resources easily

Usage:
  pvsadm [command]

Available Commands:
  get         Get the resources
  help        Help about any command
  image       PowerVS Image management
  purge       Purge the powervs resources
  version     Print the version number

Flags:
  -k, --api-key string      IBMCLOUD API Key(env name: IBMCLOUD_API_KEY)
      --debug               Enable PowerVS debug option
      --audit-file string   Audit logs for the tool (default "pvsadm.log")
  -h, --help                help for pvsadm

Use "pvsadm [command] --help" for more information about a command.

Expected results:

We need to align the command name shown in the help text (pvsadm) with the actual binary name (pvsadm-linux-ppc64le), so that the commands a user copies from the help text match the binary they are actually running.

Discussed with Manju and he proposed a solution for this.
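
One possible fix, sketched below (an illustration only, not necessarily the solution that was discussed): derive the command name shown in the help text from the binary that was actually invoked.

package main

import (
	"os"
	"path/filepath"

	"github.com/spf13/cobra"
)

func main() {
	// Use the invoked binary name (e.g. pvsadm-linux-ppc64le) in the help
	// text instead of a hard-coded "pvsadm".
	rootCmd := &cobra.Command{
		Use:   filepath.Base(os.Args[0]),
		Short: "Power Systems Virtual Server admin tool",
	}
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}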

Purge vm throws an error message when vm is in BUILD state

Hitting the following error while a vm is in BUILD state:

+-------------------------------+---------------------------+--------------------------------------+------+-----+------------------+--------------------------+
|             NAME              |       IP ADDRESSES        |                IMAGE                 | CPUS | RAM |      STATUS      |      CREATION DATE       |
+-------------------------------+---------------------------+--------------------------------------+------+-----+------------------+--------------------------+
| k8s-cluster-1601707345-master | External:                 | d0b1f8a4-016e-4819-a395-3ce1d7a7c3a2 |    0 |   0 | Status: BUILD    | 2020-10-03T06:43:19.000Z |
|                               | Private:                  |                                      |      |     | Health: PENDING  |                          |
| k8s-cluster-1601740430-master | External: 130.198.103.13  | d0b1f8a4-016e-4819-a395-3ce1d7a7c3a2 |  0.5 |  32 | Status: ACTIVE   | 2020-10-03T15:54:49.000Z |
|                               | Private: 192.168.155.13   |                                      |      |     | Health: OK       |                          |
| k8s-cluster-1601710944-master | External:                 | d0b1f8a4-016e-4819-a395-3ce1d7a7c3a2 |    0 |   0 | Status: ERROR    | 2020-10-03T07:43:17.000Z |
|                               | Private:                  |                                      |      |     | Health: CRITICAL |                          |
| k8s-cluster-1601591969-master | External: 130.198.103.139 | d0b1f8a4-016e-4819-a395-3ce1d7a7c3a2 |  0.5 |  32 | Status: ACTIVE   | 2020-10-01T22:40:16.000Z |
|                               | Private: 192.168.155.139  |                                      |      |     | Health: OK       |                          |
+-------------------------------+---------------------------+--------------------------------------+------+-----+------------------+--------------------------+
Deleting all the above instances, instances can't be claimed back once deleted. Do you really want to continue?[yes/no]: yes█
I1005 18:34:52.469476   50877 vms.go:61] Deleting the k8s-cluster-1601707345-master, and ID: 187a731b-17c3-4d38-860b-d0a81e913d51
2020/10/05 18:34:52 Calling the Power PVM Delete Method
2020/10/05 18:34:52 Calling the New Auth Method in the IBMPower Session Code
2020/10/05 18:34:52 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/10/05 18:34:52 the region is syd and the zone is  syd04
2020/10/05 18:34:52 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:013b8948-6cf4-4fae-9b70-71ddadd4f91a::
Error: {"description":"bad request: pvm-instances can only be deleted in the active, shutoff, or error state","error":"bad request"}
E1005 18:34:53.183318   50877 root.go:38] {"description":"bad request: pvm-instances can only be deleted in the active, shutoff, or error state","error":"bad request"}
╭─manjunath@Manjunaths-MacBook-Pro-2 ~/ppc64le-cloud/src/github.com/ppc64le-cloud/pvsadm ‹master›

Move PR testing to Prow

Right now we are using GitHub Actions for PR testing, for cutting releases (tags), and for merges as well. This will become a problem once we exceed the free quota, hence PR testing and anything that is not needed for publishing binaries needs to be moved to the Prow infra.

/kind cleanup
/priority important-longterm

Add a wait for image import command

At present the image import process is async: the command just triggers the import and exits, irrespective of the state and final result. This behavior is fine for a few cases but not really suitable for e2e automation, hence a watch option is required that waits for the image to be imported successfully or to fail; a minimal sketch follows.
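
A minimal sketch of such a wait loop (the getImageState callback, the state strings, and the polling interval are assumptions; the real PowerVS client calls may differ):

package imageimport

import (
	"fmt"
	"time"
)

// waitForImage polls the image state until it becomes active, fails, or the
// timeout expires. getImageState is a stand-in for the PowerVS client call.
func waitForImage(getImageState func(imageID string) (string, error), imageID string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := getImageState(imageID)
		if err != nil {
			return err
		}
		switch state {
		case "active":
			return nil
		case "failed":
			return fmt.Errorf("image %s import failed", imageID)
		}
		// Still importing; poll again after a short delay.
		time.Sleep(30 * time.Second)
	}
	return fmt.Errorf("timed out waiting for image %s to become active", imageID)
}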

/kind feature
/assign
/priority important-soon

import is failing with Access Denied error

Hitting with the following error while running the import operation:

2020/12/11 22:12:18 the region is lon and the zone is  lon04
2020/12/11 22:12:18 the crndata is ... crn:v1:bluemix:public:power-iaas:lon04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:7845d372-d4e1-46b8-91fc-41051c984601:: 
Error: [POST /pcloud/v1/cloud-instances/{cloud_instance_id}/images][400] pcloudCloudinstancesImagesPostBadRequest  &{Code:0 Description:bad request: the cloud storage access validation failed: ERROR: Access to bucket 'ocp4-images-bucket' was denied
ERROR: S3 error: 403 (AccessDenied): Access Denied
 Error:bad request Message:}
Usage:
  pvsadm image import [flags]

Root cause:

Service credentials are global in nature, and you get a list of all of them from the /v2/resource_keys GET API, but the import will succeed only when the service credential is actually tied to the COS instance from which we are trying to pull the image.

In the above case, we are fetching a service credential that already exists but is mapped to a different COS instance, and then trying to use it against the bucket in the import operation.

Fix:

  • More robust code is needed to find an existing service credential that is actually tied to the COS instance
  • For new keys, use the COS instance name in the credential name instead of the USERID
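
A sketch of the kind of matching that is needed (the struct and field names are illustrative; the real resource controller client may expose this differently): when reusing an existing credential, only accept keys whose source CRN points at the COS instance being used for the import.

package cos

// ServiceKey is a simplified view of an IBM Cloud service credential
// (resource key); the field names here are illustrative.
type ServiceKey struct {
	Name      string
	SourceCRN string // CRN of the service instance the key was created for
}

// findKeyForInstance returns the first credential that is actually tied to
// the COS instance we are importing from, instead of any key that merely
// matches by name but belongs to a different instance.
func findKeyForInstance(keys []ServiceKey, instanceCRN string) (ServiceKey, bool) {
	for _, k := range keys {
		if k.SourceCRN == instanceCRN {
			return k, true
		}
	}
	return ServiceKey{}, false
}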

/kind bug
/priority important-soon
/assign

Workaround

Choose a unique service credential name to create, for example via the --service-credential-name flag.

continuation to fix made in #68

Enabled klog logging options

This will help to add verbosity levels to the logging.

This will enable the user to set the log level via the -v option and more; here is the complete set of flags this will expose (a sketch of wiring them into the CLI follows the list):

      --add_dir_header                   If true, adds the file directory to the header of the log messages
      --alsologtostderr                  log to standard error as well as files
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --log_file string                  If non-empty, use this log file
      --log_file_max_size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
      --skip_headers                     If true, avoid header prefixes in the log messages
      --skip_log_headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
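
Wiring these flags into the existing command line is a small change; a minimal sketch of the standard klog v2 pattern (assuming the tool uses pflag/cobra for its global flags):

package main

import (
	goflag "flag"

	"github.com/spf13/pflag"
	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags (-v, --logtostderr, --vmodule, ...) on the
	// standard flag set, then merge them into the pflag command line that
	// cobra uses, so they show up alongside the tool's own flags.
	klog.InitFlags(nil)
	pflag.CommandLine.AddGoFlagSet(goflag.CommandLine)
	pflag.Parse()

	klog.V(2).Info("verbose logging enabled")
	klog.Flush()
}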

Add test: e2e run - qcow2ova, upload, import

Add e2e test

Flow:

  • Convert the qcow2 to ova
  • Upload the image
  • Import the image from bucket to PowerVS instance

Flavors need to be covered:

  • RHEL
  • CentOS
  • RHCOS
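
A skeleton of what such an e2e run could look like (a sketch that shells out to the built binary; the image, bucket, and instance values are placeholders):

package e2e

import (
	"os/exec"
	"testing"
)

// run executes one pvsadm step and fails the test on any error.
func run(t *testing.T, args ...string) {
	t.Helper()
	out, err := exec.Command("./pvsadm", args...).CombinedOutput()
	if err != nil {
		t.Fatalf("pvsadm %v failed: %v\n%s", args, err, out)
	}
}

func TestQcow2OvaUploadImport(t *testing.T) {
	// Placeholders: a real test would parameterise image, bucket and instance.
	run(t, "image", "qcow2ova", "--image-name", "e2e-image", "--image-url", "image.qcow2", "--image-dist", "centos")
	run(t, "image", "upload", "--bucket", "e2e-bucket", "-o", "e2e-image.ova.gz")
	run(t, "image", "import", "-n", "e2e-powervs-instance", "-b", "e2e-bucket", "--object-name", "e2e-image.ova.gz", "--image-name", "e2e-image")
}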

/kind feature
/priority important-longterm

"pvsadm purge vms" is failing when GET of an instance errors out

pvsadm purge vms is failing when GET of one of the vms in the list throws an error.

+ pvsadm purge vms --instance-id 013b8948-6cf4-4fae-9b70-71ddadd4f91a --before 4h --ignore-errors --no-prompt
2020/12/11 00:41:43 the apiendpoint url for power is syd.power-iaas.cloud.ibm.com
2020/12/11 00:41:43 Calling the New Auth Method in the IBMPower Session Code
2020/12/11 00:41:43 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/11 00:41:43 the region is syd and the zone is  syd04
2020/12/11 00:41:43 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:013b8948-6cf4-4fae-9b70-71ddadd4f91a:: 
2020/12/11 00:41:46 Calling the New Auth Method in the IBMPower Session Code
2020/12/11 00:41:46 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/11 00:41:46 the region is syd and the zone is  syd04
2020/12/11 00:41:46 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:013b8948-6cf4-4fae-9b70-71ddadd4f91a:: 
2020/12/11 00:41:49 Calling the New Auth Method in the IBMPower Session Code
2020/12/11 00:41:49 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/11 00:41:49 the region is syd and the zone is  syd04
2020/12/11 00:41:49 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:013b8948-6cf4-4fae-9b70-71ddadd4f91a:: 
2020/12/11 00:41:52 Calling the New Auth Method in the IBMPower Session Code
2020/12/11 00:41:52 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/11 00:41:52 the region is syd and the zone is  syd04
2020/12/11 00:41:52 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:013b8948-6cf4-4fae-9b70-71ddadd4f91a:: 
2020/12/11 00:41:55 Calling the New Auth Method in the IBMPower Session Code
2020/12/11 00:41:55 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/11 00:41:55 the region is syd and the zone is  syd04
2020/12/11 00:41:55 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:013b8948-6cf4-4fae-9b70-71ddadd4f91a:: 
2020/12/11 00:41:58 Calling the New Auth Method in the IBMPower Session Code
2020/12/11 00:41:58 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/11 00:41:58 the region is syd and the zone is  syd04
2020/12/11 00:41:58 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:013b8948-6cf4-4fae-9b70-71ddadd4f91a:: 
2020/12/11 00:42:01 Calling the New Auth Method in the IBMPower Session Code
2020/12/11 00:42:01 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2020/12/11 00:42:01 the region is syd and the zone is  syd04
2020/12/11 00:42:01 the crndata is ... crn:v1:bluemix:public:power-iaas:syd04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:013b8948-6cf4-4fae-9b70-71ddadd4f91a:: 
2020/12/11 00:42:01 Failed to perform the operation... [GET /pcloud/v1/cloud-instances/{cloud_instance_id}/pvm-instances/{pvm_instance_id}][404] pcloudPvminstancesGetNotFound  &{Code:0 Description:pvm-instance does not exist, id: f2fb0f34-6008-41a3-a6fb-2f4100cf9eb8 Error:pvm-instance not found Message:}
Error: failed to get the instance: {"description":"pvm-instance does not exist, id: f2fb0f34-6008-41a3-a6fb-2f4100cf9eb8","error":"pvm-instance not found"}
E1211 00:42:01.817516      19 root.go:38] failed to get the instance: {"description":"pvm-instance does not exist, id: f2fb0f34-6008-41a3-a6fb-2f4100cf9eb8","error":"pvm-instance not found"}

The purge vms command should continue when it fails to GET one of the vms, so that the rest of the purge proceeds, and the overall run should exit with that error at the end; a sketch of this behavior follows.
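
A sketch of the desired behavior (the getAndDelete callback is a stand-in for the tool's per-VM GET and DELETE calls): log and collect per-VM failures when --ignore-errors is set, and surface them once at the end instead of aborting on the first one.

package purge

import (
	"fmt"

	"k8s.io/klog/v2"
)

// purgeVMs deletes the given instance IDs; getAndDelete is a stand-in for
// the tool's per-VM GET + DELETE calls against the PowerVS client.
func purgeVMs(ids []string, ignoreErrors bool, getAndDelete func(id string) error) error {
	var failed []string
	for _, id := range ids {
		if err := getAndDelete(id); err != nil {
			klog.Errorf("failed to purge instance %s: %v", id, err)
			if !ignoreErrors {
				return err
			}
			failed = append(failed, id)
		}
	}
	if len(failed) > 0 {
		return fmt.Errorf("failed to purge %d instance(s): %v", len(failed), failed)
	}
	return nil
}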

Run time error during qcow2 to OVA image conversion due to old version of libgcrypt rpm on RHEL 8.2

System - RHEL 82 bastion VM

Test Case - Download a qcow2 KVM Guest OS image and convert it to OVA using the pvsadm tool.

Issue seen :

[root@chaos-arc46-bastion ~]# ./pvsadm-linux-ppc64le image qcow2ova  --image-name rhel83-arc-dec3 --image-url /root/rhel-8.3-ppc64le-kvm.qcow2 --image-dist rhel --image-size 120 --rhn-user <rhn_uid> --rhn-password <rhn_pwd> --os-password powervc1 -t /export/ovaimage -k <cloud api key>

I1203 08:43:48.384794  846616 validate.go:26] Checking: platform
I1203 08:43:48.384877  846616 validate.go:26] Checking: tools
I1203 08:43:48.384909  846616 tools.go:29] qemu-img found at /usr/bin/qemu-img
I1203 08:43:48.384928  846616 tools.go:29] growpart found at /usr/bin/growpart
I1203 08:43:48.384935  846616 validate.go:26] Checking: diskspace
I1203 08:43:48.384953  846616 diskspace.go:33] free: 279G, need: 170G
I1203 08:43:48.385383  846616 get-image.go:29] Copying /root/rhel-8.3-ppc64le-kvm.qcow2 into /export/ovaimage/qcow2ova852912890/rhel-8.3-ppc64le-kvm.qcow2
I1203 08:43:54.284159  846616 get-image.go:33] Copy Completed!
I1203 08:43:54.284208  846616 qcow2ova.go:134] downloaded/copied the file at: /export/ovaimage/qcow2ova852912890/rhel-8.3-ppc64le-kvm.qcow2
I1203 08:43:54.285457  846616 qcow2ova.go:162] Converting Qcow2(/export/ovaimage/qcow2ova852912890/rhel-8.3-ppc64le-kvm.qcow2) image to raw(/export/ovaimage/qcow2ova852912890/ova-img-dir/disk.raw) format
Error: failed to convert Qcow2(/export/ovaimage/qcow2ova852912890/rhel-8.3-ppc64le-kvm.qcow2) image to RAW(/export/ovaimage/qcow2ova852912890/ova-img-dir/disk.raw) format, exited with: 1, out: , err: qemu-img: Unable to initialize gcrypt

E1203 08:43:54.605927  846616 root.go:49] failed to convert Qcow2(/export/ovaimage/qcow2ova852912890/rhel-8.3-ppc64le-kvm.qcow2) image to RAW(/export/ovaimage/qcow2ova852912890/ova-img-dir/disk.raw) format, exited with: 1, out: , err: qemu-img: Unable to initialize gcrypt

[root@chaos-arc46-bastion ~]# rpm -qa | grep libgcrypt
libgcrypt-1.8.3-4.el8.ppc64le

[root@chaos-arc46-bastion ~]# qemu-img
qemu-img: Unable to initialize gcrypt

Resolution :

After upgrading the libgcrypt rpm to 1.8.5-4.el8 and rerunning, the OVA conversion completed.

[root@chaos-arc46-bastion ~]# yum update libgcrypt -y
Updating Subscription Management repositories.
Last metadata expiration check: 1:56:25 ago on Thu 03 Dec 2020 06:58:49 AM EST.
Dependencies resolved.
======================================================================================================================================================
 Package                       Architecture                Version                           Repository                                          Size
======================================================================================================================================================
Upgrading:
 libgcrypt                     ppc64le                     1.8.5-4.el8                       rhel-8-for-ppc64le-baseos-rpms                     446 k

Transaction Summary
======================================================================================================================================================
Upgrade  1 Package

Total download size: 446 k
Downloading Packages:
libgcrypt-1.8.5-4.el8.ppc64le.rpm                                                                                     377 kB/s | 446 kB     00:01    
------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                 377 kB/s | 446 kB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                              1/1 
  Upgrading        : libgcrypt-1.8.5-4.el8.ppc64le                                                                                                1/2 
  Running scriptlet: libgcrypt-1.8.5-4.el8.ppc64le                                                                                                1/2 
  Cleanup          : libgcrypt-1.8.3-4.el8.ppc64le                                                                                                2/2 
  Running scriptlet: libgcrypt-1.8.3-4.el8.ppc64le                                                                                                2/2 
  Verifying        : libgcrypt-1.8.5-4.el8.ppc64le                                                                                                1/2 
  Verifying        : libgcrypt-1.8.3-4.el8.ppc64le                                                                                                2/2 
Installed products updated.

Upgraded:
  libgcrypt-1.8.5-4.el8.ppc64le                                                                                                                       

Complete!

Verification :

[root@chaos-arc46-bastion ~]# ./pvsadm-linux-ppc64le image qcow2ova  --image-name rhel83-arc-dec3 --image-url /root/rhel-8.3-ppc64le-kvm.qcow2 --image-dist rhel --image-size 120 --rhn-user <rhn_uid> --rhn-password <rhn_pwd> --os-password powervc1 -t /export/ovaimage -k <cloud api key>

I1203 08:57:07.260039  847890 validate.go:26] Checking: platform
I1203 08:57:07.260128  847890 validate.go:26] Checking: tools
I1203 08:57:07.260165  847890 tools.go:29] qemu-img found at /usr/bin/qemu-img
I1203 08:57:07.260187  847890 tools.go:29] growpart found at /usr/bin/growpart
I1203 08:57:07.260195  847890 validate.go:26] Checking: diskspace
I1203 08:57:07.260220  847890 diskspace.go:33] free: 276G, need: 170G
I1203 08:57:07.260710  847890 get-image.go:29] Copying /root/rhel-8.3-ppc64le-kvm.qcow2 into /export/ovaimage/qcow2ova375848007/rhel-8.3-ppc64le-kvm.qcow2
I1203 08:57:14.966073  847890 get-image.go:33] Copy Completed!
I1203 08:57:14.966127  847890 qcow2ova.go:134] downloaded/copied the file at: /export/ovaimage/qcow2ova375848007/rhel-8.3-ppc64le-kvm.qcow2
I1203 08:57:14.966448  847890 qcow2ova.go:162] Converting Qcow2(/export/ovaimage/qcow2ova375848007/rhel-8.3-ppc64le-kvm.qcow2) image to raw(/export/ovaimage/qcow2ova375848007/ova-img-dir/disk.raw) format
I1203 08:57:21.690959  847890 qcow2ova.go:167] Conversion completed
I1203 08:57:21.691011  847890 qcow2ova.go:169] Resizing the image /export/ovaimage/qcow2ova375848007/ova-img-dir/disk.raw to 120G
I1203 08:57:22.837248  847890 qcow2ova.go:174] Resize completed
I1203 08:57:22.837288  847890 qcow2ova.go:176] Preparing the image
I1203 09:00:12.563302  847890 qcow2ova.go:181] Preparation completed
I1203 09:00:12.563938  847890 qcow2ova.go:183] Creating an OVA bundle
I1203 09:03:13.630214  847890 qcow2ova.go:188] OVA bundle creation completed: /export/ovaimage/qcow2ova375848007/rhel83-arc-dec3.ova
I1203 09:03:13.649438  847890 qcow2ova.go:190] Compressing an OVA file
I1203 09:06:44.776037  847890 qcow2ova.go:196] OVA file Compression completed


Successfully converted Qcow2 image to OVA format, find at /root/rhel83-arc-dec3.ova.gz
OS root password: powervc1

We should update the README file to state that libgcrypt must be at or above a specific version that is known to work (1.8.5-4.el8 was verified above).

Windows Support

This task is for supporting the windows platform

/kind feature
/priority important-longterm

COS instance not found

Hi, I'm trying to copy an OVA image directly from my PowerVC VM (RHEL ppc64le) to my COS bucket. However I'm getting an error saying it cannot find my COS instance.

[root@sc-pvc-lab57 ova]# pvsadm image upload --bucket eu-gb-bucket1 --file AIX7.2.2.1.ova.gz --cos-instance-name Cloud-Object-Storage-d9
I1216 16:42:20.982302 24351 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
Error: instance: Cloud-Object-Storage-d9 not found

I have confirmed this is my instance name, and the API Key is correct (if I change the API key to an invalid key, I get an auth error)

COS instance type: free
Bucket region: eu-gb

pvsadm image upload - defaulting to `default` resource group

It would be great if the tool asked for the resource group as well, instead of defaulting to the default resource group.

🍏 Thursday December 10 2020 09:08:57 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶ ./pvsadm image upload --bucket new-test-bucket -o ./CentOS_83.ova.gz
I1210 21:08:58.591747   89029 root.go:29] Using an API key from IBMCLOUD_API_KEY environment variable
I1210 21:09:16.165495   89029 upload.go:87] bucket new-test-bucket not found in the account provided
Would You Like to use Available COS Instance for creating bucket? [y/n]: n
Would you like to create new COS Instance? [y/n]: y
Provide Name of the cos-instance:new-test-bucket
I1210 21:09:57.076590   89029 upload.go:111] Creating a new cos new-test-bucket instance
Error: ResourceGroupDoesnotExist: Given resource Group : "default" doesn't exist
Usage:
  pvsadm image upload [flags]

Flags:
      --resource-group string   Provide Resource-Group (default "default")
      --service-plan string     Provide serviceplan type (default "standard")
  -n, --instance-name string    Instance Name of the COS to be used
  -b, --bucket string           Region of the COS instance
  -o, --object-name string      S3 object name to be uploaded to the COS
  -r, --region string           Region of the COS instance (default "us-south")
  -h, --help                    help for upload

Global Flags:
      --add_dir_header                   If true, adds the file directory to the header of the log messages
      --alsologtostderr                  log to standard error as well as files
  -k, --api-key string                   IBMCLOUD API Key(env name: IBMCLOUD_API_KEY)
      --audit-file string                Audit logs for the tool (default "pvsadm.log")
      --debug                            Enable PowerVS debug option(ATTENTION: dev only option, may print sensitive data from APIs)
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --log_file string                  If non-empty, use this log file
      --log_file_max_size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
      --skip_headers                     If true, avoid header prefixes in the log messages
      --skip_log_headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

E1210 21:10:00.966621   89029 root.go:68] ResourceGroupDoesnotExist: Given resource Group : "default" doesn't exist
🍏 Thursday December 10 2020 09:10:01 PM 🍏
╭─github.com/ocp-power-automation/full-flow                                                                                                          ⍉
╰─▶
