helm/helm-2to3
⚠️ (OBSOLETE) This is a Helm v3 plugin which migrates and cleans up Helm v2 configuration and releases in-place to Helm v3.
License: Apache License 2.0
To reduce confusion and make the plugin more foolproof, we should retain v2 releases by default, and change the flag --keep-v2-releases to --delete-v2-releases.
https://github.com/hickeyma/helm-2to3/blob/master/go.mod#L21
Is that done for a reason?
Building locally, and also go get -d github.com/hickeyma/helm-2to3, fails with:
go: parsing /home/usr1/go/src/helm.sh/helm/go.mod: open /home/usr1/go/src/helm.sh/helm/go.mod: no such file or directory
OS: macOS
After a helm3 move config, a helm3 dep update bails with the error:
Error: open /Users/username/Library/Caches/helm/repository/local-index.yaml: no such file or directory
Copying the .helm/repository/cache/local-index.yaml to the specified location results in a successful dependency update. The 2to3 plugin should copy this file.
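The manual copy described above can be sketched as follows, using stand-in paths under /tmp so the snippet is self-contained; the real source and destination are the paths shown in the report (~/.helm/repository/cache/local-index.yaml and the Helm v3 cache location):

```shell
# Stand-ins for the Helm v2 cache and the Helm v3 cache directories:
mkdir -p /tmp/helm2-cache /tmp/helm3-cache
echo "apiVersion: v1" > /tmp/helm2-cache/local-index.yaml
# The workaround: copy the index file to where helm3 expects it
cp /tmp/helm2-cache/local-index.yaml /tmp/helm3-cache/local-index.yaml
cat /tmp/helm3-cache/local-index.yaml
```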
If I want to run it with Ansible, how do I answer yes to all questions when migrating to Helm v3?
# helm3 2to3 move config
2019/11/14 08:53:51 WARNING: Helm v3 configuration maybe overwritten during this operation.
2019/11/14 08:53:51
[Move Config/confirm] Are you sure you want to move the v2 configration? [y/N]:
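For unattended runs such as an Ansible task, one common approach is to pipe the confirmation answer into the prompt on stdin. A minimal, self-contained sketch of that pattern (the read stub stands in for the plugin's [y/N] prompt; in a real task you would pipe into the plugin command itself):

```shell
# In practice the idea would be:
#   yes | helm3 2to3 move config
# Demonstrated here with a stub prompt that reads one answer from stdin:
printf 'y\n' | sh -c 'read -r ans; echo "answer=$ans"'
```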
I think the plugin is in good shape at this stage; we should move it to the helm org, then cut a first release.
I'm looking at doing in-place migrations of Helm 2 releases to Helm 3 via Jenkins. Moving the configuration and releases works with no issue, but when I try to dry-run 2to3 clean-up I get an error with no specific reason why.
+ helm3 2to3 cleanup --release-cleanup --config-cleanup -t tiller --dry-run HELM_RELEASE_NAME
2019/11/19 17:06:10 NOTE: This is in dry-run mode, the following actions will not be executed.
2019/11/19 17:06:10 Run without --dry-run to take the actions described below:
2019/11/19 17:06:10
WARNING: "Helm v2 Configuration" "Release Data" will be removed.
This will clean up all releases managed by Helm v2. It will not be possible to restore them if you haven't made a backup of the releases.
Helm v2 may not be usable afterwards.
[Cleanup/confirm] Are you sure you want to cleanup Helm v2 data? [y/N]: 2019/11/19 17:06:10
Helm v2 data will be cleaned up.
2019/11/19 17:06:10 [Helm 2] Releases will be deleted.
Error: plugin "2to3" exited with error
With Helm 2 deployments we've had to use the --tiller-namespace tiller --tls options. Is it possible to get more error output from 2to3, or is there something I'm doing wrong?
Any input would be helpful.
On Windows 10, I tried to install the plugin and got:
helm version:
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
install command:
helm plugin install https://github.com/helm/helm-2to3 --debug
Error:
Error: The Remote does not match the VCS endpoint
helm.go:76: [debug] The Remote does not match the VCS endpoint
Turns out I had a temp directory C:\Users\ME\AppData\Local\Temp\helm that must have had bad data in it. I deleted that helm directory and that fixed the issue.
Closing this issue so it exists more as documentation for searches.
Hello,
I have a bunch of clusters and merge their kubeconfig files. It looks like the plugin does not work with this approach. I tested with only one kubeconfig file and it works fine.
helm 2to3 convert dummy-dev --dry-run
NOTE: This is in dry-run mode, the following actions will not be executed.
Run without --dry-run to take the actions described below:
Release "dummy-dev" will be converted from Helm 2 to Helm 3.
[Helm 3] Release "dummy-dev" will be created.
2019/09/04 17:54:20 stat /Users/osema/.bluemix/plugins/container-service/clusters/Helm-2-3/kube-config-Helm-2-3.yml:/Users/osema/.bluemix/plugins/container-service/clusters/wx-dev-001/kube-config-wx-dev-001.yml:/Users/osema/.bluemix/plugins/container-service/clusters/playground-AutomationQA/kube-config-playground-AutomationQA.yml:/Users/osema/.bluemix/plugins/container-service/clusters/play_openshift/kube-config-play_openshift.yml:/Users/osema/.bluemix/plugins/container-service/clusters/wx-dev-003/kube-config-wx-dev-003.yml: no such file or directory
Error: plugin "2to3" exited with error
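The stat error above suggests the plugin treats the whole KUBECONFIG value as a single path, so a colon-separated list fails; this is an inference from the log, not confirmed from the code. A quick check of that failure mode, plus the workaround of exporting a single file:

```shell
# A colon-joined KUBECONFIG is not itself a file:
KUBECONFIG="/tmp/a.yml:/tmp/b.yml"
if [ -f "$KUBECONFIG" ]; then echo "single file"; else echo "not a single file"; fi
# Workaround: export KUBECONFIG pointing at one kubeconfig file
# before running: helm 2to3 convert <release>
```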
When prompting for information, a timestamp should not be part of the output.
e.g.
><> helm 2to3 cleanup
2019/10/30 10:45:28 WARNING: "Helm v2 Configuration" "Release Data" "Tiller Deployment" will be removed.
2019/10/30 10:45:28 This will clean up all releases managed by Helm v2. It will not be possible to restore them if you haven't made a backup of the releases.
2019/10/30 10:45:28 Helm v2 may not be usable afterwards.
2019/10/30 10:45:28
2019/10/30 10:45:28 [Cleanup/confirm] Are you sure you want to cleanup Helm v2 data? [y/N]:
y
2019/10/30 10:45:36
Helm v2 data will be cleaned up.
2019/10/30 10:45:36 [Helm 2] Releases will be deleted.
2019/10/30 10:45:36 [Helm 2] no deployed releases for namespace: kube-system, owner: OWNER=TILLER
2019/10/30 10:45:36 [Helm 2] Releases deleted.
2019/10/30 10:45:36 [Helm 2] Tiller in "kube-system" namespace will be removed.
2019/10/30 10:45:36 [Helm 2] Tiller "deploy" in "kube-system" namespace will be removed.
2019/10/30 10:45:37 [Helm 2] Tiller "deploy" in "kube-system" namespace was removed successfully.
2019/10/30 10:45:37 [Helm 2] Tiller "service" in "kube-system" namespace will be removed.
2019/10/30 10:45:37 [Helm 2] Tiller "service" in "kube-system" namespace was removed successfully.
2019/10/30 10:45:37 [Helm 2] Tiller in "kube-system" namespace was removed.
2019/10/30 10:45:37 [Helm 2] Home folder "/home/bacongobbler/.helm" will be deleted.
2019/10/30 10:45:37 [Helm 2] Home folder "/home/bacongobbler/.helm" deleted.
2019/10/30 10:45:37 Helm v2 data was cleaned up successfully.
When the CLI is prompting me for a y/n answer, there shouldn't be any timestamp present. The rest is fine.
Is it possible to not throw an error if a release has already been converted? For CI/CD it would be useful to avoid try/catch or its equivalent, maybe with an additional flag like --ignore-converted.
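Until such a flag exists, a CI pipeline can guard the conversion itself. A sketch with a stub standing in for the real check; in practice something like helm3 status <release> could serve as the check, which is an assumption, not a documented pattern:

```shell
release="my-release"               # hypothetical release name
# Stub; in a real pipeline replace with e.g.:
#   helm3 status "$release" >/dev/null 2>&1
already_converted() { return 0; }

if already_converted; then
  echo "already converted, skipping"
else
  echo "converting: helm3 2to3 convert $release"
fi
```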
I was testing helm3 on an arm64 cluster and used
$ helm plugin install https://github.com/helm/helm-2to3
to install the plugin, but got an ELF 64-bit LSB executable, x86-64 binary.
Not sure if helm checks the CPU architecture and it's just a question of making an arm build, or if helm is just ignoring the CPU arch.
I think it would make sense to cut a first plugin release with binaries for Linux, macOS, and Windows, so users don't have to build them themselves; it would also lower the barrier to entry for plugin usage and testing.
goreleaser would be a good candidate to automate that :)
Hello,
after moving releases from v2 to v3, the Helm operator isn't able to apply the changes made to HelmReleases and logs the error:
caller=helm.go:78 component=helm error="error creating helm client: services "tiller-deploy" not found"
Is it possible to fix this?
Best Regards
Spending today testing the Helm 2 to Helm 3 migration path using the 2to3 plugin!
When attempting to install 2to3 on a fresh install, I see this on Ubuntu:
><> helm plugin install https://github.com/helm/helm-2to3
Downloading and installing helm-2to3 v0.1.6 ...
https://github.com/helm/helm-2to3/releases/download/v0.1.6/helm-2to3_0.1.6_linux_amd64.tar.gz
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Error: plugin install hook for "2to3" exited with error
It looks like the install script refers to a release that hasn't been released yet.
Add fix for Helm issue helm/helm#6354. Fixes issue with helm show values.
***Note: This issue is blocked until helm/helm#6841 lands, which should be available in Helm 3 RC-2.***
Migration steps:
1. Install the plugin:
helm plugin install https://github.com/helm/helm-2to3
2. Move config:
$ export HELM_V2_HOME=$HOME/.helm
$ export HELM_V3_CONFIG=$HOME/.helm3
$ export HELM_V3_DATA=$HOME/.helm3
$ helm 2to3 move config
3. Clean up:
helm 2to3 cleanup
4. helm list shows no information:
helm list
helm3 2to3 move config does not preserve file permissions; in particular it does not preserve the executable bit, so plugins can no longer be executed.
❯ helm2 plugin list
NAME VERSION DESCRIPTION
diff 2.11.0+3 Preview helm upgrade changes as a diff
secrets 2.0.1 This plugin provides secrets values encryption for Helm charts secure storing
❯ helm3 2to3 move config
2019/12/16 11:02:52 WARNING: Helm v3 configuration maybe overwritten during this operation.
2019/12/16 11:02:52
[Move Config/confirm] Are you sure you want to move the v2 configration? [y/N]: y
2019/12/16 11:02:53
Helm v2 configuration will be moved to Helm v3 configration.
2019/12/16 11:02:53 [Helm 2] Home directory: /Users/skaji/.helm
2019/12/16 11:02:53 [Helm 3] Config directory: /Users/skaji/Library/Preferences/helm
2019/12/16 11:02:53 [Helm 3] Data directory: /Users/skaji/Library/helm
2019/12/16 11:02:53 [Helm 3] Cache directory: /Users/skaji/Library/Caches/helm
2019/12/16 11:02:53 [Helm 3] Create config folder "/Users/skaji/Library/Preferences/helm" .
2019/12/16 11:02:53 [Helm 3] Config folder "/Users/skaji/Library/Preferences/helm" created.
2019/12/16 11:02:53 [Helm 2] repositories file "/Users/skaji/.helm/repository/repositories.yaml" will copy to [Helm 3] config folder "/Users/skaji/Library/Preferences/helm/repositories.yaml" .
2019/12/16 11:02:53 [Helm 2] repositories file "/Users/skaji/.helm/repository/repositories.yaml" copied successfully to [Helm 3] config folder "/Users/skaji/Library/Preferences/helm/repositories.yaml" .
2019/12/16 11:02:53 [Helm 3] Create cache folder "/Users/skaji/Library/Caches/helm" .
2019/12/16 11:02:53 [Helm 3] cache folder "/Users/skaji/Library/Caches/helm" created.
2019/12/16 11:02:53 [Helm 3] Create data folder "/Users/skaji/Library/helm" .
2019/12/16 11:02:53 [Helm 3] data folder "/Users/skaji/Library/helm" created.
2019/12/16 11:02:53 [Helm 2] plugins "/Users/skaji/.helm/cache/plugins" will copy to [Helm 3] cache folder "/Users/skaji/Library/Caches/helm/plugins" .
2019/12/16 11:02:53 [Helm 2] plugins "/Users/skaji/.helm/cache/plugins" copied successfully to [Helm 3] cache folder "/Users/skaji/Library/Caches/helm/plugins" .
2019/12/16 11:02:53 [Helm 2] plugin symbolic links "/Users/skaji/.helm/plugins" will copy to [Helm 3] data folder "/Users/skaji/Library/helm" .
2019/12/16 11:02:53 [Helm 2] plugin links "/Users/skaji/.helm/plugins" copied successfully to [Helm 3] data folder "/Users/skaji/Library/helm" .
2019/12/16 11:02:53 [Helm 2] starters "/Users/skaji/.helm/starters" will copy to [Helm 3] data folder "/Users/skaji/Library/helm/starters" .
2019/12/16 11:02:53 [Helm 2] starters "/Users/skaji/.helm/starters" copied successfully to [Helm 3] data folder "/Users/skaji/Library/helm/starters" .
2019/12/16 11:02:53 Helm v2 configuration was moved successfully to Helm v3 configration.
❯ helm3 plugin list
NAME VERSION DESCRIPTION
2to3 0.2.1 migrate and cleanup Helm v2 configuration and releases in-place to Helm v3
diff 2.11.0+3 Preview helm upgrade changes as a diff
secrets 2.0.1 This plugin provides secrets values encryption for Helm charts secure storing
❯ helm3 diff
Error: fork/exec /Users/skaji/Library/helm/plugins/helm-diff/bin/diff: permission denied
❯ cd ~/Library/Caches/helm/plugins/https-github.com-databus23-helm-diff
❯ git diff
diff --git a/install-binary.sh b/install-binary.sh
old mode 100755
new mode 100644
diff --git a/scripts/setup-apimachinery.sh b/scripts/setup-apimachinery.sh
old mode 100755
new mode 100644
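A workaround after the move is to restore the executable bit on the copied plugin binaries by hand. The path below is a placeholder; the real files live under the Helm 3 data/cache folders shown in the log above:

```shell
# Placeholder for a plugin binary that lost its executable bit in the move:
touch /tmp/plugin-bin
chmod 644 /tmp/plugin-bin
# Restore the executable bit, as one would for the copied plugin files:
chmod 755 /tmp/plugin-bin
# Print the resulting mode (GNU stat vs BSD stat):
stat -c '%a' /tmp/plugin-bin 2>/dev/null || stat -f '%Lp' /tmp/plugin-bin
```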
The plugin normally limits the number of versions migrated. It can happen that it chooses the wrong versions to migrate, and in some cases it can even skip the currently deployed version.
I believe the problem comes from the fact that release versions are sorted alphabetically by the plugin instead of numerically. So between version 9 and version 10, the plugin believes version 9 is the most recent.
Here is an example when the deployed version is not migrated:
$ h2 history nginx
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Wed Nov 6 15:19:52 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Install complete
2 Wed Nov 6 15:19:54 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
3 Wed Nov 6 15:19:56 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
4 Wed Nov 6 15:19:58 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
5 Wed Nov 6 15:20:01 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
6 Wed Nov 6 15:20:03 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
7 Wed Nov 6 15:20:05 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
8 Wed Nov 6 15:20:07 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
9 Wed Nov 6 15:20:09 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
10 Wed Nov 6 15:20:11 2019 SUPERSEDED nginx-ingress-1.24.5 0.26.1 Upgrade complete
11 Wed Nov 6 15:23:19 2019 DEPLOYED nginx-ingress-1.24.5 0.26.1 Upgrade complete
$ h2 2to3 convert --release-versions-max 5 nginx
2019/11/06 15:28:15 Release "nginx" will be converted from Helm v2 to Helm v3.
2019/11/06 15:28:15 [Helm 3] Release "nginx" will be created.
2019/11/06 15:28:15
2019/11/06 15:28:15 NOTE: The max release versions "5" is less than the actual release versions "11".
2019/11/06 15:28:15 This means only "5" of the latest release versions will be converted.
2019/11/06 15:28:15
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v5" will be created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v5" created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v6" will be created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v6" created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v7" will be created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v7" created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v8" will be created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v8" created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v9" will be created.
2019/11/06 15:28:15 [Helm 3] ReleaseVersion "nginx.v9" created.
2019/11/06 15:28:15 [Helm 3] Release "nginx" created.
2019/11/06 15:28:15 Release "nginx" was converted successfully from Helm v2 to Helm v3.
2019/11/06 15:28:15 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.
2019/11/06 15:28:15 v2 release information should only be removed using `helm 2to3` cleanup and when all releases have been migrated over.
$ h3 list --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
$ h3 history nginx
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
5 Wed Nov 6 20:20:01 2019 superseded nginx-ingress-1.24.5 0.26.1 Upgrade complete
6 Wed Nov 6 20:20:03 2019 superseded nginx-ingress-1.24.5 0.26.1 Upgrade complete
7 Wed Nov 6 20:20:05 2019 superseded nginx-ingress-1.24.5 0.26.1 Upgrade complete
8 Wed Nov 6 20:20:07 2019 superseded nginx-ingress-1.24.5 0.26.1 Upgrade complete
9 Wed Nov 6 20:20:09 2019 superseded nginx-ingress-1.24.5 0.26.1 Upgrade complete
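The suspected alphabetical-vs-numeric sorting is easy to demonstrate with plain sort, which orders v10 and v11 before v9 (so v9 looks like the newest), whereas a version-aware sort gives the intended order:

```shell
# Lexicographic order: '1' < '9', so v10 and v11 sort before v9
printf 'v9\nv10\nv11\n' | sort
# prints: v10, v11, v9 (one per line)

# Version-aware order (GNU sort):
printf 'v9\nv10\nv11\n' | sort -V
# prints: v9, v10, v11 (one per line)
```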
None of the migrated charts appear in helm ls -n ${ns} with Helm v3.0.0-beta.1. Could you please upgrade the plugin?
I ran helm3 2to3 convert for the migration. The migration seems to have been successful.
[root@ray-cwes-03 ~]# helm3 2to3 convert cert-manager
2019/10/17 06:17:36 Release "cert-manager" will be converted from Helm v2 to Helm v3.
2019/10/17 06:17:36 [Helm 3] Release "cert-manager" will be created.
2019/10/17 06:17:36 [Helm 3] ReleaseVersion "cert-manager.v1" will be created.
2019/10/17 06:17:36 [Helm 3] ReleaseVersion "cert-manager.v1" created.
2019/10/17 06:17:36 [Helm 3] Release "cert-manager" created.
2019/10/17 06:17:36 Release "cert-manager" was converted successfully from Helm v2 to Helm v3.
2019/10/17 06:17:36 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.
2019/10/17 06:17:36 v2 release information should only be removed using helm 2to3 cleanup and when all releases have been migrated over.
But there is nothing in helm3 list.
[]# helm3 list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
[]# helm list
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
app-api 1 Thu Oct 17 02:49:24 2019 DEPLOYED ncm-app-1.8.1 1.5.64 ncms
cbur-master 1 Thu Oct 17 02:49:58 2019 DEPLOYED cbur-1.3.2 1.9-06-1466 ncms
cert-manager 1 Thu Oct 17 02:49:35 2019 DEPLOYED cert-manager-v0.8.0 v0.8.0 ncms
[]# helm3 repo list
NAME URL
stable http://tiller-repo:8879
local http://127.0.0.1:8879/charts
Implemented in #20
I tried to migrate all the releases from Helm v2 to Helm v3 using the plugin; only two out of 25 got migrated. When I tried to migrate them again, the plugin failed with an error stating that the release already exists, but helm3 ls -a does not show it!
I can't provide any information about Helm 2 because I already ran cleanup, but the error in helm3 still exists.
Just ran into this bug. Turns out you need to set your default context to the one you're interested in, because the plugin ignores the --kube-context parameter (which is a global param on the helm CLI).
This is actually kinda dangerous (more so than #11) because you could easily accidentally interact with the wrong cluster and screw things up, thinking your normal global Helm parameters work as expected. Thankfully, I was just messing with a dev environment when I bumped into this.
I just installed helm3 with 2to3 v0.1.1 and wanted to convert a simple helm2 release to helm3. However it fails:
$ helm3 2to3 convert -t ns1 xxx
Release "xxx" will be converted from Helm 2 to Helm 3.
[Helm 3] Release "xxx" will be created.
[Helm 3] ReleaseVersion "xxx.v1" will be created.
Error: release: already exists
Error: plugin "2to3" exited with error
I tried another release and it fails too with the same error message. In dry-run mode it doesn't complain about anything:
$ helm3 2to3 convert --dry-run -t ns1 xxx
NOTE: This is in dry-run mode, the following actions will not be executed.
Run without --dry-run to take the actions described below:
Release "xxx" will be converted from Helm 2 to Helm 3.
[Helm 3] Release "xxx" will be created.
[Helm 3] ReleaseVersion "xxx.v1" will be created.
There are no helm3 releases yet:
$ helm3 ls -a
NAME NAMESPACE REVISION UPDATED STATUS CHART
Add plugin install, upgrade hooks to call a shell script which will download binary for currently used OS.
Helm v3 by default keeps the last 10 releases; the plugin should by default also migrate only the last 10 releases. A --releases-number flag can be added if the user wants to migrate more releases.
There is a use case where multiple configuration files are used for accessing different Kubernetes clusters. This can be configured by setting the KUBECONFIG variable as follows:
export KUBECONFIG=cluster1_config
It should be documented how the plugin handles this configuration.
It only works if tiller is running in the cluster; it would be good to also support tillerless helm, where tiller runs locally.
Dry run using a service account in a specified namespace fails due to a secrets request outside of the set namespace.
kubectl config set-cluster kube --server="$CLUSTER_SERVER" --insecure-skip-tls-verify=true
kubectl config set-credentials "helm-deployer" --token="$HELM_TOKEN"
kubectl config set-context $CLUSTER_NAMESPACE-deploy --cluster=kube --namespace=$CLUSTER_NAMESPACE --user=helm-deployer
kubectl config use-context $CLUSTER_NAMESPACE-deploy
helm 2to3 convert helm3t-nginx --dry-run --tiller-out-cluster
Release "helm3t-nginx" will be created.
Error: secrets is forbidden: User "system:serviceaccount:test:helm-deployer" cannot list secrets in the namespace "kube-system": no RBAC policy matched
Error: plugin "2to3" exited with error
Expected behavior: only request resources in the namespace set in the kube-context.
I installed a release with helm2 along with the tiller plugin. This already creates the release secrets in the my-release-name.v1 format. Thus, the plugin fails to migrate anything from v2 to v3.
$ helm3 2to3 convert --tiller-out-cluster --tiller-ns my-ns consul
Release "consul" will be converted from Helm 2 to Helm 3.
[Helm 3] Release "consul" will be created.
[Helm 3] ReleaseVersion "consul.v1" will be created.
Error: release: already exists
Error: plugin "2to3" exited with error
Implemented in #22
OK, so I converted a cert-manager release from v2 to v3:
❯ helm 2to3 convert --tiller-ns tiller cert-manager
Release "cert-manager" will be converted from Helm 2 to Helm 3.
[Helm 3] Release "cert-manager" will be created.
[Helm 3] ReleaseVersion "cert-manager.v1" will be created.
[Helm 3] ReleaseVersion "cert-manager.v1" created.
[Helm 3] Release "cert-manager" created.
Release "cert-manager" was converted successfully from Helm 2 to Helm 3. Note: the v2 releases still remain and should be removed to avoid conflicts with the migrated v3 releases.
Release existed in v2 & v3 as expected:
# helm2
❯ helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
anchore 7 Tue Apr 16 13:12:03 2019 DEPLOYED anchore-engine-0.12.1 0.3.3 default
cert-manager 1 Mon Apr 15 17:16:11 2019 DEPLOYED cert-manager-v0.7.0 v0.7.0 cert-manager
#helm3
❯ helm3 ls -n cert-manager
NAME NAMESPACE REVISION UPDATED STATUS CHART
cert-manager cert-manager 1 2019-04-15 16:16:11.590230902 +0000 UTC deployed cert-manager-v0.7.0
So I attempted to delete the v2 release, hoping that it would be removed from v2, remain in v3, and the pods would survive:
❯ helm delete cert-manager
Error: deletion completed with 1 error(s): release "cert-manager": object "" not found, skipping delete
❯ helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
anchore 7 Tue Apr 16 13:12:03 2019 DEPLOYED anchore-engine-0.12.1 0.3.3 default
❯ helm3 ls -n cert-manager
NAME NAMESPACE REVISION UPDATED STATUS CHART
cert-manager cert-manager 1 2019-04-15 16:16:11.590230902 +0000 UTC deployed cert-manager-v0.7.0
But unfortunately the actual pods were gone:
❯ kgp -n cert-manager
No resources found.
So I had the v3 release but no pods living under it (in any namespace).
As I wasn't so bothered and didn't have much custom config, I decided to upgrade the v3 release. I had to upgrade the chart from v0.7.0 to v0.10.0, as before I did that I got this:
❯ helm3 upgrade cert-manager ./jetstack-cert-manager -n cert-manager
Error: validation: chart.metadata is required
On that basis I assumed that the older version of the chart did not meet the new requirements for helmv3.
For anyone with a similar issue, this is my upgrade path:
helm3 search hub cert-manager
helm3 pull jetstack/cert-manager
helm3 upgrade cert-manager jetstack/cert-manager -n cert-manager --dry-run --debug
# actual upgrade
❯ helm3 upgrade cert-manager jetstack/cert-manager -n cert-manager
Release "cert-manager" has been upgraded. Happy Helming!
NAME: cert-manager
LAST DEPLOYED: 2019-09-27 15:49:02.86515 +0100 BST
NAMESPACE: cert-manager
STATUS: deployed
NOTES:
cert-manager has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://docs.cert-manager.io/en/latest/reference/issuers.html
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://docs.cert-manager.io/en/latest/reference/ingress-shim.html
# check release
helm3 ls -n cert-manager
NAME NAMESPACE REVISION UPDATED STATUS CHART
cert-manager cert-manager 2 2019-09-27 15:49:02.86515 +0100 BST deployed cert-manager-v0.10.0
Once I've used helm 2to3 convert ... and have a release in both v2 & v3, should I be able to delete the v2 release? Should the pods transfer ownership to the v3 release? Or does that only happen when I use the --delete-v2-releases flag?
Finally, note that my delete command errored but still seemed to delete the release, which was confusing. Any advice appreciated.
From @rimusz in helm/helm#6154 (comment):
Regarding deleting the v2 release versions as Secrets or ConfigMaps: we should have an extra flag to keep the v2 releases, or vice versa, to delete them.
From @rimusz in helm/helm#6154 (comment):
+1 for the flag --keep-v2-releases, and we should document very clearly that by default v2 releases get deleted if that flag is not used.
I do not see a point in a delete-all-releases option being part of the plugin; we should just document that option with good examples.
e.g. even if only the --release-cleanup flag is used, it still shows the message:
2019/10/03 09:30:30 WARNING: Helm v2 Configuration, Release Data and Tiller Deployment will be removed.
It should show only:
Helm v2 Release Data will be removed.
This issue is to decide on how the plugin should behave when an error occurs during an operation.
This is the proposal:
I had previously migrated a jetstack cert-manager installation to helm3 with the helm3 beta5 release (pretty sure).
Because I'm on 0.7.0 of the chart, I need to upgrade to > 0.8.0 of the application.
Upon trying to upgrade I get the following issue (with beta 5):
❯ helm3 upgrade cert-manager jetstack/cert-manager -n cert-manager --dry-run --debug
upgrade.go:79: [debug] preparing upgrade for cert-manager
Error: UPGRADE FAILED: "cert-manager" has no deployed releases
helm.go:81: [debug] "cert-manager" has no deployed releases
helm.sh/helm/v3/pkg/storage.(*Storage).DeployedAll
/home/circleci/helm.sh/helm/pkg/storage/storage.go:142
helm.sh/helm/v3/pkg/storage.(*Storage).Deployed
/home/circleci/helm.sh/helm/pkg/storage/storage.go:113
helm.sh/helm/v3/pkg/action.(*Upgrade).prepareUpgrade
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:122
helm.sh/helm/v3/pkg/action.(*Upgrade).Run
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:80
main.newUpgradeCmd.func1
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:130
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
UPGRADE FAILED
main.newUpgradeCmd.func1
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:132
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
I tried upgrading to the new rc.1 version and that is a little better, but same underlying issue:
❯ helm3 upgrade cert-manager jetstack/cert-manager --version v0.11.0 --dry-run --debug
upgrade.go:79: [debug] preparing upgrade for cert-manager
upgrade.go:369: [debug] copying values from cert-manager (v1) to new release.
upgrade.go:87: [debug] performing update for cert-manager
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Deployment, namespace: cert-manager, name: cert-manager-cainjector
helm.go:76: [debug] existing resource conflict: kind: Deployment, namespace: cert-manager, name: cert-manager-cainjector
rendered manifests contain a new resource that already exists. Unable to continue with update
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:216
helm.sh/helm/v3/pkg/action.(*Upgrade).Run
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:88
main.newUpgradeCmd.func1
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:130
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:75
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
UPGRADE FAILED
main.newUpgradeCmd.func1
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:132
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:75
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
I noticed the following:
- helm3 ls shows the release
- helm3 ls -n cert-manager shows nothing
- the release secret is in the default namespace, not the cert-manager namespace
❯ k get secret -n default sh.helm.release.v1.cert-manager.v1
NAME TYPE DATA AGE
sh.helm.release.v1.cert-manager.v1 helm.sh/release.v1 1 2d5h
❯ k get secret -n cert-manager sh.helm.release.v1.cert-manager.v1
Error from server (NotFound): secrets "sh.helm.release.v1.cert-manager.v1" not found
❯ helm3 ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 1 2019-04-17 08:47:44.546412876 +0000 UTC deployed cert-manager-v0.7.0 v0.7.0
I think it's likely that I used the following command (from history) to convert the release:
helm 2to3 convert --tiller-ns tiller cert-manager
I have no v2 installed now though, so I'm unsure if I can ask helm3 upgrade to upgrade the objects in the cert-manager namespace while the release record is in the default namespace.
Or is there an easy way to copy the release config over... get the secret out to yaml and recreate in the next namespace?
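One sketch of that idea, as an assumption rather than a verified fix: Helm 3 stores each release revision as a Secret, so it could be exported as YAML, have its namespace field rewritten, and be recreated in the other namespace. The secret name comes from the output above; everything else is untested:

```shell
# Full (untested) shape of the idea:
#   kubectl get secret -n default sh.helm.release.v1.cert-manager.v1 -o yaml \
#     | sed 's/namespace: default/namespace: cert-manager/' \
#     | kubectl apply -f -
# The namespace-rewrite step itself, demonstrated on a sample line:
echo "namespace: default" | sed 's/namespace: default/namespace: cert-manager/'
```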
Running helm3 plugin install on Windows seems to download and extract the package, but the installer wants to use a shell script, which won't work on Windows:
~ $ helm3 plugin install https://github.com/helm/helm-2to3 --debug
[debug] updating https://github.com/helm/helm-2to3
[debug] symlinking C:\Users\nlowe\AppData\Local\Temp\helm\plugins\https-github.com-helm-helm-2to3 to C:\Users\nlowe\AppData\Roaming\helm\plugins\helm-2to3
plugin_install.go:75: [debug] loading plugin from C:\Users\nlowe\AppData\Roaming\helm\plugins\helm-2to3
plugin.go:60: [debug] running install hook: &{sh [sh -c cd $HELM_PLUGIN_DIR; scripts/install_plugin.sh] [] <nil> <nil> <nil> [] %!s(*syscall.SysProcAttr=<nil>) %!s(*os.Process=<nil>) <nil> <nil> %!s(*exec.Error=&{sh 0xc00008d5c0}) %!s(bool=false) [] [] [] [] %!s(chan error=<nil>) %!s(chan struct {}=<nil>)}
Error: exec: "sh": executable file not found in %PATH%
helm.go:81: [debug] exec: "sh": executable file not found in %PATH%
~ $ helm3 plugin list
NAME VERSION DESCRIPTION
2to3 0.1.3 migrate and cleanup Helm v2 configuration and releases in-place to Helm v3
~ $ helm3 2to3 --help
Error: exec: "C:\\Users\\nlowe\\AppData\\Roaming\\helm\\plugins\\helm-2to3/bin/2to3": file does not exist
The plugin clearly exists:
$ ls C:\Users\nlowe\AppData\Roaming\helm\plugins\helm-2to3
Directory: C:\Users\nlowe\AppData\Roaming\helm\plugins\helm-2to3
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 9/30/2019 4:53 PM .circleci
d----- 9/30/2019 4:53 PM cmd
d----- 9/30/2019 4:53 PM pkg
d----- 9/30/2019 4:53 PM scripts
-a---- 9/30/2019 4:53 PM 15 .gitignore
-a---- 9/30/2019 4:53 PM 265 .goreleaser.yml
-a---- 9/30/2019 4:53 PM 137 code-of-conduct.md
-a---- 9/30/2019 4:53 PM 105 CONTRIBUTING.md
-a---- 9/30/2019 4:53 PM 2975 go.mod
-a---- 9/30/2019 4:53 PM 60681 go.sum
-a---- 9/30/2019 4:53 PM 34781 helm-2to3.png
-a---- 9/30/2019 4:53 PM 11357 LICENSE
-a---- 9/30/2019 4:53 PM 743 main.go
-a---- 9/30/2019 4:53 PM 365 Makefile
-a---- 9/30/2019 4:53 PM 209 OWNERS
-a---- 9/30/2019 4:53 PM 367 plugin.yaml
-a---- 9/30/2019 4:53 PM 7543 README.md
Downloading a Windows tarball from the releases page and installing it with helm3 plugin install doesn't work either:
~ $ helm3 plugin install C:\Users\nlowe\Downloads\helm-2to3\ --debug
[debug] symlinking C:\Users\nlowe\Downloads\helm-2to3 to C:\Users\nlowe\AppData\Roaming\helm\plugins\helm-2to3
plugin_install.go:75: [debug] loading plugin from C:\Users\nlowe\AppData\Roaming\helm\plugins\helm-2to3
plugin.go:60: [debug] running install hook: &{sh [sh -c cd $HELM_PLUGIN_DIR; scripts/install_plugin.sh] [] <nil> <nil> <nil> [] %!s(*syscall.SysProcAttr=<nil>) %!s(*os.Process=<nil>) <nil> <nil> %!s(*exec.Error=&{sh 0xc00008d5c0}) %!s(bool=false) [] [] [] [] %!s(chan error=<nil>) %!s(chan struct {}=<nil>)}
Error: exec: "sh": executable file not found in %PATH%
helm.go:81: [debug] exec: "sh": executable file not found in %PATH%
Removing the hooks and patching the command
seems to fix it:
# plugin.yaml
name: "2to3"
version: "0.1.3"
usage: "migrate and cleanup Helm v2 configuration and releases in-place to Helm v3"
description: "migrate and cleanup Helm v2 configuration and releases in-place to Helm v3"
command: "$HELM_PLUGIN_DIR/2to3.exe"
~ $ helm3 plugin remove 2to3
Removed plugin: 2to3
~ $ helm3 plugin install C:\Users\nlowe\Downloads\helm-2to3\ --debug
[debug] symlinking C:\Users\nlowe\Downloads\helm-2to3 to C:\Users\nlowe\AppData\Roaming\helm\plugins\helm-2to3
plugin_install.go:75: [debug] loading plugin from C:\Users\nlowe\AppData\Roaming\helm\plugins\helm-2to3
Installed plugin: 2to3
~ $ helm3 2to3 --help
Migrate and Cleanup Helm v2 configuration and releases in-place to Helm v3
Usage:
2to3 [command]
Available Commands:
cleanup cleanup Helm v2 configuration, release data and Tiller deployment
convert migrate Helm v2 release in-place to Helm v3
help Help about any command
move migrate Helm v2 configuration in-place to Helm v3
Flags:
-h, --help help for 2to3
Use "2to3 [command] --help" for more information about a command.
When checking after cleanup, this Service remains:
$ kubectl get all --all-namespaces
[...]
kube-system service/tiller-deploy ClusterIP 10.111.188.252 <none> 44134/TCP 40s
[...]
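As a workaround until the plugin removes it, the leftover Service can be deleted by hand. A minimal sketch, assuming Tiller was deployed with the default names into kube-system; the command is built into a variable so it can be reviewed before being run against a live cluster:

```shell
#!/bin/sh
# Assumption: default Tiller install (Service "tiller-deploy" in kube-system).
TILLER_NS="kube-system"
CLEANUP_CMD="kubectl -n $TILLER_NS delete service tiller-deploy"
echo "$CLEANUP_CMD"
# eval "$CLEANUP_CMD"   # uncomment to actually delete the Service
```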
I'm very new to Helm, and I've inherited a Helm 2 project that deploys via CircleCI to AWS EKS. Since the repos etc. are all on CircleCI, how should this upgrade process work? Does the upgrade from 2 to 3 affect any files in source control, or is it solely about not breaking deployment artifacts? Do I even need to do this upgrade at all, or can I simply start using Helm 3 and rely on the CI to run from an older commit/image/cache, or would that be a foot-gun moment? Does the upgrade process need to be pushed through CircleCI by committing some procedure?
That should show more detailed output.
helm/helm#6866 introduced a breaking change to the storage backend which will affect how 2to3 creates releases. A larger explanation/justification for the breaking change was provided in helm/helm#6881 (comment). The 2to3 plugin should cut a new release compiled against Helm 3.0.0-rc.3 once it's been released later today to be compatible with Helm 3.0.0.
I'm currently using Helm with the Tillerless option. When I tried to convert a release, I got an Error: RELEASE_NAME has no deployed releases error, regardless of whether I used --dry-run or not:
$ helm3 2to3 convert --dry-run grafana --tiller-out-cluster
2019/11/26 16:58:34 NOTE: This is in dry-run mode, the following actions will not be executed.
2019/11/26 16:58:34 Run without --dry-run to take the actions described below:
2019/11/26 16:58:34
2019/11/26 16:58:34 Release "grafana" will be converted from Helm v2 to Helm v3.
2019/11/26 16:58:34 [Helm 3] Release "grafana" will be created.
Error: grafana has no deployed releases
Error: plugin "2to3" exited with error
Versions:
Helm: 2.16.1
Helm-tillerless plugin: 0.9.3 (just upgraded from 0.8.1 after I got the error)
Raised in helm/helm#5892 (comment)
I ran the migration steps as intended and ended up with broken plugins. It seems that the 2to3 move step creates symlinks to existing plugins (which are later cleaned up) in the new plugin directory.
lrwxr-xr-x 1 tfall staff 72 Dec 3 09:22 helm-2to3 -> /Users/tfall/Library/Caches/helm/plugins/https-github.com-helm-helm-2to3
lrwxr-xr-x 1 tfall staff 69 Dec 3 16:08 helm-diff -> /Users/tfall/.helm/cache/plugins/https-github.com-databus23-helm-diff
lrwxr-xr-x 1 tfall staff 68 Dec 3 16:08 helm-edit -> /Users/tfall/.helm/cache/plugins/https-github.com-mstrzele-helm-edit
lrwxr-xr-x 1 tfall staff 68 Dec 3 16:08 helm-env -> /Users/tfall/.helm/cache/plugins/https-github.com-adamreese-helm-env
lrwxr-xr-x 1 tfall staff 65 Dec 3 16:08 helm-gcs -> /Users/tfall/.helm/cache/plugins/https-github.com-nouney-helm-gcs
lrwxr-xr-x 1 tfall staff 74 Dec 3 16:08 helm-github -> /Users/tfall/.helm/cache/plugins/https-github.com-technosophos-helm-github
lrwxr-xr-x 1 tfall staff 71 Dec 3 16:08 helm-gpg -> /Users/tfall/.helm/cache/plugins/https-github.com-technosophos-helm-gpg
lrwxr-xr-x 1 tfall staff 70 Dec 3 16:08 helm-hashtag -> /Users/tfall/.helm/cache/plugins/https-github.com-balboah-helm-hashtag
lrwxr-xr-x 1 tfall staff 75 Dec 3 16:08 helm-keybase -> /Users/tfall/.helm/cache/plugins/https-github.com-technosophos-helm-keybase
lrwxr-xr-x 1 tfall staff 69 Dec 3 16:08 helm-last -> /Users/tfall/.helm/cache/plugins/https-github.com-adamreese-helm-last
lrwxr-xr-x 1 tfall staff 66 Dec 3 16:08 helm-logs -> /Users/tfall/.helm/cache/plugins/https-github.com-maorfr-helm-logs
lrwxr-xr-x 1 tfall staff 69 Dec 3 16:08 helm-nuke -> /Users/tfall/.helm/cache/plugins/https-github.com-adamreese-helm-nuke
lrwxr-xr-x 1 tfall staff 67 Dec 3 16:08 helm-s3 -> /Users/tfall/.helm/cache/plugins/https-github.com-hypnoglow-helm-s3
lrwxr-xr-x 1 tfall staff 75 Dec 3 16:08 helm-secrets -> /Users/tfall/.helm/cache/plugins/https-github.com-futuresimple-helm-secrets
lrwxr-xr-x 1 tfall staff 63 Dec 3 16:08 helm-stop -> /Users/tfall/.helm/cache/plugins/https-github.com-IBM-helm-stop
lrwxr-xr-x 1 tfall staff 76 Dec 3 16:08 helm-template -> /Users/tfall/.helm/cache/plugins/https-github.com-technosophos-helm-template
lrwxr-xr-x 1 tfall staff 71 Dec 3 16:08 helm-tiller -> /Users/tfall/.helm/cache/plugins/https-github.com-adamreese-helm-tiller
lrwxr-xr-x 1 tfall staff 73 Dec 3 16:08 helm-tiller-info -> /Users/tfall/.helm/cache/plugins/https-github.com-maorfr-helm-tiller-info
lrwxr-xr-x 1 tfall staff 74 Dec 3 16:08 helm-whatup -> /Users/tfall/.helm/cache/plugins/https-github.com-bacongobbler-helm-whatup
You can see that newly installed plugins (2to3) land in the correct location, but the plugin binaries are not moved for existing plugins. At this point helm plugin list will show all plugins, as intended, and the plugins can still be used.
When running the 2to3 cleanup step, the ~/.helm directory is removed, along with the previously installed plugin binaries. This leaves the symlinks in the new plugin directory pointing at nothing, and the user can no longer access their (old) plugins.
Additionally, if the user now runs helm plugin list, they won't see any of the previously installed plugins. Without knowing the cause, if the user runs helm plugin install <url> in an attempt to reinstall the "missing" plugins, the command errors out with symlink exists.
Edit: helm plugin remove also errors out. The symlinks must be removed manually before the plugins can be reinstalled with helm plugin install.
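A sketch for recovering from this: delete only the dangling symlinks so the missing plugins can be reinstalled. Assumption: the Helm v3 plugin directory is whatever helm env HELM_PLUGINS prints; double-check the path before pointing a deleting command at it.

```shell
#!/bin/sh
# prune_dangling_links: remove symlinks in directory $1 whose targets no
# longer exist. Prints each path before deleting it.
prune_dangling_links() {
  find "$1" -maxdepth 1 -type l ! -exec test -e {} \; -print -delete
}

# Hypothetical usage (verify the directory first):
# prune_dangling_links "$(helm env HELM_PLUGINS)"
```

After pruning, helm plugin install &lt;url&gt; should no longer fail with symlink exists.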
By default --dry-run is set, so no cleanup can be done without providing the --disable-dry-run flag.
We do not want users to run the full cleanup by mistake.
It would be handy to have a dry-run option like the other operations in the plugin.