
Kubernetes Cluster API Provider Bring Your Own Host (BYOH)


What is Cluster API Provider BYOH

Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration and management.

BYOH is a Cluster API Infrastructure Provider for already-provisioned hosts running Linux. This provider allows operators to adopt Cluster API for deploying and managing kubernetes nodes without also having to adopt a specific infrastructure service. This enables users to decouple kubernetes node provisioning from host and infrastructure provisioning.

BYOH Glossary

Host - A host is a running computer system. It could be physical or virtual. It has a kernel and some base operating system.

BYO Host - A Linux host provisioned and managed outside of Cluster API.

BYOH Capacity Pool - A set of BYO Hosts registered in a management cluster and authorized for use as capacity for deploying Kubernetes nodes.

Kubernetes Node - A Kubernetes Node that runs on top of a Host. There is a 1-to-1 relationship between nodes and hosts (every host has zero or one nodes). Node provisioning and lifecycle management is a Cluster API responsibility.

Kubernetes Host Components - The components that run uncontainerized on the host and are required to bootstrap a Kubernetes node. Typically, this is at least kubelet, containerd and kubeadm, but different operating systems might require different components in this category.

Features

  • Native Kubernetes manifests and API
  • Support for single and multi-node control plane clusters
  • Support for already-provisioned Linux VMs running Ubuntu 20.04

Getting Started

Check out the getting_started guide for launching a BYOH workload cluster

Community, discussion, contribution, and support

The BringYourOwnHost provider is developed in the open, and is constantly being improved by our users, contributors, and maintainers. If you have questions or want to get the latest project news, you can connect with us in the following ways:

  • Chat with us on the Kubernetes Slack in the #cluster-api channel
  • Subscribe to the SIG Cluster Lifecycle Google Group for access to documents and calendars
  • Join our Cluster API Provider for BringYourOwnHost working group sessions, where we share the latest project news and demos, answer questions, and triage issues

Pull Requests and feedback on issues are very welcome! See the issue tracker if you're unsure where to start, especially the Good first issue and Help wanted tags, and also feel free to reach out to discuss.

See also our contributor guide and the Kubernetes community page for more details on how to get involved.

Project Status

This project is currently a work-in-progress, in an Alpha state, so it may not be production ready. There is no backwards-compatibility guarantee at this point. For more details on the roadmap and upcoming features, check out the project's issue tracker on GitHub.

Getting involved and contributing

Launching a Kubernetes cluster using BYOH source code

Check out the developer guide for launching a BYOH cluster consisting of Docker containers as hosts.

More about development and contributing practices can be found in CONTRIBUTING.md.

Implement Custom Installer controller

An installer controller is responsible for providing the installation and uninstallation scripts for Kubernetes dependencies, prerequisites, and components on each BYOHost.
If you want to implement your own installer controller, follow the contract defined in the installer doc.
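
For orientation, here is a minimal Go sketch of the kind of contract such an installer might expose. The interface and method names are hypothetical; the installer doc remains the authoritative definition of the contract.

package installer

import "context"

// Installer captures the kind of contract an installer controller's
// installer needs to satisfy: install and uninstall the Kubernetes host
// components (kubelet, containerd, kubeadm, ...) on a BYOHost.
type Installer interface {
    // Install puts the host components for the requested Kubernetes
    // version, resolved through the bundle tag, onto the host.
    Install(ctx context.Context, k8sVersion, bundleTag string) error
    // Uninstall removes whatever Install put on the host.
    Uninstall(ctx context.Context, k8sVersion, bundleTag string) error
}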


Compatibility with Cluster API

  • BYOH is currently compatible with Cluster API v1beta1 (v1.0)

Supported OS and Kubernetes versions

Operating System   Architecture   Kubernetes v1.24.*   Kubernetes v1.25.*   Kubernetes v1.26.*
Ubuntu 20.04.*     amd64          ✓                    ✓                    ✓

NOTE: The '*' in OS means that all Ubuntu 20.04 patches are supported.

NOTE: The '*' in the K8s version means that the K8s minor release is supported, but a BYOH bundle for a specific patch version may not exist in the OCI registry.

BYOH in News


cluster-api-provider-bringyourownhost's Issues

Create labels

We should create a set of labels for this repository.

I suggest we start with a small set of labels consistent with the labels defined in the main CAPI repository, e.g.

  • help wanted
  • good first issue
  • do-not-merge/work-in-progress
  • kind/bug
  • kind/cleanup
  • kind/design
  • kind/documentation
  • kind/failing-test
  • kind/api-change
  • kind/feature
  • kind/support
  • priority/awaiting-more-evidence
  • priority/backlog
  • priority/critical-urgent
  • priority/important-longterm
  • priority/important-soon

Plus area specific labels

  • area/controller
  • area/node-agent

Integrate host agent with the installer

  • host agent should exit if the OS is not supported (i.e., don't wait for host registration)
  • call the install / uninstall methods of the installer, passing the k8sVersion and bundleTag params (see the sketch below)
  • write unit tests wherever applicable
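
A rough Go sketch of the agent-side wiring described above; the detection and installer pieces here use hypothetical names and only illustrate the intended flow.

package agent

import (
    "context"
    "fmt"
)

// installer stands in for the real installer contract.
type installer interface {
    Install(ctx context.Context, k8sVersion, bundleTag string) error
}

// installK8sComponents fails fast when the detected OS is unsupported
// (instead of waiting for host registration) and otherwise hands the
// Kubernetes version and bundle tag straight to the installer.
func installK8sComponents(ctx context.Context, detectedOS string, supportedOS map[string]bool, inst installer, k8sVersion, bundleTag string) error {
    if !supportedOS[detectedOS] {
        return fmt.Errorf("unsupported OS %q, exiting", detectedOS)
    }
    return inst.Install(ctx, k8sVersion, bundleTag)
}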

Create the BYOHost API type

We should create the BYOHost API type providing a representation of a BYO Host that enters the cluster-api-provider-byoh capacity pool, set up the generation of the corresponding CRD type, and configure the related validation webhook endpoint.

A BYO Host is a host provisioned outside of Cluster API, with the entire stack up to the OS already configured; the Kubernetes host components (e.g. containerd, kubelet, kubeadm) might be pre-provisioned (like the OS) or managed by this infrastructure provider. A rough sketch of the resulting API type follows the list below.

In the initial version of the Host kind

  • BYOHost will be a global resource (not namespaced)
  • We are going to use labels for documenting:
    • System architecture (e.g. ARM)
    • Host operating system (e.g. Windows)
    • Bespoke devices (e.g. GPU, FPGA)
    • Fault tolerance criteria (e.g. Location, Failure Domain)
    • If the Kubernetes binaries are user-provided, the installed Kubernetes version
    • Any other concept that can be used for defining resource pools (e.g. tenant/team, project etc.)
  • The BYOHost.Spec section will contain a bare minimal set of information required to connect to the host in an insecure way.
    • Note: In the following iterations this set must be extended for supporting secure communications; eventually we could also consider shifting to a different communication protocol, with the node polling/watching the API server/a service endpoint.
    • Note: we are keeping host connection info in the Spec in order to make it easier to test host creation manually; in the future we might consider moving this info to the status and using the node agent as the authoritative source.
  • The BYOHost.Status section will contain a bare minimal set of information required to handle host reservation.
    • Note: In the following iterations, this set will most probably be extended for:
      • including host health check information/other information generated by the node agent
      • supporting host maintenance workflows
      • etc.
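
A rough Go sketch of the shape described above, with illustrative field names; the real type is whatever the generated CRD in this repository ends up defining.

package v1alpha3

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ByoHost is sketched here as a cluster-scoped (non-namespaced) resource;
// architecture, OS, bespoke devices, failure domain, tenant, etc. are
// expressed as labels on ObjectMeta rather than as dedicated fields.
type ByoHost struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   ByoHostSpec   `json:"spec,omitempty"`
    Status ByoHostStatus `json:"status,omitempty"`
}

// ByoHostSpec carries the bare minimum needed to connect to the host
// (insecure in this first iteration, as noted above).
type ByoHostSpec struct {
    Address string `json:"address,omitempty"`
}

// ByoHostStatus carries the bare minimum needed to handle host reservation.
type ByoHostStatus struct {
    // MachineRef points to the Machine this host has been reserved for.
    MachineRef *corev1.ObjectReference `json:"machineRef,omitempty"`
}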

Refactor the Install/Uninstall methods of the installer

Describe the solution you'd like
The Install/Uninstall methods currently modify the state of the installer - which is a bad design. See discussion here

This code was written when we assumed that we would have the bundleRepo Address at startup time. This assumption is not true, and we will only know this information at Install/Uninstall time.

So let's refactor the installer to reflect this updated assumption.
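
One way the refactor could look, sketched in Go with hypothetical names; the before/after in the comment is illustrative, not the actual code.

package installer

import "context"

// Before the refactor (roughly): the bundle repo address was installer state,
// captured at construction time, and Install mutated that state.
// After: the bundle repository is only known at call time, so it becomes a
// plain argument and Install/Uninstall no longer mutate the installer.
type Installer interface {
    Install(ctx context.Context, bundleRepo, k8sVersion, tag string) error
    Uninstall(ctx context.Context, bundleRepo, k8sVersion, tag string) error
}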

Add Governance file

Add a GOVERNANCE file that defines leadership, community engagement, maintainers, expected maturity or growth patterns, commitment (or non-commitment) to open governance, and architecture decisions.

Refer to Antrea Governance.md

Only run [PR-Blocking] tests for PRs

Describe the solution you'd like
We should only run the [PR-Blocking] tests as part of each PR. This will help improve the feedback time to folks raising PRs.

We should also have the ability to run the rest of the suite on demand, preferably via GitHub slash commands.

Initial scaffolding for the infrastructure provider

Adds the initial bare-bones scaffolding for the infrastructure provider.

At a high level this is similar to:

kubebuilder init --domain infrastructure.cluster.x-k8s.io
kubebuilder create api --group infrastructure --version v1alpha3 --kind BYOMachine
kubebuilder create api --group infrastructure --version v1alpha3 --kind BYOMachineTemplate

Plus applying some CAPI specific customization:

Setup presubmits checks

We should set up some presubmit checks to ensure a minimal code quality baseline before new code merges into the repo.

#6 is adding a Makefile with a set of targets that can be used for this purpose:

  • make lint Run all the lint targets
  • make verify Run all the verify targets
  • make test Run tests

Add check for broken and invalid links.

Describe the solution you'd like

As the project grows in size, it becomes harder and harder to keep all the links and references to internal and external docs up to date, which is an obstacle for any new user or developer trying BYOH on their machine if they cannot find proper references.

It would be good to add automated checks using GitHub Actions to detect broken links and report them so they can be fixed as quickly as possible.

Anything else you would like to add:
[Miscellaneous information that will assist in implementing the feature.]

We can use markdown-link-check
https://github.com/marketplace/actions/markdown-link-check

Update README to include more details

Currently, our README does not provide enough details on what the provider is, what it does, and what the use cases are. Add more details to make the provider more user-friendly.

Define a target repository for generated images

Given that we are going to generate image artefacts for the infrastructure provider, we should define the target repository and fix the manifests under /config, the Makefile, and the Tilt configuration accordingly.

NB: for the sake of testing we are currently using gcr.io/cluster-api-provider-byoh/release/manager, following the CAPI convention, but I'm not sure this convention applies to this repo too.

Create Release Targets

Create make targets that will be used for release activities

  • host-agent-binary
  • infrastructure-components.yaml
  • cluster-template.yaml
  • metadata.yaml
  • [ ] byoh controller image (build and push)
  • [ ] imgpkg k8s bundle (build and push)

e2e tests exercise the installer

Describe the solution you'd like
At the moment, we skip running the installer in our e2e tests. This is because the containers need to run in privileged mode, and they end up making kernel changes on the system they are running on.

We should have the tests spin up a VM, and then run the BYOH host containers in this VM, so that the user's system is untouched. Once we do that, we should enable the installer to run in our e2e tests.

Implement BYOMachine reconciler loop

We should start to sketch out the provider reconciler loop

  • The overall approach should be consistent with the other Cluster API controllers (e.g. reconcileNormal, reconcileDelete)
  • There should be the usual stuff implemented by an infrastructure provider (finaliser, waiting for cluster infrastructure, etc.)
  • There should be a skeleton for the BYO-specific workflow, which now should consider #9 (comment)

TBD whether the implementation of the BYO-specific workflow should be in a single PR or in a separate PR.
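
A bare Go skeleton of that shape, with the BYO-specific steps left as comments; this is a sketch of the intended structure, not the eventual implementation.

package controllers

import (
    "context"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// ByoMachineReconciler reconciles ByoMachine objects.
type ByoMachineReconciler struct {
    client.Client
}

func (r *ByoMachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Fetch the ByoMachine, its owning Machine and Cluster, manage the
    // finalizer, and wait for the cluster infrastructure to be ready
    // (all elided here). Then branch the same way other CAPI providers do:
    // objects with a deletion timestamp go through reconcileDelete,
    // everything else through reconcileNormal.
    return ctrl.Result{}, nil
}

func (r *ByoMachineReconciler) reconcileNormal(ctx context.Context) (ctrl.Result, error) {
    // BYO-specific workflow: reserve a ByoHost from the capacity pool and
    // attach it to this machine (see the discussion in #9).
    return ctrl.Result{}, nil
}

func (r *ByoMachineReconciler) reconcileDelete(ctx context.Context) (ctrl.Result, error) {
    // Release the reserved host and remove the finalizer.
    return ctrl.Result{}, nil
}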

Add demo video of how BYOH works

I have noticed some short demo videos on how BYOH works. I think it would be beneficial for new folks landing on the project (repo, readme) to see similar short demo videos to get a quick glimpse of BYOH. These video(s) could be linked in the repo readme.

Remove hardcoded IP for ControlPlaneEndpoint

What steps did you take and what happened:
Currently, the ControlPlaneEndpoint IP is hardcoded in the cluster-template file


cluster-template.yaml is part of release artifacts and hardcoding the IP breaks the getting-started guide

What did you expect to happen:
Use a variable that can be provided by the user. Refer to the getting-started guide.
For the e2e tests, we need to be able to pass the variable in the suite or in individual tests.

Implement the Host Agent

The host agent will be responsible for the host registration process (see #10) and for creating/deleting a Kubernetes node on the host.

In order to validate the proposed workflow, the implementation builds on top of what is described in #10, with the following additional considerations (a reduced sketch of the cloud-init handling follows the list):

  • During the POC we are going to use a custom cloud init interpreter similar to the one implemented in CAPD. This will include limited support for write_files and runcmd directives only.
  • We are going to implement a very simple integration with kubeadm (shell out).
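
A deliberately reduced Go sketch of what such an interpreter could look like. Names are illustrative; real cloud-init runcmd entries can also be lists, and permissions/owner handling is ignored here.

package cloudinit

import (
    "fmt"
    "os"
    "os/exec"

    "sigs.k8s.io/yaml"
)

// bootstrapConfig models just the two supported directives.
type bootstrapConfig struct {
    WriteFiles []struct {
        Path    string `json:"path"`
        Content string `json:"content"`
    } `json:"write_files"`
    RunCmd []string `json:"runcmd"`
}

// Run writes the requested files, then shells out for each command; invoking
// kubeadm ends up being just another runcmd entry executed through the shell.
func Run(bootstrapData []byte) error {
    var cfg bootstrapConfig
    if err := yaml.Unmarshal(bootstrapData, &cfg); err != nil {
        return err
    }
    for _, f := range cfg.WriteFiles {
        if err := os.WriteFile(f.Path, []byte(f.Content), 0o644); err != nil {
            return err
        }
    }
    for _, cmd := range cfg.RunCmd {
        if out, err := exec.Command("/bin/sh", "-c", cmd).CombinedOutput(); err != nil {
            return fmt.Errorf("runcmd %q failed: %v: %s", cmd, err, out)
        }
    }
    return nil
}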

Add clusterctl support for BYOH Provider

BYOH provider has to be added to the list of providers that clusterctl currently supports.

$ clusterctl config repositories
NAME           TYPE                     URL                                                                                          FILE
cluster-api    CoreProvider             https://github.com/kubernetes-sigs/cluster-api/releases/latest/                              core-components.yaml
aws-eks        BootstrapProvider        https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest/                 eks-bootstrap-components.yaml
kubeadm        BootstrapProvider        https://github.com/kubernetes-sigs/cluster-api/releases/latest/                              bootstrap-components.yaml
talos          BootstrapProvider        https://github.com/talos-systems/cluster-api-bootstrap-provider-talos/releases/latest/       bootstrap-components.yaml
aws-eks        ControlPlaneProvider     https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest/                 eks-controlplane-components.yaml
kubeadm        ControlPlaneProvider     https://github.com/kubernetes-sigs/cluster-api/releases/latest/                              control-plane-components.yaml
nested         ControlPlaneProvider     https://github.com/kubernetes-sigs/cluster-api-provider-nested/releases/latest/              control-plane-components.yaml
talos          ControlPlaneProvider     https://github.com/talos-systems/cluster-api-control-plane-provider-talos/releases/latest/   control-plane-components.yaml
aws            InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest/                 infrastructure-components.yaml
azure          InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-azure/releases/latest/               infrastructure-components.yaml
digitalocean   InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/releases/latest/        infrastructure-components.yaml
docker         InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api/releases/latest/                              infrastructure-components-development.yaml
gcp            InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-gcp/releases/latest/                 infrastructure-components.yaml
maas           InfrastructureProvider   https://github.com/spectrocloud/cluster-api-provider-maas/releases/latest/                   infrastructure-components.yaml
metal3         InfrastructureProvider   https://github.com/metal3-io/cluster-api-provider-metal3/releases/latest/                    infrastructure-components.yaml
nested         InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-nested/releases/latest/              infrastructure-components.yaml
openstack      InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-openstack/releases/latest/           infrastructure-components.yaml
packet         InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-packet/releases/latest/              infrastructure-components.yaml
sidero         InfrastructureProvider   https://github.com/talos-systems/sidero/releases/latest/                                     infrastructure-components.yaml
vsphere        InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/latest/             infrastructure-components.yaml

This will allow us to use clusterctl and its features out of the box. At the moment, we are manually adding BYOH provider to clusterctl.yaml for clusterctl support. Check out the getting started guide - Configuring clusterctl for BYOH support

To understand what changes are needed, refer to kubernetes-sigs/cluster-api#5189

Test package naming convention

Currently, we have a mix of test package naming conventions - same package name and package_test name. We should adhere to the standard practice of using only the package_test name.

See some examples -



Why?

  • It enforces blackbox testing - i.e., testing only the exported methods / functions, and that's what an end user would do as well.
  • avoids import cycles

How to test helper functions?
In that case, we can benefit from this Golang trick of export_test.go - Read more here
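
A minimal sketch of the convention, assuming a hypothetical installer package with an exported ListSupported function and an unexported parseVersion helper; the import path is illustrative.

// installer_test.go — a black-box test in a separate *_test package.
package installer_test

import (
    "testing"

    "github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/installer"
)

func TestListSupported(t *testing.T) {
    // Only the exported surface of the installer package is visible here,
    // which is exactly what an end user of the package sees.
    if got := installer.ListSupported(); len(got) == 0 {
        t.Fatal("expected at least one supported bundle")
    }
}

// To reach an unexported helper from this package, an export_test.go file
// inside the installer package (compiled only during tests) can re-export it:
//
//	package installer
//
//	var ParseVersion = parseVersion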

How to ensure future code is also consistent?
Enable testpackage linter in golangci-lint

References:

Fix the getting_started to work with the new release

What steps did you take and what happened:
At the moment, the steps in the getting started guide don't work for a fresh user. Problems we noticed:

  • There is no guidance for folks who want to try this out on Docker. So maybe add a section for folks to start Docker containers as hosts.
  • The Kubernetes version is hardcoded to v1.21.2+vmware.1. We should take that out.

Management Cluster should not need network reachability to each host

Describe the solution you'd like
At the moment, we assume that the management cluster can reach each registered host. There are two places we know of at the moment where this is true:

  1. We try to retrieve the ProviderID from the host
  2. The Machine Health Checks must be connecting to the host to find out if it needs remediation.

We should remove this dependency. As @yixingjia put it

we should always try to avoid access workload cluster directly from mgmt cluster. and let agent report it’s status instead. Most edge style solution just follow this kind of method.

This will allow users to deploy a management cluster in the public cloud, and deploy hosts in a private network, and not have to punch firewall holes to get this to work.
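
A small Go sketch of that direction, using hypothetical wiring around controller-runtime's status patching; the concrete fields the agent would report are placeholders.

package agent

import (
    "context"

    "sigs.k8s.io/controller-runtime/pkg/client"
)

// reportProviderID shows the direction of travel: the agent, which already
// has a client to the management cluster, patches its own ByoHost status so
// the management cluster reads the information from the API server instead
// of dialing the host. byoHost and the status mutation are placeholders.
func reportProviderID(ctx context.Context, c client.Client, byoHost client.Object, setProviderID func()) error {
    before := byoHost.DeepCopyObject().(client.Object)
    setProviderID() // e.g. populate a provider ID or health info on the status
    return c.Status().Patch(ctx, byoHost, client.MergeFrom(before))
}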

We should refactor the agent cli output

We currently have a bunch of flags on the agent command; some of them should be moved to subcommands (a possible layout is sketched after the output below).

root@byohnode02:/home/ecl# ./byoh-hostagent-linux-amd64 -h
Usage of ./byoh-hostagent-linux-amd64:
  -add_dir_header
    	If true, adds the file directory to the header
  -alsologtostderr
    	log to standard error as well as files
  -bundle-repo string
    	BYOH Bundle Repository (default "projects.registry.vmware.com")
  -cache-path string
    	Path to the local bundle cache (default ".")
  -detect
    	Detects the current operating system
  -downloadpath string
    	File System path to keep the downloads (default "/var/lib/byoh/bundles")
  -install
    	Install a BYOH Bundle
  -k8s string
    	Kubernetes version (default "1.22.1")
  -kubeconfig string
    	Paths to a kubeconfig. Only required if out-of-cluster.
  -label value
    	labels to attach to the ByoHost CR in the form labelname=labelVal for e.g. '--label site=apac --label cores=2'
  -list-supported
    	List all supported OS, Kubernetes versions and BYOH Bundle names
  -log_backtrace_at value
    	when logging hits line file:N, emit a stack trace
  -log_dir string
    	If non-empty, write log files in this directory
  -log_file string
    	If non-empty, use this log file
  -log_file_max_size uint
    	Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
  -logtostderr
    	log to standard error instead of files (default true)
  -metricsbindaddress string
    	metricsbindaddress is the TCP address that the controller should bind to for serving prometheus metrics.It can be set to "0" to disable the metrics serving (default ":8080")
  -namespace string
    	Namespace in the management cluster where you would like to register this host (default "default")
  -os string
    	OS. If used with install/uninstall, override os detection
  -preview-os-changes
    	Preview the install and uninstall changes for the specified OS
  -skip-installation
    	If you want to skip installation of the kubernetes component binaries
  -skip_headers
    	If true, avoid header prefixes in the log messages
  -skip_log_headers
    	If true, avoid headers when opening log files
  -stderrthreshold value
    	logs at or above this threshold go to stderr (default 2)
  -tag string
    	BYOH Bundle tag
  -uninstall
    	Unnstall a BYOH Bundle
  -v value
    	number for the log level verbosity
  -vmodule value
    	comma-separated list of pattern=N settings for file-filtered logging

When we try to list the supported bundles, the output is as follows; it seems to start the agent instead of just listing the bundles.

root@byohnode02:/home/ecl# ./byoh-hostagent-linux-amd64 -kubeconfig management.conf -list-supported
I1118 06:19:39.662245    6236 host_registrar.go:37] Registering ByoHost
I1118 06:19:39.666148    6236 host_registrar.go:71] Add Network Info
I1118 06:19:41.827142    6236 deleg.go:130] controller-runtime/metrics "msg"="metrics server is starting to listen"  "addr"=":8080"
I1118 06:19:41.828111    6236 deleg.go:130]  "msg"="starting metrics server"  "path"="/metrics"
I1118 06:19:41.828476    6236 controller.go:178] controller/byohost "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{}}}
I1118 06:19:41.828570    6236 controller.go:186] controller/byohost "msg"="Starting Controller" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" 
I1118 06:19:41.930364    6236 controller.go:220] controller/byohost "msg"="Starting workers" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" "worker count"=1
I1118 06:19:41.930733    6236 host_reconciler.go:49] controller/byohost "msg"="Reconcile request received" "name"="byohnode02" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" 
I1118 06:19:41.934513    6236 host_reconciler.go:88] controller/byohost "msg"="Machine ref not yet set" "name"="byohnode02" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" 
^CI1118 06:54:30.215121    6236 controller.go:240] controller/byohost "msg"="Shutdown signal received, waiting for all workers to finish" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" 
I1118 06:54:30.215303    6236 controller.go:242] controller/byohost "msg"="All workers finished" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
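
One possible shape for the refactor, sketched with spf13/cobra and hypothetical wiring; the point is that actions like install, uninstall and list-supported become subcommands that run and exit, instead of boolean flags on the long-running agent.

package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

func main() {
    root := &cobra.Command{Use: "byoh-hostagent"}
    // Flags that configure the long-running agent stay on the root command.
    root.PersistentFlags().String("kubeconfig", "", "path to the management cluster kubeconfig")

    install := &cobra.Command{
        Use:   "install",
        Short: "Install a BYOH bundle and exit",
        RunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("install would run here, without starting the reconciler")
            return nil
        },
    }
    listSupported := &cobra.Command{
        Use:   "list-supported",
        Short: "List supported OS, Kubernetes versions and bundle names, then exit",
        RunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("print the support matrix here; no host registration, no controller")
            return nil
        },
    }
    root.AddCommand(install, listSupported)
    cobra.CheckErr(root.Execute())
}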

Add Adopters file

As the project grows and starts to be consumed by different users / companies, an Adopters file is a good place to collect user success stories. We can start with an empty file with minimal instructions for contacting the maintainers in case anybody wants to add their logo / stories to the file.

Refer to the Adopters file by velero

We should add ARM64 support for BYOH

The current workaround is:

  • Compile the agent code for arm64:
    GOARCH=arm64

  • add the
    -skip-installation true
    flag when starting the agent.

In the long term we should

  • Multi-arch support for BYOH controller image
  • Provide agent binary for ARM64 platform
  • Add ARM support in installer code.

Create issue templates

We should create issue templates for:

  • Bug report
  • Feature request
  • (TBD) Security vulnerability report

Implement host registration

The host agent will be responsible for the host registration process (more responsibilities will be added in the future).

In the first iteration of this component:

  • The host agent will be a CLI program, to be run as a daemon or as a foreground process for demo purposes
  • Host agent requires basic logging capabilities (log to file, log to stdout/stderr).
  • We are targeting only linux/amd64, Ubuntu, without any special devices.
  • We are not implementing support for auto discovery of host feature sets; host features will be manually passed to the host agent via a --label flag mapping the list of labels defined in #9
  • host registration will only take care of creating the corresponding BYOHost object, without addressing any further requirements with regard to secure attestation.
  • In order to create the corresponding BYOHost object, a kubeconfig file should be provided to the host agent via the --kubeconfig flag (see the registration sketch after this list); management/lifecycle of the identity used in the above kubeconfig is out of scope for this issue
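
A bare-bones Go sketch of that registration step under these constraints, with an unstructured object standing in for the generated ByoHost type; the field names and wiring are illustrative only.

package agent

import (
    "context"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/tools/clientcmd"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// registerHost builds a client from the --kubeconfig file and creates a
// ByoHost object carrying the --label values passed on the command line.
func registerHost(ctx context.Context, kubeconfigPath, hostName string, labels map[string]string) error {
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    if err != nil {
        return err
    }
    c, err := client.New(cfg, client.Options{})
    if err != nil {
        return err
    }
    // Use an unstructured object so the sketch does not depend on the
    // generated API types; the real agent would use the typed ByoHost.
    host := &unstructured.Unstructured{}
    host.SetGroupVersionKind(schema.GroupVersionKind{
        Group: "infrastructure.cluster.x-k8s.io", Version: "v1alpha3", Kind: "ByoHost",
    })
    host.SetName(hostName)
    host.SetLabels(labels) // host features supplied via the --label flag
    return c.Create(ctx, host)
}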

Add repo description

The repo could use a description so that people can quickly read a one-liner about the project. I believe this would also show up in repo previews when sharing links on different media.

[screenshot]

Link previews of GitHub repos on Slack -

[screenshot]

Add Prow like actions for BYOH

Describe the solution you'd like

Prow makes it easier to interact with PRs and issues. Until we have Prow set up for BYOH, we can use https://github.com/jpmcb/prow-github-actions to start experiencing the virtues of Prow.

Got the idea from #242 (comment)

Anything else you would like to add:
[Miscellaneous information that will assist in implementing the feature.]
Also, add a labeler https://github.com/actions/labeler which will be useful when the real Prow instance is integrated with BYOH.

Add more ldflags for host agent binary

Describe the solution you'd like
Add -ldflags "-w -s" when building the host agent binary.

Anything else you would like to add:
Currently the $(LDFLAGS) variable is empty; we can use more ldflags to optimise the host agent binary size.

$ make host-agent-binaries
RELEASE_BINARY=./byoh-hostagent GOOS=linux GOARCH=amd64 HOST_AGENT_DIR=./agent /Library/Developer/CommandLineTools/usr/bin/make host-agent-binary
docker run \
                --rm \
                -e CGO_ENABLED=0 \
                -e GOOS=linux \
                -e GOARCH=amd64 \
                -v "$(pwd):/workspace" \
                -w /workspace \
                golang:1.16.6 \
                go build -a -ldflags " -extldflags '-static'" \
                -o ./bin/byoh-hostagent-linux-amd64 ./agent

Create GitHub workflow for Draft Release

Define a workflow for creating a Draft Release with all the assets uploaded. Assets will be generated from #180
You can refer to a sample draft release action here

We need to

  • Ensure that the cluster-template.yaml does not have CNI_RESOURCES in it. At the moment, it does, because that's what our e2e tests use.
