kyma-incubator / terraform-provider-gardener
Terraform Provider for Gardener
License: Apache License 2.0
Description
At the moment a new release needs to be created manually; in particular, the binary is built on a local machine and uploaded.
Have automation based on Prow for building and uploading the release artifacts, similar to the CLI project.
Have the process documented in a dedicated MD file.
Reasons
Humans make mistakes, every local machine is different, and local builds are not reproducible.
Attachments
Description
With one provider definition, I want to deploy multiple Gardener clusters in the same project. As the secret binding is part of the provider definition, I am forced to define multiple provider definitions in my manifest, which does not seem reasonable. Please move it to a standard configuration option of the shoot resource.
The profile attribute can even be derived from the kubeconfig itself and might be removed.
Reasons
We should stay as close as possible to the original Gardener shoot definition and not block usage scenarios without reason (managing multiple clusters, each with a different secret, in one manifest).
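A sketch of how this could look, assuming a hypothetical per-shoot `secret_binding` attribute (attribute names here are illustrative, not part of the current schema):

```hcl
provider "gardener" {
  kube_file = "${file("gardener-kubeconfig.yaml")}"
}

# Hypothetical: the secret binding moves from the provider block to each
# shoot, so one provider definition can manage clusters with different secrets.
resource "gardener_shoot" "aws_cluster" {
  metadata {
    name      = "cluster-aws"
    namespace = "garden-myproject"
  }
  secret_binding = "aws-secret" # illustrative attribute
}

resource "gardener_shoot" "azure_cluster" {
  metadata {
    name      = "cluster-azure"
    namespace = "garden-myproject"
  }
  secret_binding = "azure-secret"
}
```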
Description
I built the current master version of the provider and want to create a cluster on Azure with an existing virtual network. As described in the schema, I set the `name` and `resource_group` properties accordingly.
Expected result
I would expect that a cluster is created which re-uses the given virtual network instead of creating a new one.
Actual result
The following error occurs when executing `terraform validate`:

```
The argument "cidr" is required, but no definition was found.
```
`cidr` should not be required when I want to re-use an existing virtual network; it should be optional.
When `cidr` is set, or when I modify the provider to make it optional, I get the following error when executing `terraform apply`:

```
Error: Shoot.core.gardener.cloud "test3" is invalid: spec.cloud.azure.networks.vnet: Invalid value: garden.AzureVNet{Name:(*string)(0xc001d14550), ResourceGroup:(*string)(nil), CIDR:(*string)(0xc001d14560)}: specifying an existing vnet require a vnet name and vnet resource group
```
The name and/or resource group is not passed to Gardener.
Steps to reproduce
Configure a cluster with the following `infrastructure_config`:
```hcl
infrastructure_config {
  azure {
    networks {
      vnet {
        cidr           = "10.250.0.0/16"
        name           = "foo"
        resource_group = "bar"
      }
      workers = "10.250.0.0/19"
    }
  }
}
```
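For reference, once `cidr` is optional, a configuration re-using the existing vnet would presumably look like this (a sketch of the expected usage, not the current behaviour):

```hcl
infrastructure_config {
  azure {
    networks {
      vnet {
        # cidr omitted: re-use the existing vnet instead of creating one
        name           = "foo"
        resource_group = "bar"
      }
      workers = "10.250.0.0/19"
    }
  }
}
```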
Description
As a user of the provider I would like to add service endpoints to the vnet, which is created by Gardener. This allows my worker nodes to access Azure services, e.g. Postgres databases.
This feature is also provided by the Gardener Azure API and is documented here.
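A sketch of how the schema could expose this, assuming a hypothetical `service_endpoints` attribute mirroring the `serviceEndpoints` field of the Gardener Azure InfrastructureConfig:

```hcl
infrastructure_config {
  azure {
    networks {
      vnet {
        cidr = "10.250.0.0/16"
      }
      workers = "10.250.0.0/19"
      # hypothetical attribute, not in the current schema
      service_endpoints = ["Microsoft.Sql", "Microsoft.Storage"]
    }
  }
}
```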
Description
The provider is used in the Kyma CLI to provision a Gardener cluster in a simple way for demo/development scenarios. The indirection via Terraform turned out to be cumbersome and did not bring much value for our scenario, so we replaced the setup with a simple implementation based on the Kubernetes API that directly creates a shoot spec.
With that, the provider is no longer needed in the Kyma project. As the provider seems to be used outside of the project, we should see if there is a Terraform-specific replacement available.
The https://github.com/hashicorp/terraform-provider-kubernetes introduced CRD support a while ago and should in theory be a drop-in replacement.
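A rough sketch of what a shoot could look like with the generic provider's `kubernetes_manifest` resource (values are illustrative; the full shoot spec would still need to be filled in):

```hcl
resource "kubernetes_manifest" "shoot" {
  manifest = {
    apiVersion = "core.gardener.cloud/v1beta1"
    kind       = "Shoot"
    metadata = {
      name      = "my-shoot"
      namespace = "garden-myproject"
    }
    spec = {
      cloudProfileName  = "aws"
      region            = "eu-west-1"
      secretBindingName = "my-secret"
    }
  }
}
```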
Reasons
Attachments
Description
Gardener provides some more resources besides shoots which are customer-facing and should be supported in order to be feature complete, see https://github.com/gardener/gardener/blob/master/docs/usage/configuration.md#configuration-and-usage-of-gardener-as-end-userstakeholdercustomer
Reasons
To be a full provider for Gardener, these resources must be supported.
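If these resources were added, usage could look roughly like this (resource names and attributes are hypothetical, modelled on the Gardener CRDs linked above):

```hcl
# Hypothetical resources, not implemented in the provider today
resource "gardener_project" "dev" {
  name      = "dev"
  namespace = "garden-dev"
}

resource "gardener_secret_binding" "aws" {
  name       = "aws-secret"
  namespace  = "garden-dev"
  secret_ref = "aws-credentials"
}
```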
Hello,
The scenario looks like this: I create a cluster with `terraform apply` and destroy it with `terraform destroy`. For example, in main.tf:
```hcl
metadata {
  name      = var.name
  namespace = var.gardener_namespace
}
```

With annotations added:

```hcl
metadata {
  name      = var.name
  namespace = var.gardener_namespace
  annotations = {
    "garden.sapcloud.io/purpose"                           = "development"
    "dashboard.garden.sapcloud.io/no-hibernation-schedule" = true
  }
}
```
I can add the cluster without problems via `terraform apply`, but when I want to destroy the cluster with `terraform destroy`, I get errors indicating that only a cluster with the annotation `"confirmation.garden.sapcloud.io/deletion" = true` can be deleted.
So I need to do the following steps:
a. add the deletion confirmation annotation:

```hcl
annotations = {
  "garden.sapcloud.io/purpose"                           = "development"
  "dashboard.garden.sapcloud.io/no-hibernation-schedule" = true
  "confirmation.garden.sapcloud.io/deletion"             = true
}
```
b. run `terraform apply` to modify the cluster
c. run `terraform destroy` to delete the cluster
So I wonder: do these steps work as designed? Or could terraform-provider-gardener be modified so that a cluster with the custom annotation can be deleted directly by executing `terraform destroy` once?
Thanks! Any hint on the implementation idea would be much appreciated.
Description
I am using version 0.0.5 of the provider and created an Azure cluster successfully. If I now change something, e.g. the Kubernetes version, I get the following error:

```
Error: resource name may not be empty
```
Expected result
I would expect that, for example, the Kubernetes version is updated for the cluster.
Actual result
See error above.
Steps to reproduce
Troubleshooting
In my opinion this error is caused because the metadata of the cluster object is not set unless it was changed explicitly (code).
If I change the metadata as well (e.g. add an annotation), I get the following error:

```
Error: Shoot.core.gardener.cloud "test3" is invalid: [spec.cloud.seed: Invalid value: "null": field is immutable, spec.seedName: Invalid value: "null": field is immutable, spec.cloud.azure.networks.pods: Invalid value: "null": field is immutable, spec.cloud.azure.networks.services: Invalid value: "null": field is immutable, spec.networking.pods: Invalid value: "null": field is immutable, spec.networking.services: Invalid value: "null": field is immutable]
```
Description
In my opinion the current project structure does not make it easy to maintain the provider at a consistent quality. Besides the missing tests (#55), the structure of the project itself makes it hard to ensure that the provider schema, expand, and flatten all work on the same assumptions. Therefore I propose the following new structure: https://github.com/pzamzow/terraform-provider-gardener/tree/structure/gardener
Reasons
The main task of the provider is basically defining an HCL schema and converting it from/to the Gardener structure. To implement this you need to work with the files above, which are all 300+ or 700+ lines of code. So it is very hard during development to ensure that all of this fits together, and it is not easy to see when there is a mismatch.
With the new structure it is easy to see that the following is wrong:
The items above are all examples of the current state of the provider, where schema, flatten, and expand do not match. It is almost impossible to figure this out with the current structure.
In addition, I would recommend putting everything into a single package. This seems to be best practice at other providers as well. It also makes the calculation of test coverage easier (see #55). At the moment there are multiple packages with just a single file in them.
Testing is also easier with the new structure because you can write unit tests for the individual components instead of always testing the whole shoot schema.
I agree that the new structure has a lot of files, which might look more complex at first glance. But it helps a lot when developing the individual parts of the Gardener structure. For navigating between the individual files, IDEs help a lot, and even GitHub can navigate between the references (e.g. click on KubeControllerManagerResource here) (learn more).
Please let me know what you think about the new structure and whether we should develop in this direction.
Description
A shoot resource as implemented at the moment looks quite different from the shoot CRD of the Gardener project. Please adjust the schema to be the same as in the CRD and ensure that it is complete.
Reasons
Avoid confusion: someone who knows the Gardener API should be able to use the Terraform plugin out of the box, even by copy/pasting things.
Hello,
This is Neo Liang from the Gardener SRE team; currently I'm based in Shanghai, China.
We in the Gardener SRE team use the TF provider for Gardener to build shoot clusters for some of our internal customers.
We had an internal discussion comparing the APIs/properties/configuration exposed by the Gardener API and the TF provider for Gardener, and we found that the following two items are not exposed:
This is supported in Gardener shoot builds, as in the code above, but we didn't find this information in the TF provider for Gardener.
In the sample YAML for building a shoot there is a caBundle configuration, but I didn't find it in the TF provider for Gardener.
We hope these two items can be added.
Do let me know if I didn't understand the two issues I mentioned correctly, e.g. if they are implemented somewhere I didn't find, or if I misunderstand their usage... apologies in advance ;)
I'm linking our lead @dansible (Daniel Pacrami [email protected]) here for more discussion.
Thanks!
-Neo
Description
The example in the main README lists the "seed" attribute as mandatory, yet the customer cannot know an appropriate value for it. The Gardener examples for customers also do not mention it. I guess it must be an optional attribute and should not be listed in the README.
Expected result
The example does not mention the attribute, as I don't know the value to enter.
Actual result
The attribute is listed
Steps to reproduce
Go to https://github.com/kyma-incubator/terraform-provider-gardener/blob/master/README.md
Hi,
In my test using the latest terraform-provider-gardener against the Canary landscape with the AWS provider, I found that spec.kubernetes.kubeControllerManager.nodeCIDRMaskSize cannot be set: whatever value I set, the final shoot cluster uses 24 (the default).
In my main.tf file I set this value like this:
```hcl
spec {
  kubernetes {
    kube_controller_manager {
      node_cidr_mask_size = 16
    }
  }
}
```
and I found that in the final shoot, the value of `node_cidr_mask_size` is 24.
@dansible and I looked into the code, and we noticed the place where this value is injected. I tried to debug that line but didn't find any error which might be causing this issue.
I also tried to hard-code a value in terraform-provider-gardener at that line, but it didn't help.
I tried this:
```go
if in.KubeControllerManager != nil {
	manager := make(map[string]interface{})
	// Note: the node_cidr_mask_size assignment is guarded by the
	// FeatureGates nil check, which looks unrelated to the mask size.
	if in.KubeControllerManager.FeatureGates != nil {
		// manager["node_cidr_mask_size"] = in.KubeControllerManager.NodeCIDRMaskSize
		f := 16
		manager["node_cidr_mask_size"] = &f
		// I also tried this, but it still failed to set the value:
		// manager["node_cidr_mask_size"] = f
	}
	att["kube_controller_manager"] = []interface{}{manager}
}
```
Whatever I tried, the mask size is 24 (the default value). Is this a bug in terraform-provider-gardener, e.g. a wrong format when sending the API request to the Gardener server?
Thanks!
-Neo
Hi Team,
We have created a Gardener cluster via Terraform. But when we try to add an additional worker group via Terraform, we get the error below,
but the gkr-2 cluster already exists: https://dashboard.garden.canary.k8s.ondemand.com/namespace/garden-cobalt-dev/shoots/gkr-2/
Can you please check why we're getting this error?

```
with gardener_shoot.shoot_cluster,
on shoot_cluster.tf line 8, in resource "gardener_shoot" "shoot_cluster":
 8: resource "gardener_shoot" "shoot_cluster" {
```
Description
Follow the examples and apply the plan. After cluster creation, just apply the plan again. You will see that the plan contains a lot of modifications.
Expected result
No modifications are outlined in the plan as long as nothing has changed locally or remotely.
Actual result
It looks like the attributes generated on the server side are treated as modifications.
Steps to reproduce
Call "terraform apply" twice
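Until the server-populated attributes are marked as Computed in the schema, one possible workaround is to suppress the spurious diffs with `ignore_changes` (a sketch; the exact attribute paths depend on which fields show up in your plan):

```hcl
resource "gardener_shoot" "test_cluster" {
  # metadata and spec as in the examples

  lifecycle {
    # ignore attributes that Gardener fills in on the server side
    ignore_changes = [
      spec[0].seed,
    ]
  }
}
```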
Hello,
As I see #45 is merged into the master branch, I'm trying to use the latest code to build terraform-provider-gardener and create a cluster (against the Canary landscape).
I noticed the changes from the old API version to the new API version, but there's one thing I'm not sure about, namely whether it's a bug on the Gardener side or the terraform-provider-gardener side.
In the old API version, the zones for workers are in spec.cloud.aws.zones (taking AWS as an example); in the new API version, the cloud part should not exist and the zones are moved to spec.provider.worker.zones.
I got the above conclusion from both https://github.com/gardener/gardener/blob/master/example/90-shoot.yaml and 1a28eca#diff-e49e79ab42d1a1dfe61f66e432b308c4R101
But when I'm adding a shoot cluster to the Canary landscape, for example:
```hcl
spec {
  provider {
    type = "aws"
    dynamic "worker" {
      for_each = var.gardener_shoot_workers
      content {
        minimum = 3
        maximum = 5
        name    = lookup(worker.value, "name", var.name)
        machine {
          type = lookup(worker.value, "machine_type", "t3.medium")
          image {
            name    = var.machine_image_name
            version = var.machine_image_version
          }
        }
        volume {
          type = lookup(worker.value, "volume_type", "gp2")
          size = lookup(worker.value, "volume_size", "30Gi")
        }
        zones = var.worker_zones
        # max_unavailable = lookup(worker.value, "max_unavailable", 1)
        # annotations     = lookup(worker.value, "annotations", null)
        # labels          = lookup(worker.value, "labels", null)
        # taints          = [lookup(workers.value, "taints", null)]
      }
    }
  }
}
```
and run `terraform apply`, the error looks like this:
```
module.aws_gardener_shoot.gardener_shoot.aws_cluster[0]: Creating...
Error: Shoot.core.gardener.cloud "neo-idefix" is invalid: spec.cloud.aws.zones: Required value: must specify at least one zone
```
But if I put something like this (adding spec.cloud.aws.zones):
```hcl
spec {
  # other worker-related content is also present but not shown in this block
  cloud {
    aws {
      zones = var.worker_zones
    }
  }
}
```
terraform-provider-gardener reports the following error; I believe this part is working as the terraform-provider-gardener code was designed:
```
Error: Unsupported block type

on .terraform/modules/aws_gardener_shoot/gardener-shoot.tf line 172, in resource "gardener_shoot" "aws_cluster":
172: cloud {

Blocks of type "cloud" are not expected here.
[terragrunt] 2020/02/04 12:20:48 Hit multiple errors:
exit status 1
```
So I see the conflict; from https://listserv.sap.corp/pipermail/kubernetes-users/2019-October/000692.html I see all landscapes are upgraded to the new API version.
BTW, would you mind sharing your new example.tf? Then I can check if I did something wrong in my *.tf files. The examples at https://github.com/kyma-incubator/terraform-provider-gardener/tree/master/examples/aws are still in the old API version format.
Thanks!
-Neo
Description
The AWS example Terraform file is out of date and no longer works after updating the provider to the newest Gardener API version.
Expected result
AWS example works out of the box.
Actual result
The following errors are shown:

```
Error: Missing required argument

on main.tf line 32, in resource "gardener_shoot" "test_cluster":
32: spec {

The argument "cloud_profile_name" is required, but no definition was found.

Error: Missing required argument

on main.tf line 32, in resource "gardener_shoot" "test_cluster":
32: spec {

The argument "region" is required, but no definition was found.

Error: Missing required argument

on main.tf line 32, in resource "gardener_shoot" "test_cluster":
32: spec {

The argument "secret_binding_name" is required, but no definition was found.

Error: Unsupported block type

on main.tf line 33, in resource "gardener_shoot" "test_cluster":
33: cloud {

Blocks of type "cloud" are not expected here.
```
Steps to reproduce
Try to run the aws example.
Troubleshooting
I already generated a valid AWS Terraform file with Hydroform. The example TF file should look like this:
```hcl
resource "gardener_shoot" "gardener_cluster" {
  metadata {
    name      = "${var.cluster_name}"
    namespace = "${var.namespace}"
  }
  timeouts {
    create = "${var.create_timeout}"
    update = "${var.update_timeout}"
    delete = "${var.delete_timeout}"
  }
  spec {
    cloud_profile_name  = "${var.target_profile}"
    region              = "${var.location}"
    secret_binding_name = "${var.target_secret}"
    networking {
      nodes    = "${var.networking_nodes}"
      pods     = "${var.networking_pods}"
      services = "${var.networking_services}"
      type     = "${var.networking_type}"
    }
    maintenance {
      auto_update {
        kubernetes_version    = "true"
        machine_image_version = "true"
      }
      time_window {
        begin = "030000+0000"
        end   = "040000+0000"
      }
    }
    provider {
      type = "aws"
      infrastructure_config {
        aws {
          networks {
            vpc {
              cidr = "${var.vnetcidr}"
            }
            zones {
              name     = "eu-west-3a"
              workers  = "10.250.0.0/19"
              public   = "10.250.32.0/20"
              internal = "10.250.48.0/20"
            }
          }
        }
      }
      worker {
        name            = "cpu-worker"
        zones           = "${var.zones}"
        max_surge       = "${var.worker_max_surge}"
        max_unavailable = "${var.worker_max_unavailable}"
        maximum         = "${var.worker_maximum}"
        minimum         = "${var.worker_minimum}"
        volume {
          size = "${var.disk_size}Gi"
          type = "${var.disk_type}"
        }
        machine {
          image {
            name    = "${var.machine_image_name}"
            version = "${var.machine_image_version}"
          }
          type = "${var.machine_type}"
        }
      }
    }
    kubernetes {
      allow_privileged_containers = true
      version                     = "${var.kubernetes_version}"
    }
  }
}
```
Description
Currently kyma-incubator/terraform-provider-gardener consumes the old and deprecated garden.sapcloud.io/v1beta1 API group, which will soon reach end of life.
Since gardener/[email protected] there is core.gardener.cloud/v1alpha1, and since gardener/[email protected] there is core.gardener.cloud/v1beta1.
Improve formatting and wording in the Terraform Provider for Gardener documentation.
Make the documentation unified.
Description
I have the snippet below in the current shoot.yaml in the Gardener dashboard, which includes DNS and SSL management via Gardener.
```yaml
addons:
  kubernetesDashboard:
    enabled: true
    authenticationMode: basic
  nginxIngress:
    enabled: false
    externalTrafficPolicy: Cluster
cloudProfileName: aws
dns:
  domain: fmi6fdoux2.PROJ_NAME.shoot.canary.k8s-hana.ondemand.com
  providers:
  - domains:
      include:
      - example.ondemand.com
      - example2.ondemand.com
    secretName: projectname-staging-gardener
    type: aws-route53
extensions:
- type: shoot-dns-service
- type: shoot-cert-service
  providerConfig:
    apiVersion: service.cert.extensions.gardener.cloud/v1alpha1
    issuers:
    - email: [email protected]
      name: ondemand.com
      server: 'https://acme-v02.api.letsencrypt.org/directory'
```
Reasons
It makes managing DNS and SSL easier via the Gardener shoot.yaml.
How would I extend the spec to include such a feature in the provider, and which files do I need to edit to make it common across all cloud providers?
Attachments
Hello,
I'm using the latest code of terraform-provider-gardener to build shoots against AWS.
In https://github.com/kyma-incubator/terraform-provider-gardener/blob/master/shoot/schema_shoot.go#L203 the `infrastructure_config` is only set for Azure, but other cloud providers also need this config.
See https://github.com/gardener/gardener/blob/master/example/90-shoot.yaml#L13-L20
Currently I'm unable to build a shoot against AWS in the Canary landscape due to the missing `infrastructure_config`.
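For comparison, an AWS `infrastructure_config` mirroring the linked 90-shoot.yaml example would presumably look like this, assuming the schema follows the existing Azure pattern:

```hcl
infrastructure_config {
  aws {
    networks {
      vpc {
        cidr = "10.250.0.0/16"
      }
      zones {
        name     = "eu-west-1a"
        workers  = "10.250.0.0/19"
        public   = "10.250.32.0/20"
        internal = "10.250.48.0/20"
      }
    }
  }
}
```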
Thanks!
During my use of terraform-provider-gardener, I found that in spec → kubernetes → kube_api_server, the oidc_config part is implemented in the code, but not exposed in schema_shoot.go.
Description
Currently the support for controlplane_config is missing for AWS. We should add this for feature completeness, although it is possible to create an AWS Gardener cluster without a control plane config.
Description
At the moment the provider does not have any automatic unit or acceptance tests. Terraform has a built-in testing framework, which should be adopted to keep the provider stable in the future. As a first step, unit tests should be sufficient. But in the end, automatic acceptance tests, which create real infrastructure during the tests, should prevent regressions.
When the automatic tests are executed, the code coverage should be measured and checked. Go has built-in support for this (see here). Unfortunately the current package structure would not allow an accurate coverage result, because the Go tools do not work well with multiple packages (the issue is described here). Therefore the tool mentioned in the article needs to be used, or the package structure needs to be adapted. In the end, the CI infrastructure should not allow new code which is not covered by a test.
Description
After #34 do the cleanup for scripts and refactor the code.
Description
Currently you must specify a path to a kubeconfig file, which is good enough but sometimes inconvenient for playing around locally. Please support a default behaviour pointing to the kubeconfig from your current context, as is possible with the config_path attribute in https://www.terraform.io/docs/providers/kubernetes/
Reasons
Better developer experience
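A sketch of the desired behaviour, assuming the provider adopted a `config_path` attribute like the one in the Kubernetes provider (the attribute name is borrowed from there, not from the current gardener provider schema):

```hcl
provider "gardener" {
  # Proposed: when omitted, fall back to $KUBECONFIG or ~/.kube/config,
  # like config_path in the hashicorp/kubernetes provider.
  config_path = "~/.kube/config"
}
```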