metal3-io / ip-address-manager
IP address Manager for Cluster API Provider Metal3
License: Apache License 2.0
CAPI v1.5.0-beta.0 has been released and is ready for testing.
Looking forward to your feedback before the CAPI v1.5.0 release on 25th July 2023!
| Release | Expected Date |
| --- | --- |
| v1.5.0-beta.x | Tuesday 5th July 2023 |
| release-1.5 branch created (Begin [Code Freeze]) | Tuesday 11th July 2023 |
| v1.5.0-rc.0 released | Tuesday 11th July 2023 |
| release-1.5 jobs created | Tuesday 11th July 2023 |
| v1.5.0-rc.x released | Tuesday 18th July 2023 |
| v1.5.0 released | Tuesday 25th July 2023 |
What steps did you take and what happened:
When integrating with CAPI 0.3.7+, controller-runtime 0.6.0 causes some conflicts:
$ make manager
go build -o bin/manager .
# sigs.k8s.io/cluster-api/util
../../go/pkg/mod/sigs.k8s.io/[email protected]/util/util.go:422:48: undefined: controllerutil.Object
../../go/pkg/mod/sigs.k8s.io/[email protected]/util/util.go:433:46: undefined: controllerutil.Object
../../go/pkg/mod/sigs.k8s.io/[email protected]/util/util.go:457:47: undefined: controllerutil.Object
Also, client-go is at 1.19-alpha2 and cannot be upgraded to 1.19 because that brings in a new version of klogr.
This effectively blocks integration with the latest CAPI. We ended up downgrading controller-runtime and client-go to match CAPI 0.3.9: https://github.com/spectrocloud/ip-address-manager/pull/1/files.
Is there any specific reason to use controller-runtime 0.6.0 instead of controller-runtime 0.5.*? Do you think it makes sense to downgrade?
What did you expect to happen:
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version (use kubectl version):
/kind bug
Release branch release-1.7 uses Golang 1.21.x. Golang 1.21 support ends in August 2024, when Golang 1.23 is released.
IPAM 1.7 will be supported for quite a while still, so we need to bump it to Go 1.22 before the next patch release.
User Story
As an operator, I would like to use specific nameservers for IPs allocated from an IPPool.
Detailed Description
Correct nameservers matching the IP/network need to be set for DNS resolution to work.
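A minimal sketch of what this could look like on the IPPool CR. The dnsServers field here is illustrative, not a confirmed part of the current API:

```yaml
apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: pool1
spec:
  pools:
    - start: 192.168.0.10
      end: 192.168.0.30
      prefix: 24
      # Hypothetical: nameservers handed out together with addresses
      # allocated from this range, so DNS matches the network.
      dnsServers:
        - 192.168.0.2
        - 8.8.8.8
```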
/kind feature
CAPI v1.3.0-beta.0 is live now, and this issue tracks the adoption and testing of the new beta release in CAPM3.
Slack: https://kubernetes.slack.com/archives/C8TSNPY4T/p1667414755279279
Release: https://github.com/kubernetes-sigs/cluster-api/releases/tag/v1.3.0-beta.0
Migration guide: https://github.com/kubernetes-sigs/cluster-api/blob/v1.3.0-beta.0/docs/book/src/developer/providers/v1.2-to-v1.3.md
/kind feature
What steps did you take and what happened:
Tried to replay the example as given in the README (Line 35 in 2d8ba7f).
Used a fresh k3s 1.25 on a virtual Ubuntu 22.04 as base.
The git commit was 2d8ba7f from a recent state of branch main.
(I tried tags v1.4.0 and api/v1.4.0 but both broke apart even earlier than described here.)
Issues:
In Line 50 in 2d8ba7f it reads metal3-ipam-controller-manager, but it must read ipam-controller-manager.
The step make run fails at go run ./main.go with:
I0517 10:08:42.475992 21439 listener.go:44] "controller-runtime/metrics: Metrics server is starting to listen" addr="localhost:8080"
I0517 10:08:42.476725 21439 webhook.go:124] "controller-runtime/builder: Registering a mutating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPPool" path="/mutate-ipam-metal3-io-v1alpha1-ippool"
I0517 10:08:42.476838 21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/mutate-ipam-metal3-io-v1alpha1-ippool"
I0517 10:08:42.476970 21439 webhook.go:153] "controller-runtime/builder: Registering a validating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPPool" path="/validate-ipam-metal3-io-v1alpha1-ippool"
I0517 10:08:42.477018 21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/validate-ipam-metal3-io-v1alpha1-ippool"
I0517 10:08:42.477133 21439 webhook.go:124] "controller-runtime/builder: Registering a mutating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPAddress" path="/mutate-ipam-metal3-io-v1alpha1-ipaddress"
I0517 10:08:42.477183 21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/mutate-ipam-metal3-io-v1alpha1-ipaddress"
I0517 10:08:42.477296 21439 webhook.go:153] "controller-runtime/builder: Registering a validating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPAddress" path="/validate-ipam-metal3-io-v1alpha1-ipaddress"
I0517 10:08:42.477354 21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/validate-ipam-metal3-io-v1alpha1-ipaddress"
I0517 10:08:42.477467 21439 webhook.go:124] "controller-runtime/builder: Registering a mutating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPClaim" path="/mutate-ipam-metal3-io-v1alpha1-ipclaim"
I0517 10:08:42.477544 21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/mutate-ipam-metal3-io-v1alpha1-ipclaim"
I0517 10:08:42.477682 21439 webhook.go:153] "controller-runtime/builder: Registering a validating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPClaim" path="/validate-ipam-metal3-io-v1alpha1-ipclaim"
I0517 10:08:42.477735 21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/validate-ipam-metal3-io-v1alpha1-ipclaim"
I0517 10:08:42.477885 21439 main.go:137] "setup: starting manager"
I0517 10:08:42.477948 21439 server.go:217] "controller-runtime/webhook/webhooks: Starting webhook server"
I0517 10:08:42.478061 21439 internal.go:369] "Starting server" path="/metrics" kind="metrics" addr="127.0.0.1:8080"
I0517 10:08:42.478142 21439 internal.go:369] "Starting server" kind="health probe" addr="[::]:9440"
I0517 10:08:42.478223 21439 internal.go:581] "Stopping and waiting for non leader election runnables"
I0517 10:08:42.478247 21439 internal.go:585] "Stopping and waiting for leader election runnables"
I0517 10:08:42.478339 21439 controller.go:186] "Starting EventSource" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool" source="kind source: *v1alpha1.IPPool"
I0517 10:08:42.478361 21439 controller.go:186] "Starting EventSource" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool" source="kind source: *v1alpha1.IPClaim"
I0517 10:08:42.478374 21439 controller.go:194] "Starting Controller" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool"
E0517 10:08:42.479060 21439 source.go:148] "controller-runtime/source: failed to get informer from cache" err="Timeout: failed waiting for *v1alpha1.IPClaim Informer to sync"
I0517 10:08:42.479140 21439 controller.go:228] "Starting workers" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool" worker count=10
I0517 10:08:42.479162 21439 controller.go:248] "Shutdown signal received, waiting for all workers to finish" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool"
E0517 10:08:42.479429 21439 source.go:148] "controller-runtime/source: failed to get informer from cache" err="Timeout: failed waiting for *v1alpha1.IPPool Informer to sync"
I0517 10:08:42.479483 21439 controller.go:250] "All workers finished" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool"
I0517 10:08:42.479505 21439 internal.go:591] "Stopping and waiting for caches"
I0517 10:08:42.479728 21439 internal.go:595] "Stopping and waiting for webhooks"
I0517 10:08:42.479833 21439 internal.go:599] "Wait completed, proceeding to shutdown the manager"
E0517 10:08:42.479904 21439 main.go:139] "setup: problem running manager" err="open /tmp/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"
What did you expect to happen:
The example from the README works, or the README is more explicit about the environment the example is supposed to work within.
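For reference, the final error in the log is the usual symptom of running the manager (and its webhook server) outside a cluster without serving certificates. A local workaround, assuming controller-runtime's default cert directory, is to drop a self-signed pair in place; this is a sketch for local development, not an endorsed setup:

```shell
# Create the directory controller-runtime's webhook server reads by default,
# then generate a throwaway self-signed certificate so the manager can start.
mkdir -p /tmp/k8s-webhook-server/serving-certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
  -out /tmp/k8s-webhook-server/serving-certs/tls.crt \
  -days 7 -subj "/CN=localhost"
```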
Anything else you would like to add:
Environment:
- Kubernetes version (use kubectl version): v1.25.9+k3s1
/kind bug
User Story
As a developer/user I would like to get the total IP count and available IP count of an IPPool CR by checking its status field.
Detailed Description
[A clear and concise description of what you want to happen.]
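A hypothetical sketch of what this could expose (the field names totalIPs and availableIPs are illustrative, not part of the current API):

```yaml
apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: pool1
spec:
  pools:
    - start: 192.168.0.10
      end: 192.168.0.29
      prefix: 24
status:
  lastUpdated: "2023-05-17T10:08:42Z"
  # Hypothetical counters requested by this issue:
  totalIPs: 20
  availableIPs: 18
```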
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
/kind feature
User Story
As a user I would like to be able to use Netbox to manage my IP addresses allocation to integrate with my current solutions.
Detailed Description
Some investigation is needed to figure out if it would be possible to use Netbox as a backend for the IP address generation. If possible, a design proposal should be submitted to propose the changes needed to do that. Then implementation could follow. This is a high level issue to track work in this area.
Anything else you would like to add:
I believe that Netbox is interesting to focus on since it is an open source project with an active community.
/kind feature
What steps did you take and what happened:
Cloned the repo and ran: make kind-create
Output:
$make kind-create
./hack/kind_with_registry.sh
bash: ./hack/kind_with_registry.sh: No such file or directory
make: *** [Makefile:279: kind-create] Error 127
We have some scripts under the root hack folder, some of which are used from the Makefile. The problem is with the kind-create Makefile target, which expects to find the kind_with_registry.sh script under the root hack directory. However, it does not exist there; it exists here instead: https://github.com/metal3-io/ip-address-manager/tree/main/hack/hack
What did you expect to happen:
Makefile target to find the script properly
Anything else you would like to add:
We have to move kind_with_registry.sh
to root hack
folder as Makefile expects to find it, and get rid of the extra hack
folder inside the root hack
folder π
The fix should land in the main and needs to be backported to release branches (release-1.1/2)
Environment:
- Kubernetes version (use kubectl version): N/A
/kind bug
Once the community agrees on the naming, we need to change our default development branch to main, and check that any tools or references to it are changed as well.
CAPI and a couple of providers' repos have already done this (for example CAPA and CAPZ).
Partially addresses this.
/kind feature
This issue is reserved for the KubeCon EU 2024 contrib-fest. Please do not work on it before 2024-03-20.
Add a make target for running the unit tests and checking the test coverage. There is a target like this in BMO that you can use for inspiration.
The goal is to be able to run make unit-cover. This should run all unit tests and print test coverage results.
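A minimal sketch of what such a target could look like; the flags and file names are assumptions, loosely following BMO's pattern:

```makefile
.PHONY: unit unit-cover
unit: ## Run unit tests and write a coverage profile
	go test ./... -coverprofile=cover.out

unit-cover: unit ## Run unit tests and print per-function coverage
	go tool cover -func=cover.out
```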
User Story
As an operator I would like to bump CAPI to v1.5.0.
So providers that depend on ip-address-manager can use the new features of CAPI v1.5.0.
ip-address-manager v1.4.2 and CAPI v1.5.0 will encounter the following error:
/kind feature
User Story
As a user, I would like to be able to use this project without any dependency on having Cluster API installed.
Detailed Description
The code should be modified to not rely on the Cluster API Cluster object and CRD, but to use unstructured objects instead, so that there is no requirement to have Cluster API CRDs deployed for the controller to start properly.
Anything else you would like to add:
The cluster is already not compulsory.
/kind feature
User Story
As a developer/user/operator I would like to use pre-built IPAM images on multiple architectures because it is convenient (compared to building everything yourself).
Detailed Description
We currently only publish amd64 container images, but there are other popular architectures. Go has good support for building for other architectures also, so there is no need for special hardware for testing this.
I suggest we look at how CAPI/CAPM3 has structured the Makefile with a docker-build-all target for building all supported architectures. This should be simple to implement for us as well.
Since we do SHA pinning in the Dockerfile, we would also need to override those pins. (Update: this is not needed; BASE_IMAGE already points to a multi-arch manifest.)
Once we can build these images, it should be fairly trivial to also make CI build and publish them together with a multi-arch manifest for easy consumption.
Anything else you would like to add:
/kind feature
What steps did you take and what happened:
[A clear and concise description on how to REPRODUCE the bug.]
What did you expect to happen:
Anything else you would like to add:
Looks like it doesn't handle the delete request: https://github.com/metal3-io/ip-address-manager/blob/main/api/v1alpha1/ippool_webhook.go#L137
Environment:
- Kubernetes version (use kubectl version):
/kind bug
What steps did you take and what happened:
Create an IPPool with a single IP
Create 2 IPClaims
As expected, the IPAM controller errors out with "Exhausted IP Pools". However, the getIndexes() function updates the LastUpdatedTime in the IPPool status. This triggers a new reconcile, which errors out and again updates the status timestamp.
As a result, the reconcile loop triggers every second and errors out every time.
I0817 22:30:20.743083 1 ippool_manager.go:108] controllers/IPPool/IPPool-controller "msg"="Fetching IPAddress objects" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
I0817 22:30:20.743193 1 ippool_manager.go:293] controllers/IPPool/IPPool-controller "msg"="Getting address" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"} "Claim"="test1-cp-14694-5kzmm-0"
E0817 22:30:20.747409 1 controller.go:258] controller-runtime/controller "msg"="Reconciler error" "error"="Failed to create the missing data: Exhausted IP Pools" "controller"="ippool" "request"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
I0817 22:30:21.747737 1 ippool_manager.go:108] controllers/IPPool/IPPool-controller "msg"="Fetching IPAddress objects" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
I0817 22:30:21.747842 1 ippool_manager.go:293] controllers/IPPool/IPPool-controller "msg"="Getting address" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"} "Claim"="test1-cp-14694-5kzmm-0"
E0817 22:30:21.751692 1 controller.go:258] controller-runtime/controller "msg"="Reconciler error" "error"="Failed to create the missing data: Exhausted IP Pools" "controller"="ippool" "request"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
I0817 22:30:22.751999 1 ippool_manager.go:108] controllers/IPPool/IPPool-controller "msg"="Fetching IPAddress objects" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
I0817 22:30:22.752098 1 ippool_manager.go:293] controllers/IPPool/IPPool-controller "msg"="Getting address" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"} "Claim"="test1-cp-14694-5kzmm-0"
E0817 22:30:22.755869 1 controller.go:258] controller-runtime/controller "msg"="Reconciler error" "error"="Failed to create the missing data: Exhausted IP Pools" "controller"="ippool" "request"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
I0817 22:30:23.756166 1 ippool_manager.go:108] controllers/IPPool/IPPool-controller "msg"="Fetching IPAddress objects" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
I0817 22:30:23.756266 1 ippool_manager.go:293] controllers/IPPool/IPPool-controller "msg"="Getting address" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"} "Claim"="test1-cp-14694-5kzmm-0"
What did you expect to happen:
The IPAM controller errors out with "Exhausted IP Pools" and waits for the next reconcile.
Anything else you would like to add:
Environment:
/kind bug
User Story
As a user I would like to be able to use infoblox to manage my IP addresses allocation to integrate with my current solutions.
Detailed Description
Some investigation is needed to figure out if it would be possible to use infoblox as a backend for the IP address generation. If possible, a design proposal should be submitted to propose the changes needed to do that. Then implementation could follow. This is a high level issue to track work in this area.
/kind feature
Not sure if this is expected behaviour, but we see that the IPPool gets deleted once all IPClaims are deleted. Before the first IPClaim exists, the IPAM seems to wait for that event:
controllers/IPPool/IPPool-controller "msg"="Error fetching cluster. It might not exist yet, Requeuing" "metal3-ippool"={"Namespace":"default","Name":"some-ippool-name"}
Environment:
- Kubernetes version (use kubectl version): 1.24.4
/kind bug
Release branch release-1.6 uses Golang 1.21.x. Golang 1.21 support ends in August 2024, when Golang 1.23 is released.
IPAM 1.6 will be supported for some time still, so we need to bump it to Go 1.22 before the next patch release.
User Story
As a developer, I would like IPAM to use OpenStack Neutron for network automation.
Detailed Description
Running Neutron independently, like metal3-ironic, would be nice.
Anything else you would like to add:
In the future, if users want it, a native cloud could be built on top by adding several OpenStack Neutron functions.
/kind feature
What steps did you take and what happened:
The kubectl edit command returns the following error after a 10-second timeout:
error: ippools.ipam.metal3.io "pool4" could not be patched: Internal error occurred: failed calling webhook "validation.ippool.ipam.metal3.io": failed to call webhook: Post "https://ipam-webhook-service.capm3-system.svc:443/validate-ipam-metal3-io-v1alpha1-ippool?timeout=10s": context deadline exceeded
The same occurs if the preallocation claim is out of bounds of the large IP pool subnet.
What did you expect to happen:
CR update should complete without an error
Anything else you would like to add:
Environment:
- Kubernetes version (use kubectl version): 1.28.3
/kind bug
Add a make lint-fix target that runs golangci-lint with the --fix flag to automatically fix linting errors where possible.
Check how CAPI has done it: it reuses the lint target and adds the flag.
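A sketch of how this could look, mirroring CAPI's approach of reusing the lint target via a target-specific variable (the variable names are assumptions):

```makefile
GOLANGCI_LINT ?= golangci-lint

.PHONY: lint lint-fix
lint: ## Lint the codebase
	$(GOLANGCI_LINT) run -v $(GOLANGCI_LINT_EXTRA_ARGS)

lint-fix: GOLANGCI_LINT_EXTRA_ARGS := --fix
lint-fix: lint ## Lint the codebase and auto-fix findings where possible
```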
What steps did you take and what happened:
Hi team, I can't find any use case for this project. Could you share an example of how to use it?
What did you expect to happen:
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version (use kubectl version):
/kind bug