
ip-address-manager's Introduction

Metal3 IP Address Manager for Cluster API Provider Metal3


This repository contains a controller to manage static IP address allocations in Cluster API Provider Metal3.

For more information about this controller and related repositories, see metal3.io.

Compatibility with Cluster API

IPAM version      | CAPM3 version    | Cluster API version | IPAM Release
------------------|------------------|---------------------|-------------
v1alpha1 (v1.1.X) | v1beta1 (v1.1.X) | v1beta1 (v1.1.X)    | v1.1.X
v1alpha1 (v1.2.X) | v1beta1 (v1.2.X) | v1beta1 (v1.2.X)    | v1.2.X
v1alpha1 (v1.3.X) | v1beta1 (v1.3.X) | v1beta1 (v1.3.X)    | v1.3.X
v1alpha1 (v1.4.X) | v1beta1 (v1.4.X) | v1beta1 (v1.4.X)    | v1.4.X
v1alpha1 (v1.5.X) | v1beta1 (v1.5.X) | v1beta1 (v1.5.X)    | v1.5.X

Development Environment

See metal3-dev-env for an end-to-end development and test environment for cluster-api-provider-metal3 and baremetal-operator.

API

See the API Documentation for details about the objects used with this controller. You can also see the cluster deployment workflow for the outline of the deployment process.

Deployment and examples

Deploy IPAM

Deploys the IPAM CRDs and the IPAM controllers:

    make deploy

Run locally

Runs the IPAM controller locally:

    kubectl scale -n capm3-system \
      deployment.v1.apps/metal3-ipam-controller-manager --replicas 0
    make run

Deploy an example pool

    make deploy-examples

Delete the example pool

    make delete-examples
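
For reference, the kind of objects these examples create look roughly like the sketch below. The field names follow the v1alpha1 manifests shown in the issues further down this page; the concrete names, addresses, and the IPClaim pool reference are illustrative, so check the API documentation for the authoritative schema.

    apiVersion: ipam.metal3.io/v1alpha1
    kind: IPPool
    metadata:
      name: pool1
      namespace: capm3-system
    spec:
      clusterName: my-cluster          # illustrative
      namePrefix: pool1
      pools:
      - start: 192.168.0.10
        end: 192.168.0.50
        gateway: 192.168.0.1
        prefix: 24
    ---
    apiVersion: ipam.metal3.io/v1alpha1
    kind: IPClaim
    metadata:
      name: claim1
      namespace: capm3-system
    spec:
      pool:
        name: pool1
        namespace: capm3-system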

ip-address-manager's People

Contributors

abhinavnagaraj, adilghaffardev, as20203, dependabot[bot], dhellmann, fmuyassarov, furkatgofurov7, jgu17, jkremser, jwstein3400, jzhoucliqr, kashifest, lekaf974, lentzi90, maelk, maxinjian, mboukhalfa, metal3-io-bot, mquhuy, nymanrobin, peppi-lotta, rozzii, smoshiur1237, sunnatillo, tico88612, tuminoid


ip-address-manager's Issues

Use unstructured to fetch the cluster API cluster objects

User Story

As a user I would like to be able to use this project without any dependency on installing Cluster API.

Detailed Description
The code should be modified to not rely on the Cluster API Cluster object and CRD, but to use an unstructured object instead, so that the Cluster API CRDs do not need to be deployed for the controller to start properly.

Anything else you would like to add:
The Cluster object is already not compulsory.
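
A minimal sketch of what this could look like, using an unstructured object instead of the typed Cluster API Cluster (the group/version and wiring are assumptions, not the project's actual code):

    package controllers

    import (
        "context"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // getClusterUnstructured fetches the Cluster API Cluster object without
    // importing the Cluster API Go types, so the CAPI CRDs and packages are
    // not a hard dependency for the controller to start.
    func getClusterUnstructured(ctx context.Context, c client.Client, name, namespace string) (*unstructured.Unstructured, error) {
        cluster := &unstructured.Unstructured{}
        cluster.SetGroupVersionKind(schema.GroupVersionKind{
            Group:   "cluster.x-k8s.io",
            Version: "v1beta1", // assumption: the contract version in use
            Kind:    "Cluster",
        })
        if err := c.Get(ctx, client.ObjectKey{Name: name, Namespace: namespace}, cluster); err != nil {
            return nil, err
        }
        return cluster, nil
    }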

/kind feature

The webhook should reject IPPool deletion if any IP has been allocated

What steps did you take and what happened:
[A clear and concise description on how to REPRODUCE the bug.]

What did you expect to happen:

Anything else you would like to add:
Looks like it doesn't handle the delete request: https://github.com/metal3-io/ip-address-manager/blob/main/api/v1alpha1/ippool_webhook.go#L137
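
A sketch of the kind of check the delete path could perform. This is not the project's actual code: the Status.Allocations field name and the ValidateDelete signature (which varies across controller-runtime versions) are assumptions here.

    package v1alpha1

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
    )

    // ValidateDelete rejects deletion while any address from the pool is still
    // allocated. Sketch only: assumes the pool status carries an allocations map.
    func (c *IPPool) ValidateDelete() error {
        if len(c.Status.Allocations) > 0 {
            return apierrors.NewBadRequest(fmt.Sprintf(
                "IPPool %s/%s still has %d allocated address(es); delete the corresponding IPClaims first",
                c.Namespace, c.Name, len(c.Status.Allocations),
            ))
        }
        return nil
    }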

Environment:

  • Cluster-api version:
  • CAPM3 version:
  • IPAM version:
  • Minikube version:
  • environment (metal3-dev-env or other):
  • Kubernetes version: (use kubectl version):

/kind bug

Continuous reconcile-error-reconcile cycle with Exhausted IP Pools

What steps did you take and what happened:
Create an IPPool with a single IP address.
Create two IPClaims.
As expected, the IPAM controller errors out with "Exhausted IP Pools". However, the getIndexes() function updates LastUpdatedTime in the IPPool status, which triggers a new reconcile; that reconcile errors out and updates the status timestamp again.
As a result, the reconcile loop fires every second and keeps erroring out (one possible mitigation is sketched under "Anything else" below).

    I0817 22:30:20.743083       1 ippool_manager.go:108] controllers/IPPool/IPPool-controller "msg"="Fetching IPAddress objects" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
    I0817 22:30:20.743193       1 ippool_manager.go:293] controllers/IPPool/IPPool-controller "msg"="Getting address" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"} "Claim"="test1-cp-14694-5kzmm-0"
    E0817 22:30:20.747409       1 controller.go:258] controller-runtime/controller "msg"="Reconciler error" "error"="Failed to create the missing data: Exhausted IP Pools" "controller"="ippool" "request"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
    I0817 22:30:21.747737       1 ippool_manager.go:108] controllers/IPPool/IPPool-controller "msg"="Fetching IPAddress objects" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
    I0817 22:30:21.747842       1 ippool_manager.go:293] controllers/IPPool/IPPool-controller "msg"="Getting address" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"} "Claim"="test1-cp-14694-5kzmm-0"
    E0817 22:30:21.751692       1 controller.go:258] controller-runtime/controller "msg"="Reconciler error" "error"="Failed to create the missing data: Exhausted IP Pools" "controller"="ippool" "request"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
    I0817 22:30:22.751999       1 ippool_manager.go:108] controllers/IPPool/IPPool-controller "msg"="Fetching IPAddress objects" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
    I0817 22:30:22.752098       1 ippool_manager.go:293] controllers/IPPool/IPPool-controller "msg"="Getting address" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"} "Claim"="test1-cp-14694-5kzmm-0"
    E0817 22:30:22.755869       1 controller.go:258] controller-runtime/controller "msg"="Reconciler error" "error"="Failed to create the missing data: Exhausted IP Pools" "controller"="ippool" "request"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
    I0817 22:30:23.756166       1 ippool_manager.go:108] controllers/IPPool/IPPool-controller "msg"="Fetching IPAddress objects" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"}
    I0817 22:30:23.756266       1 ippool_manager.go:293] controllers/IPPool/IPPool-controller "msg"="Getting address" "metal3-ippool"={"Namespace":"cluster-123","Name":"ip-pool-pool1"} "Claim"="test1-cp-14694-5kzmm-0"

What did you expect to happen:
The IPAM controller errors out with "Exhausted IP Pools" and waits for the next reconcile.

Anything else you would like to add:
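One possible mitigation (a sketch only, not necessarily the fix the maintainers would choose) is to stop status-only updates, such as the LastUpdated timestamp written by getIndexes(), from immediately queuing another reconcile, for example with a generation-changed predicate on the IPPool watch:

    package controllers

    import (
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/builder"
        "sigs.k8s.io/controller-runtime/pkg/predicate"
        "sigs.k8s.io/controller-runtime/pkg/reconcile"

        ipamv1 "github.com/metal3-io/ip-address-manager/api/v1alpha1"
    )

    // setupIPPoolController wires the IPPool controller so that updates which
    // only touch status (metadata.generation unchanged) do not retrigger the
    // reconcile loop every second.
    func setupIPPoolController(mgr ctrl.Manager, r reconcile.Reconciler) error {
        return ctrl.NewControllerManagedBy(mgr).
            For(&ipamv1.IPPool{}, builder.WithPredicates(predicate.GenerationChangedPredicate{})).
            Complete(r)
    }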

Environment:

  • Cluster-api version: v0.3.8
  • IPAM version: v0.0.4
  • Kubernetes version: 1.17.3

/kind bug

How can I use this project?

What steps did you take and what happened:
Hi team, I can't find any use case for this project. Could you share an example of how to use it?

What did you expect to happen:

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api version:
  • CAPM3 version:
  • IPAM version:
  • Minikube version:
  • environment (metal3-dev-env or other):
  • Kubernetes version: (use kubectl version):

/kind bug

New feature

User Story

As a developer I would like IPAM to use OpenStack Neutron for network automation.

Detailed Description

Running Neutron independently, like metal3-ironic, would be nice.

Anything else you would like to add:

In the future, if users want it, a native cloud could be built by adding several OpenStack Neutron functions.

/kind feature

IPPool webhook times out on large IPv6 pools when adding a preAllocation

What steps did you take and what happened:

  1. Create an IPPool CR with an IPv6 subnet that has a large prefix:

         apiVersion: ipam.metal3.io/v1alpha1
         kind: IPPool
         metadata:
           labels:
             cluster.x-k8s.io/cluster-name: ipam
             cluster.x-k8s.io/provider: infrastructure-metal3
           name: pool4
           namespace: capm3-system
         spec:
           clusterName: ipam
           gateway: fda0:d59c:da0a:624::1
           namePrefix: pool4
           pools:
           - gateway: fda0:d59c:da0a:624::1
             start: fda0:d59c:da0a:624::8
             subnet: fda0:d59c:da0a:624::/96
           prefix: 96

  2. Update the CR with a preAllocations entry that claims an IP address at the end of the CIDR range:

         apiVersion: ipam.metal3.io/v1alpha1
         kind: IPPool
         metadata:
           labels:
             cluster.x-k8s.io/cluster-name: ipam
             cluster.x-k8s.io/provider: infrastructure-metal3
           name: pool4
           namespace: capm3-system
         spec:
           clusterName: ipam
           gateway: fda0:d59c:da0a:624::1
           namePrefix: pool4
           pools:
           - subnet: fdad:d59d:da0a:624::/126
           - gateway: fda0:d59c:da0a:624::1
             start: fda0:d59c:da0a:624::8
             subnet: fda0:d59c:da0a:624::/96
           preAllocations:
             claim1: fda0:d59c:da0a:624::ffff:ffff
           prefix: 96

The kubectl edit command returns the following error after the 10-second webhook timeout:
error: ippools.ipam.metal3.io "pool4" could not be patched: Internal error occurred: failed calling webhook "validation.ippool.ipam.metal3.io": failed to call webhook: Post "https://ipam-webhook-service.capm3-system.svc:443/validate-ipam-metal3-io-v1alpha1-ippool?timeout=10s": context deadline exceeded

The same occurs if the preAllocation claim is out of bounds of the large IP pool subnet.

What did you expect to happen:
CR update should complete without an error

Anything else you would like to add:
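The timeout is consistent with the validation walking the IPv6 range address by address. A constant-time containment check avoids that; here is a minimal sketch (not the project's actual validation code) using net/netip:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // inPool reports whether addr lies inside the pool's subnet and at or after
    // the configured start address, without iterating the (huge) IPv6 range.
    func inPool(subnet, start, addr string) (bool, error) {
        p, err := netip.ParsePrefix(subnet)
        if err != nil {
            return false, err
        }
        s, err := netip.ParseAddr(start)
        if err != nil {
            return false, err
        }
        a, err := netip.ParseAddr(addr)
        if err != nil {
            return false, err
        }
        return p.Contains(a) && a.Compare(s) >= 0, nil
    }

    func main() {
        ok, err := inPool("fda0:d59c:da0a:624::/96", "fda0:d59c:da0a:624::8", "fda0:d59c:da0a:624::ffff:ffff")
        fmt.Println(ok, err) // true <nil>
    }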

Environment:

  • Cluster-api version: 1.5.2
  • CAPM3 version: 1.5.1
  • IPAM version: 1.5.1
  • Minikube version: n/a
  • environment (metal3-dev-env or other):
  • Kubernetes version: (use kubectl version): 1.28.3

/kind bug

Make target 'kind-create' is broken

What steps did you take and what happened:
Cloned the repo and ran: make kind-create

Output:

    $ make kind-create
    ./hack/kind_with_registry.sh
    bash: ./hack/kind_with_registry.sh: No such file or directory
    make: *** [Makefile:279: kind-create] Error 127

We have some scripts under the root hack folder, some of which are used from the Makefile. The problem is that the kind-create Makefile target expects to find the kind_with_registry.sh script under the root hack directory. However, it does not exist there; it lives in https://github.com/metal3-io/ip-address-manager/tree/main/hack/hack

What did you expect to happen:
The Makefile target finds the script properly.

Anything else you would like to add:
We have to move kind_with_registry.sh to the root hack folder, where the Makefile expects to find it, and get rid of the extra hack folder inside the root hack folder 😀

The fix should land in main and needs to be backported to the release branches (release-1.1/2).
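
For reference, the move could be as simple as the following (assuming the duplicated folder layout linked above; the rmdir just cleans up a leftover empty directory, if any):

    git mv hack/hack/kind_with_registry.sh hack/
    rmdir hack/hack 2>/dev/null || true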

Environment:

  • Cluster-api version: latest
  • CAPM3 version: latest
  • IPAM version: latest
  • Minikube version: N/A
  • environment (metal3-dev-env or other): N/A
  • Kubernetes version: (use kubectl version): N/A

/kind bug

Support building for multiple architectures

User Story

As a developer/user/operator I would like to use pre-built IPAM images on multiple architectures because it is convenient (compared to building everything yourself).

Detailed Description
We currently only publish amd64 container images, but there are other popular architectures. Go has good support for building for other architectures also, so there is no need for special hardware for testing this.
I suggest we look at how CAPI/CAPM3 has structured the Makefile with a docker-build-all target for building all supported architectures. This should be simple to implement for us as well.
Since we do SHA pinning in the Dockerfile, we would also need to override these. Update: this is not needed; the BASE_IMAGE already points to a multi-arch manifest.

Once we can build these images, it should be fairly trivial to also make CI build and publish them together with a multi-arch manifest for easy consumption.

Anything else you would like to add:
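As a rough illustration of the direction (the tag and platform list are placeholders; the real target would live in the Makefile alongside the existing docker-build target):

    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      --tag quay.io/metal3-io/ip-address-manager:dev \
      --push \
      .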

/kind feature

Netbox integration

User Story

As a user I would like to be able to use Netbox to manage my IP address allocation so that it integrates with my current solutions.

Detailed Description

Some investigation is needed to figure out if it would be possible to use Netbox as a backend for the IP address generation. If possible, a design proposal should be submitted to propose the changes needed to do that. Then implementation could follow. This is a high level issue to track work in this area.

Anything else you would like to add:

I believe that Netbox is interesting to focus on since it is an open source project with an active community.

/kind feature

Change default branch to "main"

Once the community agrees on the naming, we need to change our default development branch to main, and check that any tools or references to it are changed as well.

CAPI and a couple of providers' repos already did this (CAPA and CAPZ, for example).

Partially addresses this.

/kind feature

add nameserver info in IPPool

User Story

As an operator, I would like to use specific nameservers for IPs allocated from an IPPool.

Detailed Description

Correct nameservers, matching the IP/network, need to be set to do DNS resolution.
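
A sketch of how this could surface in the IPPool spec. The dnsServers field name and placement are assumptions for illustration only; check the API documentation for what is actually supported:

    apiVersion: ipam.metal3.io/v1alpha1
    kind: IPPool
    metadata:
      name: pool1
    spec:
      namePrefix: pool1
      pools:
      - start: 192.168.0.10
        end: 192.168.0.50
        gateway: 192.168.0.1
        prefix: 24
        dnsServers:
        - 192.168.0.2
        - 8.8.8.8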

/kind feature

Bump CAPI to v1.5.0

User Story

As an operator I would like to bump CAPI to v1.5.0.

So providers that depend on ip-address-manager can use the new features of CAPI v1.5.0.

ip-address-manager v1.4.2 and CAPI v1.5.0 will encounter the following error:

(The error is shown in a screenshot attached to the original issue.)

/kind feature

Add unit-cover make target

This issue is reserved for the KubeCon EU 2024 contrib-fest. Please do not work on it before 2024-03-20.

Add a make target for running the unit tests and checking the test coverage. There is a target like this in BMO that you can use for inspiration.
The goal is to be able to run make unit-cover. This should run all unit tests and print test coverage results.
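
Such a target would boil down to something like the following commands, wrapped in the Makefile (paths and flags are a sketch; mirror whatever BMO does for consistency):

    go test ./... -coverprofile=cover.out
    go tool cover -func=cover.out
    go tool cover -html=cover.out -o cover.html   # optional HTML report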

Add total IP count and available IP count in IPPool.Status

User Story

As a developer/user I would like to get the total IP count and available IP count of an IPPool CR by checking its status field.

Detailed Description

[A clear and concise description of what you want to happen.]

Anything else you would like to add:

[Miscellaneous information that will assist in solving the issue.]
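
A sketch of the kind of fields this could add to the IPPool status (the type and field names are illustrative, not an agreed API):

    package v1alpha1

    // Hypothetical additions to IPPoolStatus; field names are illustrative only.
    type IPPoolCapacityStatus struct {
        // TotalIPs is the total number of addresses covered by the pool definition.
        TotalIPs int `json:"totalIPs,omitempty"`
        // AvailableIPs is the number of addresses not yet allocated or pre-allocated.
        AvailableIPs int `json:"availableIPs,omitempty"`
    }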

/kind feature

Add an infoblox backend

User Story

As a user I would like to be able to use Infoblox to manage my IP address allocation so that it integrates with my current solutions.

Detailed Description

Some investigation is needed to figure out if it would be possible to use infoblox as a backend for the IP address generation. If possible, a design proposal should be submitted to propose the changes needed to do that. Then implementation could follow. This is a high level issue to track work in this area.

/kind feature

controller-runtime 0.6.0 conflict with newer capi versions 0.3.7+

What steps did you take and what happened:
When integrating with CAPI 0.3.7+, controller-runtime 0.6.0 causes some conflicts:

$ make manager
go build -o bin/manager .
# sigs.k8s.io/cluster-api/util
../../go/pkg/mod/sigs.k8s.io/[email protected]/util/util.go:422:48: undefined: controllerutil.Object
../../go/pkg/mod/sigs.k8s.io/[email protected]/util/util.go:433:46: undefined: controllerutil.Object
../../go/pkg/mod/sigs.k8s.io/[email protected]/util/util.go:457:47: undefined: controllerutil.Object

Also, client-go is using 1.19-alpha2, and we couldn't upgrade it to 1.19 because that brings in a new version of klogr.

This more or less blocks the integration with the latest CAPI. We ended up downgrading controller-runtime and client-go to match CAPI 0.3.9: https://github.com/spectrocloud/ip-address-manager/pull/1/files

Is there any specific reason to use controller-runtime 0.6.0 instead of controller-runtime 0.5.*? Do you think it makes sense to downgrade?

What did you expect to happen:

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api version:
  • CAPM3 version:
  • IPAM version:
  • Minikube version:
  • environment (metal3-dev-env or other):
  • Kubernetes version: (use kubectl version):

/kind bug

IPPool gets deleted when last IPClaim is deleted

Not sure if that's expected behaviour, but we see that the IPPool gets deleted once all IPClaims are deleted. Before the first IPClaim exists, the IPAM controller seems to wait for that event:

controllers/IPPool/IPPool-controller "msg"="Error fetching cluster. It might not exist yet, Requeuing" "metal3-ippool"={"Namespace":"default","Name":"some-ippool-name"}

Environment:

  • Cluster-api version: 1.2.1
  • IPAM version: 1.1.3
  • Kubernetes version: (use kubectl version): 1.24.4

/kind bug

CAPI v1.5.0-beta.0 has been released and is ready for testing

CAPI v1.5.0-beta.0 has been released and is ready for testing.
Looking forward to your feedback before the CAPI 1.5.0 release on 25th July 2023!

For quick reference:

Following are the planned dates for the upcoming releases:

Release                                          | Expected Date
-------------------------------------------------|------------------------
v1.5.0-beta.x                                    | Tuesday 5th July 2023
release-1.5 branch created (Begin [Code Freeze]) | Tuesday 11th July 2023
v1.5.0-rc.0 released                             | Tuesday 11th July 2023
release-1.5 jobs created                         | Tuesday 11th July 2023
v1.5.0-rc.x released                             | Tuesday 18th July 2023
v1.5.0 released                                  | Tuesday 25th July 2023

Unable to run example from README

What steps did you take and what happened:

Tried to replay the example as given in the "Deployment and examples" section (and following).

Used a fresh k3s 1.25 on a virtual Ubuntu 22.04 as the base.

The git commit was 2d8ba7f from a recent state of branch main.

(I tried tags v1.4.0 and api/v1.4.0 but both broke apart even earlier than described here.)

Issues:

  • In the command

        kubectl scale -n capm3-system deployment.v1.apps/metal3-ipam-controller-manager \

    the deployment's name is given as metal3-ipam-controller-manager, but it must read ipam-controller-manager.

  • The make run step (go run ./main.go) fails with:

    I0517 10:08:42.475992   21439 listener.go:44] "controller-runtime/metrics: Metrics server is starting to listen" addr="localhost:8080"
    I0517 10:08:42.476725   21439 webhook.go:124] "controller-runtime/builder: Registering a mutating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPPool" path="/mutate-ipam-metal3-io-v1alpha1-ippool"
    I0517 10:08:42.476838   21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/mutate-ipam-metal3-io-v1alpha1-ippool"
    I0517 10:08:42.476970   21439 webhook.go:153] "controller-runtime/builder: Registering a validating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPPool" path="/validate-ipam-metal3-io-v1alpha1-ippool"
    I0517 10:08:42.477018   21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/validate-ipam-metal3-io-v1alpha1-ippool"
    I0517 10:08:42.477133   21439 webhook.go:124] "controller-runtime/builder: Registering a mutating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPAddress" path="/mutate-ipam-metal3-io-v1alpha1-ipaddress"
    I0517 10:08:42.477183   21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/mutate-ipam-metal3-io-v1alpha1-ipaddress"
    I0517 10:08:42.477296   21439 webhook.go:153] "controller-runtime/builder: Registering a validating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPAddress" path="/validate-ipam-metal3-io-v1alpha1-ipaddress"
    I0517 10:08:42.477354   21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/validate-ipam-metal3-io-v1alpha1-ipaddress"
    I0517 10:08:42.477467   21439 webhook.go:124] "controller-runtime/builder: Registering a mutating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPClaim" path="/mutate-ipam-metal3-io-v1alpha1-ipclaim"
    I0517 10:08:42.477544   21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/mutate-ipam-metal3-io-v1alpha1-ipclaim"
    I0517 10:08:42.477682   21439 webhook.go:153] "controller-runtime/builder: Registering a validating webhook" GVK="ipam.metal3.io/v1alpha1, Kind=IPClaim" path="/validate-ipam-metal3-io-v1alpha1-ipclaim"
    I0517 10:08:42.477735   21439 server.go:149] "controller-runtime/webhook: Registering webhook" path="/validate-ipam-metal3-io-v1alpha1-ipclaim"
    I0517 10:08:42.477885   21439 main.go:137] "setup: starting manager"
    I0517 10:08:42.477948   21439 server.go:217] "controller-runtime/webhook/webhooks: Starting webhook server"
    I0517 10:08:42.478061   21439 internal.go:369] "Starting server" path="/metrics" kind="metrics" addr="127.0.0.1:8080"
    I0517 10:08:42.478142   21439 internal.go:369] "Starting server" kind="health probe" addr="[::]:9440"
    I0517 10:08:42.478223   21439 internal.go:581] "Stopping and waiting for non leader election runnables"
    I0517 10:08:42.478247   21439 internal.go:585] "Stopping and waiting for leader election runnables"
    I0517 10:08:42.478339   21439 controller.go:186] "Starting EventSource" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool" source="kind source: *v1alpha1.IPPool"
    I0517 10:08:42.478361   21439 controller.go:186] "Starting EventSource" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool" source="kind source: *v1alpha1.IPClaim"
    I0517 10:08:42.478374   21439 controller.go:194] "Starting Controller" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool"
    E0517 10:08:42.479060   21439 source.go:148] "controller-runtime/source: failed to get informer from cache" err="Timeout: failed waiting for *v1alpha1.IPClaim Informer to sync"
    I0517 10:08:42.479140   21439 controller.go:228] "Starting workers" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool" worker count=10
    I0517 10:08:42.479162   21439 controller.go:248] "Shutdown signal received, waiting for all workers to finish" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool"
    E0517 10:08:42.479429   21439 source.go:148] "controller-runtime/source: failed to get informer from cache" err="Timeout: failed waiting for *v1alpha1.IPPool Informer to sync"
    I0517 10:08:42.479483   21439 controller.go:250] "All workers finished" controller="ippool" controllerGroup="ipam.metal3.io" controllerKind="IPPool"
    I0517 10:08:42.479505   21439 internal.go:591] "Stopping and waiting for caches"
    I0517 10:08:42.479728   21439 internal.go:595] "Stopping and waiting for webhooks"
    I0517 10:08:42.479833   21439 internal.go:599] "Wait completed, proceeding to shutdown the manager"
    E0517 10:08:42.479904   21439 main.go:139] "setup: problem running manager" err="open /tmp/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"
    

What did you expect to happen:

The example from the README works, or the README is more explicit about the environment the example is supposed to work within.

Anything else you would like to add:
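One way around the missing webhook certificate when running the manager outside the cluster is to drop a self-signed pair into controller-runtime's default cert directory, which the error message above points at. This is a local workaround only (the in-cluster webhook service will not reach this instance):

    mkdir -p /tmp/k8s-webhook-server/serving-certs
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -subj "/CN=localhost" \
      -keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
      -out /tmp/k8s-webhook-server/serving-certs/tls.crt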

Environment:

  • Cluster-api version:
  • CAPM3 version:
  • IPAM version: git 2d8ba7f
  • Minikube version:
  • environment (metal3-dev-env or other):
  • Kubernetes version: (use kubectl version): v1.25.9+k3s1

/kind bug
