
external-dns-operator's Introduction

ExternalDNS Operator

The ExternalDNS Operator allows you to deploy and manage ExternalDNS, a cluster-internal component which makes Kubernetes resources discoverable through public DNS servers. For more information about the initial motivation, see ExternalDNS Operator OpenShift Enhancement Proposal.

Deploying the ExternalDNS Operator

The following procedure describes how to deploy the ExternalDNS Operator for AWS.

Preparing the environment

Prepare your environment for the installation commands.

  • Select the container runtime you want to build the images with (podman or docker):
    export CONTAINER_ENGINE=podman
  • Set the image name settings:
    export REGISTRY=quay.io
    export REPOSITORY=myuser
    export VERSION=1.0.0
  • Log in to the image registry:
    ${CONTAINER_ENGINE} login ${REGISTRY} -u ${REPOSITORY}
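With these settings in place, the make targets in the following sections derive their image references from them. A quick local sanity check, using the example values from this section:

```shell
# Example settings from the preparation step (substitute your own).
export REGISTRY=quay.io
export REPOSITORY=myuser
export VERSION=1.0.0

# Compose the operator image reference consumed by the make targets below.
export IMG=${REGISTRY}/${REPOSITORY}/external-dns-operator:${VERSION}
echo "${IMG}"   # -> quay.io/myuser/external-dns-operator:1.0.0
```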

Installing the ExternalDNS Operator by building and pushing the Operator image to a registry

  1. Build and push the Operator image to a registry:

    export IMG=${REGISTRY}/${REPOSITORY}/external-dns-operator:${VERSION}
    make image-build image-push
  2. Optional: you may need to link the registry secret to the external-dns-operator service account if the image is not public (Doc link):

    a. Create a secret with authentication details of your image registry:

    oc -n external-dns-operator create secret generic extdns-pull-secret  --type=kubernetes.io/dockercfg  --from-file=.dockercfg=${XDG_RUNTIME_DIR}/containers/auth.json

    b. Link the secret to external-dns-operator service account:

    oc -n external-dns-operator secrets link external-dns-operator extdns-pull-secret --for=pull
  3. Run the following command to deploy the ExternalDNS Operator:

    make deploy
  4. The previous step deploys the validation webhook, which requires TLS authentication for the webhook server. The manifests deployed through the make deploy command do not contain a valid certificate and key, so you must provision them through other tools. You can use the convenience script hack/generate-certs.sh to generate the certificate bundle and patch the validation webhook config.
    Important: Do not use the hack/generate-certs.sh script in a production environment.
    Run the hack/generate-certs.sh script with the following inputs:

    hack/generate-certs.sh --service webhook-service --webhook validating-webhook-configuration \
    --secret webhook-server-cert --namespace external-dns-operator

    Note: you may need to wait for the volume mount in the operator's pod to be retried

  5. Now you can deploy an instance of ExternalDNS:

    • Run the following command to create the credentials secret for AWS:

      kubectl -n external-dns-operator create secret generic aws-access-key \
              --from-literal=aws_access_key_id=${ACCESS_KEY_ID} \
              --from-literal=aws_secret_access_key=${ACCESS_SECRET_KEY}

      Note: See this guide for instructions specific to other providers.

    • Run the following command:

      # for AWS
      kubectl apply -k config/samples/aws

      Note: For other providers, see config/samples/.

Installing the ExternalDNS Operator using a custom index image on OperatorHub

  1. Build and push the operator image to the registry:

    export IMG=${REGISTRY}/${REPOSITORY}/external-dns-operator:${VERSION}
    make image-build image-push
  2. Build and push the bundle image to the registry:

    export BUNDLE_IMG=${REGISTRY}/${REPOSITORY}/external-dns-operator-bundle:${VERSION}
    make bundle-image-build bundle-image-push
  3. Build and push the index image to the registry:

    export INDEX_IMG=${REGISTRY}/${REPOSITORY}/external-dns-operator-bundle-index:${VERSION}
    make index-image-build index-image-push
  4. Optional: you may need to link the registry secret to the pod of external-dns-operator created in the openshift-marketplace namespace if the image is not public (Doc link). If you are using podman, follow these instructions:

    a. Create a secret with authentication details of your image registry:

    oc -n openshift-marketplace create secret generic extdns-olm-secret  --type=kubernetes.io/dockercfg  --from-file=.dockercfg=${XDG_RUNTIME_DIR}/containers/auth.json

    b. Link the secret to default service account:

    oc -n openshift-marketplace secrets link default extdns-olm-secret --for=pull
  5. Create the CatalogSource object:

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: external-dns-operator
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: ${INDEX_IMG}
    EOF
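Note that the heredoc expands ${INDEX_IMG} in your shell before oc receives the manifest. You can render and inspect the result without a cluster; a sketch, assuming the example index image reference built in the earlier steps:

```shell
# Example index image reference (substitute your own).
export INDEX_IMG="quay.io/myuser/external-dns-operator-bundle-index:1.0.0"

# Render the CatalogSource manifest exactly as `oc apply -f -` would see it.
manifest=$(cat <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: external-dns-operator
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: ${INDEX_IMG}
EOF
)
echo "${manifest}"
```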
  6. Create the operator namespace:

    oc create namespace external-dns-operator
    oc label namespace external-dns-operator openshift.io/cluster-monitoring=true
  7. Create the OperatorGroup object to scope the operator to external-dns-operator namespace:

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: external-dns-operator
      namespace: external-dns-operator
    spec:
      targetNamespaces:
      - external-dns-operator
    EOF
  8. Create the Subscription object:

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: external-dns-operator
      namespace: external-dns-operator
    spec:
      channel: stable-v1
      name: external-dns-operator
      source: external-dns-operator
      sourceNamespace: openshift-marketplace
    EOF

    Note: Steps 7 and later can be replaced with the following actions in the web console: navigate to Operators -> OperatorHub, search for the ExternalDNS Operator, and install it in the external-dns-operator namespace.

  9. Now you can deploy an instance of ExternalDNS:

    # for AWS
    oc apply -k config/samples/aws

    Note: For other providers, see config/samples/.

Using custom operand image

  1. Optional: you may need to link the registry secret to the operand's service account if your custom image is not public (Doc link):

    a. Create a secret with authentication details of your image registry:

    oc -n external-dns-operator create secret generic extdns-pull-secret --type=kubernetes.io/dockercfg --from-file=.dockercfg=${XDG_RUNTIME_DIR}/containers/auth.json

    b. Find the service account of your operand:

    oc -n external-dns-operator get sa | grep external-dns

    c. Link the secret to the service account found:

    oc -n external-dns-operator secrets link external-dns-sample-aws extdns-pull-secret --for=pull
  2. Patch the value of the RELATED_IMAGE_EXTERNAL_DNS environment variable with your custom operand image:

    • In the operator's deployment:
    # "external-dns-operator" container has index 0
    # "RELATED_IMAGE_EXTERNAL_DNS" environment variable has index 1
    oc -n external-dns-operator patch deployment external-dns-operator --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/env/1/value", "value":"<CUSTOM_IMAGE_TAG>"}]'
    • Or in the operator's subscription:
    oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{"op": "add", "path": "/spec/config", "value":{"env":[{"name":"RELATED_IMAGE_EXTERNAL_DNS","value":"<CUSTOM_IMAGE_TAG>"}]}}]'
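The subscription patch can be built and inspected locally before applying it; a minimal sketch, assuming a hypothetical quay.io/myuser/external-dns:custom image tag:

```shell
# Hypothetical custom operand image tag (substitute your own).
CUSTOM_IMAGE_TAG="quay.io/myuser/external-dns:custom"

# Build the JSON patch that injects RELATED_IMAGE_EXTERNAL_DNS via the
# subscription's spec.config.env field.
patch='[{"op": "add", "path": "/spec/config", "value": {"env": [{"name": "RELATED_IMAGE_EXTERNAL_DNS", "value": "'"${CUSTOM_IMAGE_TAG}"'"}]}}]'

# Inspect the patch before passing it to `oc patch` with -p.
echo "${patch}"
```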

Running end-to-end tests manually

  1. Deploy the operator as described above

  2. Set the necessary environment variables

    For AWS:

    export KUBECONFIG=/path/to/mycluster/kubeconfig
    export DNS_PROVIDER=AWS
    export AWS_ACCESS_KEY_ID=my-aws-access-key
    export AWS_SECRET_ACCESS_KEY=my-aws-access-secret

    For Infoblox:

    export KUBECONFIG=/path/to/mycluster/kubeconfig
    export DNS_PROVIDER=INFOBLOX
    export INFOBLOX_GRID_HOST=myinfoblox.myorg.com
    export INFOBLOX_WAPI_USERNAME=my-infoblox-username
    export INFOBLOX_WAPI_PASSWORD=my-infoblox-password

    For the other providers, check out the e2e directory.

  3. Run the test suite

    make test-e2e
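The suite fails immediately when a required variable is missing, so it can help to check the environment up front. A sketch for the AWS case, with placeholder values:

```shell
# Placeholder values for illustration; use your real ones.
export KUBECONFIG=/path/to/mycluster/kubeconfig
export DNS_PROVIDER=AWS
export AWS_ACCESS_KEY_ID=my-aws-access-key
export AWS_SECRET_ACCESS_KEY=my-aws-access-secret

# Fail fast if any required variable is empty or unset.
for v in KUBECONFIG DNS_PROVIDER AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  eval "val=\${$v:-}"
  if [ -z "${val}" ]; then
    echo "environment variable ${v} must be set" >&2
    exit 1
  fi
done
echo "environment looks complete"
```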

Proxy support

Configuring proxy support for ExternalDNS Operator

Metrics

The ExternalDNS Operator exposes controller-runtime metrics using the custom resources expected by the Prometheus Operator. A ServiceMonitor object is created in the operator's namespace (external-dns-operator by default); make sure that your instance of the Prometheus Operator is properly configured to discover it.
You can check the .spec.serviceMonitorNamespaceSelector and .spec.serviceMonitorSelector fields of the Prometheus resource and edit the operator's namespace or service monitor accordingly:

kubectl -n monitoring get prometheus k8s --template='{{.spec.serviceMonitorNamespaceSelector}}{{"\n"}}{{.spec.serviceMonitorSelector}}{{"\n"}}'
map[matchLabels:map[openshift.io/cluster-monitoring:true]]
map[]

For OpenShift:

oc -n openshift-monitoring get prometheus k8s --template='{{.spec.serviceMonitorNamespaceSelector}}{{"\n"}}{{.spec.serviceMonitorSelector}}{{"\n"}}'
map[matchLabels:map[openshift.io/cluster-monitoring:true]]
map[]

Status of providers

We define the following stability levels for DNS providers:

  • GA: Integration and smoke tests are run on the real platforms before release. The API is stable, with a guarantee of no breaking changes.
  • TechPreview: Maintainers have no access to resources to execute integration tests on the real platform, and the API may be subject to change.
Provider                  Status
AWS Route53               GA
AWS Route53 on GovCloud   TechPreview
AzureDNS                  GA
GCP Cloud DNS             GA
Infoblox                  GA
BlueCat                   TechPreview

Known limitations

Length of the domain name

The ExternalDNS Operator uses the TXT registry, which implies the new format and a prefix for the TXT records. This reduces the maximum length of the domain name for the TXT records.
Because every DNS record is accompanied by a TXT record (there cannot be a DNS record without a corresponding TXT record), the DNS record's domain name is subject to the same limit:

DNS record: <domain-name-from-source>
TXT record: external-dns-<record-type>-<domain-name-from-source>

Be aware that instead of the standard maximum label length of 63 characters, the domain names generated from the ExternalDNS sources have the following limits:

  • for the CNAME record type:
    • 44 characters
    • 42 characters for wildcard records on AzureDNS (OCPBUGS-819)
  • for the A record type:
    • 48 characters
    • 46 characters for wildcard records on AzureDNS (OCPBUGS-819)
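These limits follow from the 63-character DNS label maximum minus the length of the TXT registry prefix; a quick check in shell:

```shell
# TXT registry prefixes added by ExternalDNS (new format).
cname_prefix="external-dns-cname-"
a_prefix="external-dns-a-"

# Remaining budget for the source-generated label, given the 63-character limit.
echo $((63 - ${#cname_prefix}))   # -> 44 characters for CNAME records
echo $((63 - ${#a_prefix}))       # -> 48 characters for A records
```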

Example

An ExternalDNS CR that uses a Service of type ExternalName (to force the CNAME record type) as the source for the DNS records:

$ oc get externaldns aws -o yaml
apiVersion: externaldns.olm.openshift.io/v1beta1
kind: ExternalDNS
metadata:
  name: aws
spec:
  provider:
    type: AWS
  zones:
  - Z06988883Q0H0RL6UMXXX
  source:
    type: Service
    service:
      serviceType:
      - ExternalName
    fqdnTemplate:
    - "{{.Name}}.test.example.io"

A service with the following name poses no problem because the name is 44 characters long:

$ oc get svc
NAME                                                          TYPE           CLUSTER-IP     EXTERNAL-IP              PORT(S)             AGE
hello-openshift-aaaaaaaaaa-bbbbbbbbbb-cccccc                  ExternalName   <none>         somednsname.example.io   8080/TCP,8888/TCP

$ aws route53 list-resource-record-sets --hosted-zone-id=Z06988883Q0H0RL6UMXXX
RESOURCERECORDSETS	external-dns-cname-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-cccccc.test.example.io.	300	TXT
RESOURCERECORDS	"heritage=external-dns,external-dns/owner=external-dns-aws,external-dns/resource=service/test-long-dnsname/hello-openshift-aaaaaaaaaa-bbbbbbbbbb-cccccc"
RESOURCERECORDSETS	external-dns-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-cccccc.test.example.io.	300	TXT
RESOURCERECORDS	"heritage=external-dns,external-dns/owner=external-dns-aws,external-dns/resource=service/test-long-dnsname/hello-openshift-aaaaaaaaaa-bbbbbbbbbb-cccccc"
RESOURCERECORDSETS	hello-openshift-aaaaaaaaaa-bbbbbbbbbb-cccccc.test.example.io.	300	CNAME
RESOURCERECORDS	somednsname.example.io

A service with a longer name results in no changes on the DNS provider and errors similar to the following in the ExternalDNS instance:

$ oc -n external-dns-operator logs external-dns-aws-7ddbd9c7f8-2jqjh
...
time="2022-09-02T08:53:57Z" level=info msg="Desired change: CREATE external-dns-cname-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io TXT [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]"
time="2022-09-02T08:53:57Z" level=info msg="Desired change: CREATE external-dns-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io TXT [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]"
time="2022-09-02T08:53:57Z" level=info msg="Desired change: CREATE hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io A [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]"
time="2022-09-02T08:53:57Z" level=error msg="Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]"
time="2022-09-02T08:53:57Z" level=error msg="InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\n\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6"
...
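The failure above can be predicted by checking the label lengths up front; a sketch using the example names from this section:

```shell
# 44 characters: fits, since 19 ("external-dns-cname-") + 44 = 63.
ok="hello-openshift-aaaaaaaaaa-bbbbbbbbbb-cccccc"

# 45 characters: the prefixed TXT label becomes 64 characters and is rejected.
bad="hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc"

prefix="external-dns-cname-"
echo "${#ok} -> $(( ${#prefix} + ${#ok} ))"    # 44 -> 63 (within the limit)
echo "${#bad} -> $(( ${#prefix} + ${#bad} ))"  # 45 -> 64 (DomainLabelTooLong)
```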

external-dns-operator's People

Contributors

alebedev87, arjunrn, connect2naga, dbaker-rh, dgoodwin, dhritishikhar, elbehery, gcs278, josefkarasek, lmzuccarelli, miciah, miheer, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, rfredette, sairameshv, sebwoj, sgreene570, sherine-k


external-dns-operator's Issues

Override defaultTXTRecordPrefix in ExternalDNS

Hi,
is it possible to override defaultTXTRecordPrefix in the yaml file?

My current yaml is looking this way:

apiVersion: externaldns.olm.openshift.io/v1alpha1
kind: ExternalDNS
metadata:
  name: my-internal-domain
  namespace: external-dns
spec:
  provider:
    type: AWS
    aws:
      credentials:
        name: aws-access-key
  source:
    openshiftRouteOptions:
      routerName: default
    type: OpenShiftRoute
  zones:
    - MY_ZONE_ID

After having a quick look into the sourcecode, I've found

defaultTXTRecordPrefix   = "external-dns-"

but no possibility to provide this value via the ExternalDNSProvider struct.

Any ideas?

Thx.

Support Nodename for ExternalDNSServiceSourceOptions

We need to add the machine names to a corporate DNS, so it would be very helpful to be able to use the node status for external DNS, or some labels or annotations.

apiVersion: v1
kind: Node
metadata:
  annotations:
    k8s.ovn.org/host-addresses: '["10.x.x.x"]'
    # other annotations
  labels:
    kubernetes.io/hostname: mycluster-p8zqf-worker-chpgv
    # other labels
  name: mycluster-p8zqf-worker-chpgv
.......
status:
  addresses:
  - address: 10.x.x.x
    type: ExternalIP
  - address: 10.x.x.x
    type: InternalIP
  - address: mycluster-p8zqf-worker-chpgv
    type: Hostname
.......

Maybe this is already possible with hostnameAnnotation HostnameAnnotationPolicy, but it looks to me like this annotation is only possible for the supported serviceTypes:

// +kubebuilder:validation:Enum=OpenShiftRoute;Service;CRD

Infoblox Custom View Support

I am looking into using the Infoblox provider type with ExternalDNS. However, when exploring the ExternalDNS object in this operator, .spec.provider.infoblox doesn't allow you to specify the Infoblox view to use. I know that in lots of environments, the default view within Infoblox is not the target. The CLI of external-dns shows more options than the Operator exposes right now. It would be great if these options were fleshed out more in the next release.

$ docker run -it --entrypoint="external-dns" registry.k8s.io/external-dns/external-dns:v0.13.5 --help | grep infoblox
  --provider=provider            The DNS provider where the DNS records will be created (required, options: akamai, alibabacloud, aws, aws-sd, azure, azure-dns, azure-private-dns, bluecat, civo, cloudflare, coredns, designate, digitalocean, dnsimple, dyn, exoscale, gandi, godaddy, google, ibmcloud, infoblox, inmemory, linode, ns1, oci, ovh, pdns, pihole, plural, rcodezero, rdns, rfc2136, safedns, scaleway, skydns, tencentcloud, transip, ultradns, vinyldns, vultr)
  --infoblox-grid-host=""        When using the Infoblox provider, specify the Grid Manager host (required when --provider=infoblox)
  --infoblox-wapi-port=443       When using the Infoblox provider, specify the WAPI port (default: 443)
  --infoblox-wapi-username="admin"  
  --infoblox-wapi-password=""    When using the Infoblox provider, specify the WAPI password (required when --provider=infoblox)
  --infoblox-wapi-version="2.3.1"  
  --infoblox-ssl-verify          When using the Infoblox provider, specify whether to verify the SSL certificate (default: true, disable with --no-infoblox-ssl-verify)
  --infoblox-view=""             DNS view (default: "")
  --infoblox-max-results=0       Add _max_results as query parameter to the URL on all API requests. The default is 0 which means _max_results is not set and the default of the server is used.
  --infoblox-fqdn-regex=""       Apply this regular expression as a filter for obtaining zone_auth objects. This is disabled by default.
  --infoblox-name-regex=""       Apply this regular expression as a filter on the name field for obtaining infoblox records. This is disabled by default.
  --infoblox-create-ptr          When using the Infoblox provider, create a ptr entry in addition to an entry
  --infoblox-cache-duration=0    When using the Infoblox provider, set the record TTL (0s to disable).

`make image-build` fails with permission denied issue

➜ external-dns-operator git:(CFE-368-2) ✗ make image-build

go run sigs.k8s.io/controller-tools/cmd/controller-gen "crd:trivialVersions=true,preserveUnknownFields=false" rbac:roleName=external-dns-operator webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go run sigs.k8s.io/controller-tools/cmd/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
mkdir -p "/home/dhritishikhar/go/src/github.com/openshift/external-dns-operator/testbin"
KUBEBUILDER_ASSETS="/home/dhritishikhar/go/src/github.com/openshift/external-dns-operator/testbin/k8s/1.21.4-linux-amd64" go test ./... -race -covermode=atomic -coverprofile coverage.out
ok github.com/openshift/external-dns-operator/api/v1alpha1 9.857s coverage: 28.9% of statements
? github.com/openshift/external-dns-operator/cmd/external-dns-operator [no test files]
? github.com/openshift/external-dns-operator/pkg/operator [no test files]
? github.com/openshift/external-dns-operator/pkg/operator/config [no test files]
ok github.com/openshift/external-dns-operator/pkg/operator/controller 0.084s coverage: 21.7% of statements
ok github.com/openshift/external-dns-operator/pkg/operator/controller/ca-configmap 2.112s coverage: 65.6% of statements
ok github.com/openshift/external-dns-operator/pkg/operator/controller/credentials-secret 2.102s coverage: 47.6% of statements
ok github.com/openshift/external-dns-operator/pkg/operator/controller/externaldns 1.324s coverage: 84.8% of statements
ok github.com/openshift/external-dns-operator/pkg/operator/controller/utils 0.075s coverage: 100.0% of statements
? github.com/openshift/external-dns-operator/pkg/operator/controller/utils/test [no test files]
? github.com/openshift/external-dns-operator/pkg/utils [no test files]
? github.com/openshift/external-dns-operator/pkg/version [no test files]
podman build -t quay.io/dhritishikhar/external-dns-operator:1.0.5 .
[1/2] STEP 1/4: FROM registry.access.redhat.com/ubi8/go-toolset:latest AS builder
[1/2] STEP 2/4: WORKDIR /opt/app-root/src
--> Using cache 8331e7ddef3c0437ecd68924827f4ff3d9123de1dbd9649ed21abe8d1118a670
--> 8331e7ddef3
[1/2] STEP 3/4: COPY . .
--> e0e9565ac89
[1/2] STEP 4/4: RUN make build-operator
GO111MODULE=on GOFLAGS=-mod=vendor CGO_ENABLED=0 go build -ldflags "-X github.com/openshift/external-dns-operator/pkg/version.SHORTCOMMIT=1bd6091 -X github.com/openshift/external-dns-operator/pkg/version.COMMIT=1bd6091d986fff306f6979c9c9fc81646b3f1a0e" -o bin/external-dns-operator github.com/openshift/external-dns-operator/cmd/external-dns-operator
go build github.com/openshift/external-dns-operator/cmd/external-dns-operator: copying /tmp/go-build1488934978/b001/exe/a.out: open bin/external-dns-operator: permission denied
make: *** [Makefile:134: build-operator] Error 1
Error: error building at STEP "RUN make build-operator": error while running runtime: exit status 2
make: *** [Makefile:142: image-build] Error 2

Missing profile in AWS credentials file in pod

Problem

Right now, it is possible to inject AWS credentials without any profile.

Expectation

Add a [default] profile when the profile is explicitly missing in the source secret.

Steps to reproduce

  1. Create a secret with AWS credentials, without a profile:
apiVersion: v1
stringData:
  credentials: |-
    aws_access_key_id = "lbnNoaWZ0Lm9yZwo="
    aws_secret_access_key = "PngjH/0zSTEm7n"
kind: Secret
metadata:
  name: credentials-demo
  namespace: external-dns-operator
type: Opaque
  2. Create an ExternalDNS CR
apiVersion: externaldns.olm.openshift.io/v1alpha1
kind: ExternalDNS
metadata:
  name: external-demo-7
  namespace: external-dns-operator
spec:
  provider:
    type: AWS
    aws:  
      credentials:
        name: credentials-demo
  zones:
    - "Z3URY6TWQ91KXX"
  source:
    type: Service
    fqdnTemplate:
    - '{{.Name}}.mydomain.net'
  3. Get the pod
➜  external-dns-operator git:(main) ✗ k get pods -n external-dns-operator
NAME                                            READY   STATUS             RESTARTS      AGE
external-dns-external-demo-7-5b84d4bbd5-lp5b7   1/1     Running            0             5s
external-dns-operator-696b9bf7b9-9dwt7          2/2     Running            0             55m

➜  external-dns-operator git:(main) ✗ k exec -it external-dns-external-demo-7-5b84d4bbd5-lp5b7 -n external-dns-operator -- sh
~ $ cat /etc/kubernetes/aws-credentials 
aws_access_key_id = "lbnNoaWZ0Lm9yZwo="
aws_secret_access_key = "PngjH/0zSTEm7n"~ $

Notice the profile is missing in the file /etc/kubernetes/aws-credentials

Expected format:

~ $ cat /etc/kubernetes/aws-credentials 
[default]
aws_access_key_id = "lbnNoaWZ0Lm9yZwo="
aws_secret_access_key = "PngjH/0zSTEm7n"~ $
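Until the operator adds the header itself, one possible workaround is to prepend the profile when it is missing. A hypothetical sketch, using the sample credentials from this issue:

```shell
# Write a sample credentials file without a profile header (values are the
# dummy ones from this issue).
creds_file=$(mktemp)
cat > "${creds_file}" <<'EOF'
aws_access_key_id = "lbnNoaWZ0Lm9yZwo="
aws_secret_access_key = "PngjH/0zSTEm7n"
EOF

# Prepend [default] only when no profile header ("[...]") is present.
if ! grep -q '^\[' "${creds_file}"; then
  content=$(cat "${creds_file}")
  printf '[default]\n%s\n' "${content}" > "${creds_file}"
fi
head -n 1 "${creds_file}"   # -> [default]
```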

Expose pod nodeSelector in the CRD

It would be nice if the field spec.template.spec.nodeSelector in the external-dns Deployment could be configurable by exposing this configuration via the ExternalDNS custom resource.
I am using the external-dns-operator in a cluster provisioned via hypershift. In this scenario the scheduler is unable to run external-dns pods because there are no master nodes available, so the pods stay in pending forever. The node selector is hardcoded here.
The workaround I'm applying is to remove the node-role.kubernetes.io/master: '' label from the nodeSelector in the Deployment object, as the operator seems not to reconcile this.

Add a YAML lint make target

In addition to go lint, a YAML lint could also be added for all the YAML files:

YAML_FILES := $(shell find . -path ./vendor -prune -o -path ./config -prune -o -type f -regex ".*\.y[a]ml" -print)
.PHONY: lint-yaml
## Runs yamllint on all yaml files
lint-yaml: ${YAML_FILES}
	$(Q)$(PYTHON_VENV_DIR)/bin/pip install yamllint==1.23.0
	$(Q)$(PYTHON_VENV_DIR)/bin/yamllint -c .yamllint $(YAML_FILES)

Azure Private DNS handling

This PR suggests that it's possible to also manage Azure private DNS zones with external-dns-operator (edo):
https://github.com/openshift/external-dns-operator/pull/89/files

When I run this command with the latest edo, I can also see some private DNS output.

F:\openshift>oc run ext-dns --image=quay.io/external-dns-operator/external-dns:latest -it --rm --command  -- bash

[root@ext-dns /]# external-dns --help 2>&1|egrep -i azure
                                 godaddy, google, azure, azure-dns,
                                 azure-private-dns, bluecat, cloudflare,
                                 only AzureDNS provider is using this flag);
  --azure-config-file="/etc/kubernetes/azure.json"
                                 When using the Azure provider, specify the
                                 Azure configuration file (required when
                                 --provider=azure
  --azure-resource-group=""      When using the Azure provider, override the
                                 Azure resource group to use (required when
                                 --provider=azure-private-dns)
  --azure-subscription-id=""     When using the Azure provider, specify the
                                 Azure configuration file (required when
                                 --provider=azure-private-dns)
  --azure-user-assigned-identity-client-id=""
                                 When using the Azure provider, override the

This is not documented in the official doc. Is Azure private DNS "just" undocumented but possible, or not available in the operator?

https://docs.openshift.com/container-platform/4.13/networking/external_dns_operator/nw-creating-dns-records-on-azure.html

From what I understand, Azure private DNS is not listed in the CRD, right?
https://github.com/openshift/external-dns-operator/blob/release-4.13/bundle/manifests/externaldns.olm.openshift.io_externaldnses.yaml#L228

Can't scrape external-dns instance metrics

Instance metrics in the operator managed external-dns pods are bound to 127.0.0.1:7979 and this doesn't seem to be configurable. Also, external-dns instance pods don't have the rbac proxy sidecar (which the operator pod has), so it's not possible to scrape the metrics from Prometheus through the sidecar either. I'm using version v1.2.0.
Being able to scrape the metrics would be great :).

`make test-e2e` fails with error

➜  external-dns-operator git:(main) make test-e2e
go test \
-ldflags "-X github.com/openshift/external-dns-operator/pkg/version.SHORTCOMMIT=ea98598 -X github.com/openshift/external-dns-operator/pkg/version.COMMIT=ea9859875676474644424ecf3ec7c0ce52836577" \
-timeout 1h \
-count 1 \
-v \
-tags e2e \
-run "" \
./test/e2e
I0421 10:46:33.230220  653910 request.go:665] Waited for 1.049125091s due to client-side throttling, not priority and fairness, request: GET:https://api.ci-ln-gis99cb-76ef8.origin-ci-int-aws.dev.rhcloud.com:6443/apis/batch/v1?timeout=32s
panic: environment variable DNS_PROVIDER must be set

goroutine 1 [running]:
github.com/openshift/external-dns-operator/test/e2e.mustGetEnv({0x196e768, 0xc})
	/home/dhritishikhar/go/src/github.com/openshift/external-dns-operator/test/e2e/util.go:120 +0x8e
github.com/openshift/external-dns-operator/test/e2e.TestMain(0x442691?)
	/home/dhritishikhar/go/src/github.com/openshift/external-dns-operator/test/e2e/operator_test.go:133 +0x174
main.main()
	_testmain.go:55 +0x1d3
FAIL	github.com/openshift/external-dns-operator/test/e2e	3.859s
FAIL
make: *** [Makefile:115: test-e2e] Error 1
➜  external-dns-operator git:(main) 

Feature request: Support Cloudflare

I would love to see Cloudflare supported as a DNS provider, to manage existing Cloudflare zones via OpenShift. Are there any plans to support CF in the future?

Regards
Alex
