stakater / reloader

A Kubernetes controller to watch changes in ConfigMaps and Secrets and do rolling upgrades on Pods with their associated Deployments, StatefulSets, DaemonSets and DeploymentConfigs – [✩Star] if you're using it!

Home Page: https://docs.stakater.com/reloader/

License: Apache License 2.0


reloader's Introduction

Reloader

Problem

We would like to watch for changes in ConfigMaps and/or Secrets and then perform a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet or Rollout.

Solution

Reloader can watch changes in ConfigMaps and Secrets and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, DaemonSets, StatefulSets and Rollouts.

Enterprise Version

Reloader is available in two different versions:

  1. Open Source Version
  2. Enterprise Version, which includes:
    • SLA (Service Level Agreement) for support and unique requests
    • Slack support
    • Certified images

Contact [email protected] for info about Reloader Enterprise.

Compatibility

Reloader is compatible with Kubernetes >= 1.19

How to use Reloader

For a Deployment called foo that has a ConfigMap called foo-configmap, a Secret called foo-secret, or both, add the annotation (by default reloader.stakater.com/auto) to the main metadata of your Deployment:

kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    metadata:

This will automatically discover deploymentconfigs/deployments/daemonsets/statefulsets/rollouts where foo-configmap or foo-secret is used, either via an environment variable or a volume mount, and it will perform a rolling upgrade on the related pods when foo-configmap or foo-secret is updated.
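For concreteness, here is a minimal sketch of a Deployment named foo that consumes foo-configmap via a volume mount; the image and label names are illustrative only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: foo:latest          # illustrative image
          volumeMounts:
            - name: config
              mountPath: /etc/foo
      volumes:
        - name: config
          configMap:
            name: foo-configmap      # updating this ConfigMap triggers a rolling upgrade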

You can also restrict reloading by the type of monitored resource, using typed versions of the auto annotation. If you want to react only to changes in mounted Secrets and ignore changes in ConfigMaps, add the secret.reloader.stakater.com/auto annotation instead. Analogously, you can use the configmap.reloader.stakater.com/auto annotation to look only for changes in mounted ConfigMaps; changes in any of the mounted Secrets will then not trigger a rolling upgrade on the related pods.
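For example, a Deployment that should reload only on Secret changes would carry the typed annotation (a sketch in the same style as the example above):

kind: Deployment
metadata:
  annotations:
    secret.reloader.stakater.com/auto: "true"
spec:
  template:
    metadata: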

You can also restrict this discovery to only ConfigMap or Secret objects that are tagged with a special annotation. To take advantage of that, annotate your deploymentconfigs/deployments/daemonsets/statefulsets/rollouts like this:

kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/search: "true"
spec:
  template:

and Reloader will trigger the rolling upgrade upon modification of any ConfigMap or Secret annotated like this:

kind: ConfigMap
metadata:
  annotations:
    reloader.stakater.com/match: "true"
data:
  key: value

provided the Secret or ConfigMap is used in an environment variable or a volume mount.

Please note that reloader.stakater.com/search and reloader.stakater.com/auto do not work together. If you have the reloader.stakater.com/auto: "true" annotation on your deployment, then it will always restart upon a change in configmaps or secrets it uses, regardless of whether they have the reloader.stakater.com/match: "true" annotation or not.

Similarly, reloader.stakater.com/auto and its typed version (secret.reloader.stakater.com/auto or configmap.reloader.stakater.com/auto) do not work together. If you have both annotations in your deployment, then only one of them needs to be true to trigger the restart. For example, having both reloader.stakater.com/auto: "true" and secret.reloader.stakater.com/auto: "false" or both reloader.stakater.com/auto: "false" and secret.reloader.stakater.com/auto: "true" will restart upon a change in a secret it uses.

You can also specify a particular ConfigMap or Secret, so that a rolling upgrade is triggered only when that specific resource changes rather than on changes to every ConfigMap or Secret used in a deploymentconfig, deployment, daemonset, statefulset or rollout. To do this, either set the auto annotation to "false" (reloader.stakater.com/auto: "false") or remove it altogether, and use the ConfigMap- or Secret-specific annotations described below.

It's also possible to enable auto reloading for all resources by setting the --auto-reload-all flag. In this case, all resources that do not have the auto annotation (or its typed version) set to "false" will be reloaded automatically when their ConfigMaps or Secrets are updated. Note that setting the auto annotation to an undefined value counts as false as well.
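If you install via Helm, the same behaviour can be enabled through the chart parameter listed in the tables further below (a sketch; the release name reloader is arbitrary):

helm install reloader stakater/reloader --set reloader.autoReloadAll=true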

Configmap

To perform a rolling upgrade only when specific configmaps change, use the annotation below.

For a Deployment called foo that has a ConfigMap called foo-configmap, add this annotation to the main metadata of your Deployment:

kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
spec:
  template:
    metadata:

Use a comma-separated list to define multiple configmaps.

kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap,bar-configmap,baz-configmap"
spec:
  template: 
    metadata:

Secret

To perform a rolling upgrade only when specific secrets change, use the annotation below.

For a Deployment called foo that has a Secret called foo-secret, add this annotation to the main metadata of your Deployment:

kind: Deployment
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "foo-secret"
spec:
  template: 
    metadata:

Use a comma-separated list to define multiple secrets.

kind: Deployment
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "foo-secret,bar-secret,baz-secret"
spec:
  template: 
    metadata:

NOTES

  • Reloader also supports sealed-secrets. Here are the steps to use sealed-secrets with Reloader.
  • For rollouts, Reloader simply triggers a change; it is up to you how you configure the rollout strategy.
  • reloader.stakater.com/auto: "true" will only reload the pod if the configmap or secret is used (as a volume mount or as an env var) in DeploymentConfigs/Deployments/DaemonSets/StatefulSets
  • the secret.reloader.stakater.com/reload or configmap.reloader.stakater.com/reload annotation will reload the pod upon changes in the specified configmap or secret, irrespective of whether the configmap or secret is actually used by the pod.
  • you may override the auto annotation with the --auto-annotation flag
  • you may override the secret typed auto annotation with the --secret-auto-annotation flag
  • you may override the configmap typed auto annotation with the --configmap-auto-annotation flag
  • you may override the search annotation with the --auto-search-annotation flag and the match annotation with the --search-match-annotation flag
  • you may override the configmap annotation with the --configmap-annotation flag
  • you may override the secret annotation with the --secret-annotation flag
  • you may want to prevent watching certain namespaces with the --namespaces-to-ignore flag
  • you may want to watch only a set of namespaces with certain labels by using the --namespace-selector flag
  • you may want to watch only a set of secrets/configmaps with certain labels by using the --resource-label-selector flag
  • you may want to prevent watching certain resources with the --resources-to-ignore flag
  • you can configure logging in JSON format with the --log-format=json option
  • you can configure the "reload strategy" with the --reload-strategy=<strategy-name> option (details below)

Reload Strategies

Reloader supports multiple "reload" strategies for performing rolling upgrades to resources. The following list describes them:

  • env-vars: When a tracked configMap/secret is updated, this strategy attaches a Reloader-specific environment variable to any containers referencing the changed configMap or secret on the owning resource (e.g., Deployment, StatefulSet, etc.); an illustration follows this list. This strategy can be specified with the --reload-strategy=env-vars argument. Note: This is the default reload strategy.
  • annotations: When a tracked configMap/secret is updated, this strategy attaches a reloader.stakater.com/last-reloaded-from pod template annotation on the owning resource (e.g., Deployment, StatefulSet, etc.). This strategy is useful when using resource syncing tools like ArgoCD, since it will not cause these tools to detect configuration drift after a resource is reloaded. Note: Since the attached pod template annotation only tracks the last reload source, this strategy will reload any tracked resource should its configMap or secret be deleted and recreated. This strategy can be specified with the --reload-strategy=annotations argument.
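For illustration, with the default env-vars strategy the pod template ends up carrying an entry like the following sketch; the variable name shown here is hypothetical (it is derived from the resource name), and the value is a hash of the resource contents, as in the STAKATER_ENV_STAGING_CONFIGMAP entry quoted in an issue further down this page:

env:
  - name: STAKATER_FOO_CONFIGMAP                      # hypothetical name derived from foo-configmap
    value: 45a547270324d5feded44fe0938dfbdfbbf6250e   # hash of the ConfigMap contents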

Deploying to Kubernetes

You can deploy Reloader by any of the following methods:

Vanilla Manifests

You can apply the vanilla manifests by replacing the RELEASE-NAME placeholder in the manifest with a proper value, then applying it with the command given below:

kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
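For example, the placeholder substitution and the apply can be done in one pipeline (a sketch; the release name reloader is arbitrary):

curl -s https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml | sed "s/RELEASE-NAME/reloader/g" | kubectl apply -f -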

By default, Reloader gets deployed in the default namespace and watches for changes to Secrets and ConfigMaps in all namespaces.

Reloader can be configured to ignore Secrets or ConfigMaps by passing the following arguments (spec.template.spec.containers[].args) to its container:

Argument Description
--resources-to-ignore=configMaps To ignore configMaps
--resources-to-ignore=secrets To ignore secrets

Note: Only one of these resources can be ignored at a time; trying to ignore both will cause an error in Reloader. The workaround for ignoring both resources is to scale the Reloader pods down to 0.
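A sketch of how one of these arguments sits in the Reloader Deployment manifest (surrounding fields elided):

spec:
  template:
    spec:
      containers:
        - name: reloader
          image: stakater/reloader
          args:
            - "--resources-to-ignore=configMaps"   # or --resources-to-ignore=secrets, but not both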

Reloader can be configured to only watch secrets/configmaps with one or more labels using the --resource-label-selector parameter. Supported operators are !, in, notin, ==, =, and !=; if no operator is found, the 'exists' operator is inferred (i.e. key only). Additional examples of these selectors can be found in the Kubernetes Docs.

Note: The old :-delimited key/value mappings are deprecated; if provided, they will be translated to key=value. Likewise, if a wildcard value is provided (e.g. key:*), it will be translated to the standalone key, which checks for key existence.

These selectors can be combined, for example with:

--resource-label-selector=reloader=enabled,key-exists,another-label in (value1,value2,value3)

Only configmaps or secrets labeled like the following will be watched:

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    reloader: enabled
    key-exists: "yes"
    another-label: value1

Reloader can be configured to only watch namespaces labeled with one or more labels using the --namespace-selector parameter. Supported operators are !, in, notin, ==, =, and !=; if no operator is found, the 'exists' operator is inferred (i.e. key only). Additional examples of these selectors can be found in the Kubernetes Docs.

Note: The old :-delimited key/value mappings are deprecated; if provided, they will be translated to key=value. Likewise, if a wildcard value is provided (e.g. key:*), it will be translated to the standalone key, which checks for key existence.

These selectors can be combined, for example with:

--namespace-selector=reloader=enabled,test=true

Only namespaces labeled as below would be watched and eligible for reloads:

kind: Namespace
apiVersion: v1
metadata:
  labels:
    reloader: enabled
    test: "true"

Vanilla Kustomize

You can also apply the vanilla manifests by running the following command

kubectl apply -k https://github.com/stakater/Reloader/deployments/kubernetes

As with the vanilla manifests, Reloader gets deployed in the default namespace and watches for changes to Secrets and ConfigMaps in all namespaces.

Kustomize

You can write your own kustomization.yaml using ours as a 'base' and write patches to tweak the configuration, as sketched below.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://github.com/stakater/Reloader/deployments/kubernetes

namespace: reloader
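For instance, a patch adding container arguments could be appended to the kustomization above (a sketch; the Deployment name reloader-reloader is an assumption based on the default chart naming seen elsewhere on this page):

patches:
  - target:
      kind: Deployment
      name: reloader-reloader          # assumed name of the Deployment in the base
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/args
        value: ["--namespaces-to-ignore=kube-system"]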

Helm Charts

Alternatively, if you have Helm configured on your cluster, you can add Reloader from our public chart repository and deploy it via Helm using the commands below. Follow this guide in case you have trouble migrating Reloader from Helm 2 to Helm 3.

Installation

helm repo add stakater https://stakater.github.io/stakater-charts

helm repo update

helm install stakater/reloader # For helm3 add --generate-name flag or set the release name

helm install {{RELEASE_NAME}} stakater/reloader -n {{NAMESPACE}} --set reloader.watchGlobally=false # By default, Reloader watches in all namespaces. To watch in single namespace, set watchGlobally=false

helm install stakater/reloader --set reloader.watchGlobally=false --namespace test --generate-name # Install Reloader in `test` namespace which will only watch `Deployments`, `Daemonsets`, `Statefulsets` and `Rollouts` in the `test` namespace.
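The same settings can also be kept in a values file instead of repeated --set flags (a sketch; the parameter names come from the tables below):

# values.yaml
reloader:
  watchGlobally: false
  reloadOnCreate: true
  logFormat: json

helm install reloader stakater/reloader -n test -f values.yaml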

Uninstalling

helm uninstall {{RELEASE_NAME}} -n {{NAMESPACE}}

Parameters

Global Parameters

Parameter Description Type Default
global.imagePullSecrets Reference to one or more secrets to be used when pulling images array []

Common Parameters

Parameter Description Type Default
nameOverride replace the name of the chart string ""
fullnameOverride replace the generated name string ""

Core Reloader Parameters

Parameter Description Type Default
reloader.autoReloadAll Enable auto reload for all resources, equivalent to the --auto-reload-all flag boolean false
reloader.isArgoRollouts Enable Argo Rollouts. Valid values are either true or false boolean false
reloader.isOpenshift Enable OpenShift DeploymentConfigs. Valid values are either true or false boolean false
reloader.ignoreSecrets To ignore secrets. Valid values are either true or false. Only one of ignoreSecrets and ignoreConfigMaps can be set at a time boolean false
reloader.ignoreConfigMaps To ignore configMaps. Valid values are either true or false boolean false
reloader.reloadOnCreate Enable reload on create events. Valid values are either true or false boolean false
reloader.syncAfterRestart Enable sync after Reloader restarts for Add events, works only when reloadOnCreate is true. Valid values are either true or false boolean false
reloader.reloadStrategy Strategy to trigger resource restart, set to either default, env-vars or annotations enumeration default
reloader.ignoreNamespaces List of comma separated namespaces to ignore, if multiple are provided, they are combined with the AND operator string ""
reloader.namespaceSelector List of comma separated namespaces to select, if multiple are provided, they are combined with the AND operator string ""
reloader.resourceLabelSelector List of comma separated label selectors, if multiple are provided they are combined with the AND operator string ""
reloader.logFormat Set type of log format. Value could be either json or "" string ""
reloader.watchGlobally Allow Reloader to watch in all namespaces (true) or just in a single namespace (false) boolean true
reloader.enableHA Enable leadership election allowing you to run multiple replicas boolean false
reloader.readOnlyRootFileSystem Enforce readOnlyRootFilesystem boolean false
reloader.legacy.rbac boolean false
reloader.matchLabels Pod labels to match map {}

Deployment Reloader Parameters

Parameter Description Type Default
reloader.deployment.replicas Number of replicas, if you wish to run multiple replicas set reloader.enableHA = true int 1
reloader.deployment.revisionHistoryLimit Limit the number of revisions retained in the revision history int 2
reloader.deployment.nodeSelector Scheduling pod to a specific node based on set labels map {}
reloader.deployment.affinity Set affinity rules on pod map {}
reloader.deployment.securityContext Set pod security context map {}
reloader.deployment.containerSecurityContext Set container security context map {}
reloader.deployment.tolerations A list of tolerations to be applied to the deployment array []
reloader.deployment.topologySpreadConstraints Topology spread constraints for pod assignment array []
reloader.deployment.annotations Set deployment annotations map {}
reloader.deployment.labels Set deployment labels, default to stakater settings array see values.yaml
reloader.deployment.image Set container image name, tag and policy array see values.yaml
reloader.deployment.env Support for extra environment variables array []
reloader.deployment.livenessProbe Set liveness probe timeout values map {}
reloader.deployment.readinessProbe Set readiness probe timeout values map {}
reloader.deployment.resources Set container requests and limits (e.g. CPU or memory) map {}
reloader.deployment.pod.annotations Set annotations for pod map {}
reloader.deployment.priorityClassName Set priority class for pod in cluster string ""

Other Reloader Parameters

Parameter Description Type Default
reloader.service map {}
reloader.rbac.enabled Specifies whether a role based access control should be created boolean true
reloader.serviceAccount.create Specifies whether a ServiceAccount should be created boolean true
reloader.custom_annotations Add custom annotations map {}
reloader.serviceMonitor.enabled Enable to scrape Reloader's Prometheus metrics (legacy) boolean false
reloader.podMonitor.enabled Enable to scrape Reloader's Prometheus metrics boolean false
reloader.podDisruptionBudget.enabled Limit the number of pods of a replicated application boolean false
reloader.netpol.enabled boolean false
reloader.volumeMounts Mount volume array []
reloader.volumes Add volume to a pod array []
reloader.webhookUrl Add webhook to Reloader string ""

Additional Remarks

  • Both namespaceSelector & resourceLabelSelector can be used together. If they are then both conditions must be met for the configmap or secret to be eligible to trigger reload events. (e.g. If a configMap matches resourceLabelSelector but namespaceSelector does not match the namespace the configmap is in, it will be ignored).
  • Only one of ignoreConfigMaps or ignoreSecrets can be set at a time; attempting to set both will cause an error during Helm template compilation
  • Reloading of OpenShift (DeploymentConfig) and/or Argo Rollouts has to be enabled explicitly, because it might not always be possible to use it on a cluster with restricted permissions
  • isOpenShift: Recent versions of OpenShift (tested on 4.13.3) require the specified user to be in a UID range which is dynamically assigned by the namespace. The solution is to unset the runAsUser variable via deployment.securityContext.runAsUser=null and let OpenShift assign it at install time
  • reloadOnCreate controls how Reloader handles secrets being added to the cache for the first time. If reloadOnCreate is set to true:
    1. Configmaps/secrets being added to the cache will cause Reloader to perform a rolling update of the associated workload
    2. When applications are deployed for the first time, Reloader will perform a rolling update of the associated workload
    3. If you are running Reloader in HA mode all workloads will have a rolling update performed when a new leader is elected
  • serviceMonitor will be removed in future releases of Reloader in favour of Pod monitor
  • If reloadOnCreate is set to false:
    1. Updates to configmaps/secrets that occur while there is no leader will not be picked up by the new leader until a subsequent update of the configmap/secret occurs
    2. In the worst case the window in which there can be no leader is 15s as this is the LeaseDuration
  • By default, reloadOnCreate and syncAfterRestart are both set to false; both need to be enabled explicitly (see the sketch after this list)
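Putting some of these remarks together, an HA installation might be configured like this (a sketch; the parameter names are taken from the tables above):

helm install reloader stakater/reloader \
  --set reloader.enableHA=true \
  --set reloader.deployment.replicas=2 \
  --set reloader.reloadOnCreate=true   # with reloadOnCreate, a new leader election rolls all watched workloads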

Help

Documentation

You can find more documentation here

Have a question?

File a GitHub issue.

Talk to us on Slack

Join us on Slack to discuss Reloader

Join Slack Chat

Contributing

Bug Reports & Feature Requests

Please use the issue tracker to report any bugs or file feature requests.

Developing

  1. Deploy Reloader.
  2. Run okteto up to activate your development container.
  3. make build
  4. ./Reloader

PRs are welcome. In general, we follow the "fork-and-pull" Git workflow.

  1. Fork the repo on GitHub
  2. Clone the project to your own machine
  3. Commit changes to your own branch
  4. Push your work back up to your fork
  5. Submit a Pull request so that we can review your changes

NOTE: Be sure to merge the latest from "upstream" before making a pull request!

Changelog

View our closed Pull Requests.

License

Apache2 © Stakater

About

Reloader is maintained by Stakater. Like it? Please let us know at [email protected]

See our other projects or contact us in case of professional services and queries on [email protected]

Acknowledgements

reloader's People

Contributors

ahmedwaleedmalik, ahsan-storm, alexconlin, aliartiza75, avihuly, bnallapeta, ctrought, d3adb5, daniel-butler-irl, faizanahmad055, gciria, hanzala1234, hussnain612, itaispiegel, jkroepke, kahootali, karl-johan-grahn, katainaka0503, muneebaijaz, patrickspies, rasheedamir, renovate[bot], sheryarbutt, stakater-user, talha0324, tanalam2411, tete17, usamaahmadkhan, vladlosev, waseem-h


reloader's Issues

Remove obsolete(?) file build/package/reloader

The file was added in release 0.0.1 (067f09b), and it does not seem to be used anywhere. At least I was able to build the docker images and deploy to a local kubernetes cluster even though I deleted the file.

The presence of a binary ELF file in the repo freaks out our security people to no end.

Reloader losing connection / kubernetes api connection refused / http2 GOAWAY

Hi there!
First up, thanks for this great tool!

We are experiencing some issues with reloader on the long run. We used the default config:

---
# Source: reloader/templates/role.yaml


---
# Source: reloader/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: reloader
    chart: "reloader-v0.0.38"
    release: "RELEASE-NAME"
    heritage: "Tiller"
    group: com.stakater.platform
    provider: stakater
    version: v0.0.38
    
  name: reloader
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: reloader
      release: "RELEASE-NAME"
  template:
    metadata:
      labels:
        app: reloader
        chart: "reloader-v0.0.38"
        release: "RELEASE-NAME"
        heritage: "Tiller"
        group: com.stakater.platform
        provider: stakater
        version: v0.0.38
        
    spec:
      containers:
      - env:
        image: "stakater/reloader:v0.0.38"
        imagePullPolicy: IfNotPresent
        name: reloader
        args:
      serviceAccountName: reloader

---
# Source: reloader/templates/clusterrole.yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: reloader
    chart: "reloader-v0.0.38"
    release: "RELEASE-NAME"
    heritage: "Tiller"
  name: reloader-role
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - deployments
      - daemonsets
    verbs:
      - list
      - get
      - update
      - patch

---
# Source: reloader/templates/rolebinding.yaml


---
# Source: reloader/templates/clusterrolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: reloader
    chart: "reloader-v0.0.38"
    release: "RELEASE-NAME"
    heritage: "Tiller"
  name: reloader-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reloader-role
subjects:
  - kind: ServiceAccount
    name: reloader
    namespace: default

---
# Source: reloader/templates/serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: reloader
    chart: "reloader-v0.0.38"
    release: "RELEASE-NAME"
    heritage: "Tiller"
  name: reloader

This works fine for the first few days. However, after a few days, changes to CMs and Secrets aren't detected and we see the following in the logs:

→ kubectl logs -f reloader-795497cbbc-msg47
time="2019-10-09T14:10:23Z" level=info msg="Environment: Kubernetes"
time="2019-10-09T14:10:23Z" level=info msg="Starting Reloader"
time="2019-10-09T14:10:23Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2019-10-09T14:10:23Z" level=info msg="Starting Controller to watch resource type: configMaps"
time="2019-10-09T14:10:23Z" level=info msg="Starting Controller to watch resource type: secrets"
E1017 03:54:22.864403       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=6329, ErrCode=NO_ERROR, debug=""
E1017 03:54:22.883831       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=6329, ErrCode=NO_ERROR, debug=""
E1017 03:54:22.939076       1 reflector.go:322] github.com/stakater/Reloader/internal/pkg/controller/controller.go:77: Failed to watch *v1.ConfigMap: Get https://10.11.32.1:443/api/v1/configmaps?resourceVersion=33794470&timeoutSeconds=324&watch=true: dial tcp 10.11.32.1:443: connect: connection refused

The last entry keeps repeating.
I checked, we can reach the Kubernetes API at https://10.11.32.1:443 from within the container via curl.

Any clues what that's about?

Cheers from Hamburg

Helm - broken chart and multiple issues

Hi,

The current helm chart has multiple issues:

  • the reloader level in the values file completely breaks backwards compatibility
  • some resources use reloader-name instead of reloader-fullname
  • missing nodeSelector on the deployment
  • values specify a default name for the service account instead of leaving it empty, to fallback on reloader-fullname

As a last note, it would be great if the reloader chart would be moved to Helm Hub, in order to expedite reviews and updates.

Race condition for kube-system namespace

Describe the bug
Logs are full of changes observed in the kube-system namespace; for cluster deployments, in this case, cluster-autoscaler-status. From what I observed this adds quite a few GB per month just for logs, and it slightly delays updates to the resources we actually want to watch.

To Reproduce
Deploy Reloader to Google Kubernetes Engine

Expected behavior
I think Reloader should not watch certain namespaces; there should be an option to ignore them. I can work on it, just need your input. :)


Additional context

Deployed on managed Kubernetes cluster.

Only reload pods on specific environment variables

Hi :) Been using reloader for a while and it works great, thanks.

Is there any way of setting reloader to watch on specific environment variables?

Our use case is that we have one configmap for a number of different pods, and don't want reloader reloading all pods when a value changes in the configmap which may not be related to that pod.

Currently in Helm we're using envFrom: configMapRef: x, and not specifying exact environment variables.

Check for pod annotations too

Hi

I often have trouble because I use charts that I would like to automatically reload, but they almost never allow configuring the annotations of the deployment / statefulset. However, they almost always allow editing the annotations of the pod. Thus it would be easier if Reloader would also check the annotations of the pod (when not found on the deployment / statefulset).

Do you see any objection to such a feature?

Correct documentation for in repo helm chart

Since this has transitioned to using an in-repo Helm chart, the Helm chart documentation should be checked and corrected. The current Helm chart in this repo seems to be built from several layers of templating, which makes it harder to understand and makes the correctness of this documentation all the more essential.

Reloader is not detecting any changes in config maps.

Hi,
This application is a perfect solution to the nightmare. Thank you for building this.

But the thing is I am not able to make it work.
I deployed using the helm chart with --set reloader.watchGlobally=false --namespace dev and without any args. I have Istio enabled in the NS, so I had to add sidecar.istio.io/inject: "false", else it was giving 172.17.0.1 connection refused.

But for whatever reason, Reloader is not checking for any changes in configmaps.

reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=info msg="Environment: Kubernetes"
reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=info msg="Starting Reloader"
reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=info msg="Starting Controller to watch resource type: configMaps"
reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=info msg="Starting Controller to watch resource type: secrets"

Add support for envFrom

Currently Reloader only supports env vars from a config map like this:

- name: SPECIAL_TYPE_KEY
  valueFrom:
    configMapKeyRef:
      name: special-config
      key: SPECIAL_TYPE

and not like this

envFrom:
  - configMapRef:
      name: special-config

Add support for this as soon as possible

Permission Issue

Hi,

I followed the documentation and added the annotations to detect changes for the configmaps like so

metadata:
  annotations:
    reloader.stakater.com/auto: "true"
    configmap.reloader.stakater.com/reload: "adapter-config"

But when I apply changes to my configmap the pod doesn't reload. I investigated the logs for the reloader and I get the following error:

Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:my-ns:reloader-reloader" cannot list resource "configmaps" in API group "" at the cluster scope

Has anyone faced a similar issue before?

Thanks,
Mark

Prometheus endpoint to monitor Reloader's metrics

To monitor Reloader, it would be good if it exposed a Prometheus endpoint from which metrics could be fetched.

In my use case, the following metrics are wanted:

  • How many times reloader tried to reload Deployments (or other resources)
  • How many times reloader succeeded to reload Deployments (or other resources)
  • How many times reloader failed to reload Deployments (or other resources)

wrong permissions on deployment config

Hi,

I deployed Reloader in OpenShift with the helm chart. It looks like it detects that it is running on OpenShift, because it tries to list deployment configs, but it can't since it doesn't have permission to do so.

time="2019-08-03T11:08:36Z" level=error msg="Failed to list deploymentConfigs deploymentconfigs.apps.openshift.io is forbidden: User \"system:serviceaccount:reloader:reloader\" cannot list resource \"deploymentconfigs\" in API group \"apps.openshift.io\" in the namespace \"openshift-kube-scheduler\""
time="2019-08-03T11:08:36Z" level=error msg="Failed to list deploymentConfigs deploymentconfigs.apps.openshift.io is forbidden: User \"system:serviceaccount:reloader:reloader\" cannot list resource \"deploymentconfigs\" in API group \"apps.openshift.io\" in the namespace \"openshift-kube-apiserver\""
time="2019-08-03T11:08:36Z" level=error msg="Failed to list deploymentConfigs deploymentconfigs.apps.openshift.io is forbidden: User \"system:serviceaccount:reloader:reloader\" cannot list resource \"deploymentconfigs\" in API group \"apps.openshift.io\" in the namespace \"openshift-kube-apiserver\""

Log is printing detected changes of configmaps that it's not to be monitored

The log is constantly printing 'Changes detected ...' for a configmap whose changes I don't want monitored.

I have only one deployment with the related annotation, so that Reloader monitors the changes of its configmap.
But the Reloader log keeps printing detected changes of another configmap that is constantly changing and whose changes I don't want monitored by Reloader.

time="2019-04-24T14:35:31Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:35:41Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:35:51Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:01Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:11Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:21Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:32Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:42Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:52Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:03Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:13Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:23Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:33Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:43Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:54Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:04Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:14Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:24Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:34Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:45Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:55Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:05Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:15Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:25Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:36Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:46Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:56Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:06Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:17Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:27Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:37Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:47Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:57Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:07Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:18Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:28Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:38Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:48Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:59Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:09Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:19Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:29Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:39Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:50Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:00Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:10Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:20Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:31Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:41Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:51Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:01Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:11Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:22Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:32Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:42Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:52Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:03Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:13Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:23Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:34Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:44Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:54Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:04Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:15Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:25Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:35Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:45Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:55Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:06Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:16Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:26Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:36Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:46Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:57Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:07Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:17Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:27Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:38Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:48Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:58Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:08Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:18Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:29Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:39Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:49Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:59Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:10Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:20Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:30Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:40Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:50Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:01Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:11Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:21Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:31Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:41Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:52Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:02Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:12Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:22Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:33Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:43Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:53Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:53:03Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"

With that, it's difficult for me to see the detected changes of the configmap that I really want to be monitored.

Add HA support in Reloader

Is it possible to have two replicas running at the same time? I mean, are they able to coordinate so that, when a configmap/secret changes, only one of them takes care of reloading the associated deployment/statefulset/etc.?

The use case is being able to deploy reloader with HA guarantees in a K8S cluster.

Thanks. This project is really useful.

Add support for reloading application running in a container instead of rolling update

Suggested by Marton Szucs in slack channel:

For applications that support updating configuration at runtime, like pgbouncer or nginx, it would be nice to just reload the application inside a running container instead of restarting the whole Pod using a rolling update.

This is how I do it manually:

  • Change configmap/secret and apply it to the cluster
  • Exec into the container that has mounted the configmap/secret as a file.
  • Check if the mounted file is updated inside the container
  • reload application with kill -SIGHUP 1 or nginx -s reload or some other application specific command.

Questions:

  • Will it be safe?
  • Look into how the nginx ingress controller does it right now; it never does a rolling update but does pick up new configs!

Errors in pod logs on startup - failed to list deployments

I installed the stable release with default settings on a Kubernetes cluster using Helm. Annotated my Deployments as per the instructions but I'm not seeing any rolling updates when I change a ConfigMap. When I checked the reloader pod logs I found this:

time="2019-12-13T15:46:02Z" level=info msg="Environment:Kubernetes"
time="2019-12-13T15:46:02Z" level=info msg="Starting Reloader"
time="2019-12-13T15:46:02Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2019-12-13T15:46:02Z" level=info msg="Starting Controller to watch resource type: secrets"
time="2019-12-13T15:46:02Z" level=info msg="Starting Controller to watch resource type: configMaps"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list deployments the server could not find the requested resource"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list daemonSets the server could not find the requested resource"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list statefulSets the server could not find the requested resource"

Then the last 3 lines just repeat periodically.

I'm wondering if its an RBAC issue, but the ClusterRole and ClusterRoleBinding seem to be there.

Any help would be greatly appreciated.

Disable monitoring changes to Secrets

I want to use Reloader, but don't want it to monitor Secrets, just ConfigMaps (as I don't want to give it access to read Secrets in my cluster). Seems like a pretty straightforward change - happy to open a PR.

Using reloader manifests with kustomize build prints error log.

We tried to install reloader by kustomization.
Then kustomize build . printed the following log.

2019/11/22 22:42:48 nil value at `env.valueFrom.configMapKeyRef.name` ignored in mutation attempt
2019/11/22 22:42:48 nil value at `env.valueFrom.secretKeyRef.name` ignored in mutation attempt

kustomize build . succeeded and the result is correct, so maybe these logs are just warnings.
Are these logs caused by Reloader's manifests? How can we suppress them?

Our settings

reloader/kustomization.yml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - github.com/stakater/Reloader//deployments/kubernetes?ref=v0.0.49

commonAnnotations:
  reloader.stakater.com/auto: "true"

namespace: ops

usage

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - ../reloader 

Create a doc which compares Reloader vs. k8s-trigger-controller

Create a separate README in doc directory of the repo which just talks about differences between k8s-trigger-controller & ConfigmapController; why did we create Reloader?

  • only support for deployments
  • what else?
  • do we have a more efficient way to do hash calculation?

doesn't seem to work on 1.12.6 EKS

version 0.0.27 does nothing; Reloader comes up and that's it.
It only shows:
time="2019-05-07T15:08:49Z" level=info msg="Starting Reloader"
time="2019-05-07T15:08:49Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2019-05-07T15:08:49Z" level=info msg="Starting Controller to watch resource type: secrets"
time="2019-05-07T15:08:49Z" level=info msg="Starting Controller to watch resource type: configMaps"

version 0.0.26 writes log messages such as "level=info msg="Changes detected in 'prometheus-alert-rules' of type 'CONFIGMAP' in namespace 'monitoring'""
but the pod doesn't restart.

Any ideas?

Helm chart

A public helm chart for Reloader is not available; I have to do helm repo add to add the helm chart. Can you please make your helm chart's YAML public?

Thanks

Coalescing changes in multiple configmaps

In our setup we have many services modelled as deployments, and each of them consumes settings from multiple configmaps and secrets (typically one for the "bigger picture" settings, and then one more specialized for the service itself).

We also roll-out changes in the kubernetes resources through our CI setup (Travis + https://github.com/Collaborne/kubernetes-bootstrap).

Together this leads to situations where multiple configmaps update at the same time, and Reloader seems to trigger multiple redeployments. As an idea: it could be nice to collect the updates for a deployment over a short time window and only trigger a single redeployment.

(For a somewhat unrelated reason I actually implemented our own reloader now with that built-in and moved away from reloader, so this is merely a "might be interesting for you guys to think about" feedback issue.)

Reloader overrides my dnsConfig

Helm chart version: 1.1.3

I have set up dnsConfig in my deployment to override the ndots parameter. But, when reloader rolls out a new release, the parameter value gets overridden.

My original deployment manifest - kgdepoyaml pushowl-backend-gunicorn

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: env-staging
    deployment.kubernetes.io/revision: "6"
  creationTimestamp: "2019-10-14T09:13:14Z"
  generation: 73
  labels:
    app: gunicorn
    env: staging
    release: pushowl-backend
  name: pushowl-backend-gunicorn
  namespace: pushowl-backend
  resourceVersion: "5683180"
  selfLink: /apis/extensions/v1beta1/namespaces/pushowl-backend/deployments/pushowl-backend-gunicorn
  uid: d9feff65-ee62-11e9-8b9b-02b54c8e79ec
spec:
  progressDeadlineSeconds: 600
  replicas: 4
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: gunicorn
      env: staging
      release: pushowl-backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/path: /
        prometheus.io/port: "9106"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: gunicorn
        env: staging
        release: pushowl-backend
    spec:
      containers:
      - command:
        - sh
        - bin/start_gunicorn.sh
        envFrom:
        - configMapRef:
            name: env-default
        - configMapRef:
            name: env-staging
        image: 684417159526.dkr.ecr.us-east-1.amazonaws.com/pushowl-backend:13100634024a9e92b8a014ec15d46bd5bfe575c7
        imagePullPolicy: IfNotPresent
        name: pushowl-backend-gunicorn
        ports:
        - containerPort: 8000
          name: gunicorn
          protocol: TCP
        - containerPort: 9106
          name: prometheus
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /prometheus
          name: prometheus-multiproc-dir
      dnsConfig:
        options:
        - name: ndots
          value: "1"
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: prometheus-multiproc-dir
status:
  availableReplicas: 4
  conditions:
  - lastTransitionTime: "2019-10-15T05:53:58Z"
    lastUpdateTime: "2019-10-15T05:53:58Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-10-14T09:13:14Z"
    lastUpdateTime: "2019-10-15T05:55:05Z"
    message: ReplicaSet "pushowl-backend-gunicorn-75b494bb47" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 73
  readyReplicas: 4
  replicas: 4
  updatedReplicas: 4

Then I change my configmap to change some value. kubectl edit configmap/env-staging

My new deployment then becomes - kgdepoyaml pushowl-backend-gunicorn

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: env-staging
    deployment.kubernetes.io/revision: "7"
  creationTimestamp: "2019-10-14T09:13:14Z"
  generation: 75
  labels:
    app: gunicorn
    env: staging
    release: pushowl-backend
  name: pushowl-backend-gunicorn
  namespace: pushowl-backend
  resourceVersion: "5684388"
  selfLink: /apis/extensions/v1beta1/namespaces/pushowl-backend/deployments/pushowl-backend-gunicorn
  uid: d9feff65-ee62-11e9-8b9b-02b54c8e79ec
spec:
  progressDeadlineSeconds: 600
  replicas: 7
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: gunicorn
      env: staging
      release: pushowl-backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/path: /
        prometheus.io/port: "9106"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: gunicorn
        env: staging
        release: pushowl-backend
    spec:
      containers:
      - command:
        - sh
        - bin/start_gunicorn.sh
        env:
        - name: STAKATER_ENV_STAGING_CONFIGMAP
          value: 45a547270324d5feded44fe0938dfbdfbbf6250e
        envFrom:
        - configMapRef:
            name: env-default
        - configMapRef:
            name: env-staging
        image: 684417159526.dkr.ecr.us-east-1.amazonaws.com/pushowl-backend:13100634024a9e92b8a014ec15d46bd5bfe575c7
        imagePullPolicy: IfNotPresent
        name: pushowl-backend-gunicorn
        ports:
        - containerPort: 8000
          name: gunicorn
          protocol: TCP
        - containerPort: 9106
          name: prometheus
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /prometheus
          name: prometheus-multiproc-dir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: prometheus-multiproc-dir
status:
  availableReplicas: 4
  conditions:
  - lastTransitionTime: "2019-10-14T09:13:14Z"
    lastUpdateTime: "2019-10-15T05:59:20Z"
    message: ReplicaSet "pushowl-backend-gunicorn-66cc987457" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2019-10-15T05:59:23Z"
    lastUpdateTime: "2019-10-15T05:59:23Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  observedGeneration: 75
  readyReplicas: 4
  replicas: 7
  unavailableReplicas: 3
  updatedReplicas: 7

Please note that the dnsConfig section is gone in the new deployment.

Direct installation into k8s not working (since 0.39)

Hi,

I tried to install per instructions today (v0.40):

kubectl  apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

but this results in an error:

Error from server (Invalid): error when creating "https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml": Deployment.apps "RELEASE-NAME-reloader" is invalid: [metadata.name: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), spec.template.spec.containers[0].name: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name',  or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?'), spec.template.spec.serviceAccountName: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')]
    default: Error from server (Invalid): error when creating "https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml": ClusterRoleBinding.rbac.authorization.k8s.io "RELEASE-NAME-reloader-role-binding" is invalid: subjects[0].name: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Error from server (Invalid): error when creating "https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml": ServiceAccount "RELEASE-NAME-reloader" is invalid: metadata.name: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

I think this comes from changes after the 0.38 release, when the names in reloader.yaml were changed from reloader to RELEASE-NAME-reloader.
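A possible workaround (untested here, shown only as a sketch) is to substitute the placeholder before applying:

curl -s https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml \
  | sed 's/RELEASE-NAME/reloader/g' \
  | kubectl apply -f -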

Error in logs "spec.template.spec.dnsConfig: Required value: must provide `dnsConfig` when `dnsPolicy` is None"

Hello!
We are facing an issue with reloader running on k8s 1.13.5 on DigitalOcean:

time="2019-06-28T09:28:33Z" level=error msg="Update for 'web' of type 'Deployment' in namespace 'test' failed with error Deployment.apps \"web\" is invalid: spec.template.spec.dnsConfig: Required value: must provide `dnsConfig` when `dnsPolicy` is None"

While dnsConfig is provided in corresponding Deployment:

    spec:
      dnsConfig:
        nameservers:
          - 10.0.0.1
        searches:
          - svc.cluster.local
        options:
          - name: ndots
            value: "5"
      dnsPolicy: "None" 

Please advise.

Support for openshift deploymentconfig

First off, thanks for the great documentation.
I have a question:
Can support be extended to OpenShift DeploymentConfigs?
It would be great if this worked well for DeploymentConfigs.

[SUPPORT] reloader not working

Hello, I need your help.
I followed the documentation but it does not work.
I have a deployment with ConfigMaps, and the Reloader container is running.
When the ConfigMap is updated (the change is also visible in the deployment's volume path), Reloader does not reload the deployment to reflect the configuration changes.

All the setup I did was: in the deployment, I added the annotation
reloader.stakater.com/auto: "true"
and I have the Reloader container and my deployment's service container running.

Am I missing some other configuration? I am using the defaults.

Thanks

Documentation disambiguation regarding restart vs rolling upgrade

In the headline of the GitHub project page, one can read:

A Kubernetes controller to watch changes in ConfigMap and Secrets and then restart pods for Deployment, StatefulSet, DaemonSet and

However, further down the docs mention that a rolling upgrade is performed.

e.g.

Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.

Which one is the actual case?

crashing with "rpc error: code = Unknown desc = Error: No such container:"

I deployed Reloader in my minikube and am facing:

time="2018-12-17T07:51:10Z" level=info msg="Starting Controller"
time="2018-12-17T07:51:10Z" level=info msg="Starting Controller"
time="2018-12-17T07:52:35Z" level=info msg="Changes detected in myconfig of type 'CONFIGMAP' in namespace: default"
time="2018-12-17T07:52:35Z" level=info msg="Updated reloader of type Deployment in namespace: default "
rpc error: code = Unknown desc = Error: No such container: b92e64652c9555ec1cf70db9e8c1e0816269723798f297acee64bae2b64946a48c8590bf0681

and then it crashes.

Add support for whitelisting resourceNames in clusterrole.yaml

Hello,

Very useful project and helpful documentation! I was wondering if there would be interest in a change that would allow folks to explicitly limit Reloader's RBAC access to named resources. For example, in clusterrole.yaml...

rules:
  - apiGroups:
      - ""
    resources:
      - secrets
{{- if .Values.reloader.resourceNames }}
    resourceNames:
{{ toYaml .Values.reloader.resourceNames | indent 6 }}
{{- end }}
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "extensions"
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
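The rule could then be restricted to specific secrets with values like these (a hypothetical values.yaml fragment):

reloader:
  resourceNames:
    - my-app-secret
    - my-other-secret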

I'm not sure if it would make sense to do this for all of the resources or only secrets. For our use case we are mainly concerned about secrets. Thanks!

Charlie

Reloader broken with Kubernetes 1.16

After deploying Reloader on an on-prem Kubernetes 1.16 cluster, I've seen many log lines like:
Failed to list [kind]: the server could not find the requested resource

I've reproduced the issue on a more simple and controlled minikube setup.

Using minikube 1.3.1 (with Kubernetes 1.15), it works fine.
Using minikube 1.4.0 (with Kubernetes 1.16), it produces the same error.

Manifests are available there to reproduce if needed.

I assume it's linked to the API deprecations in Kubernetes 1.16: see https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

DaemonSet, Deployment, StatefulSet, and ReplicaSet (in the extensions/v1beta1 and apps/v1beta2 API groups)
Migrate to use the apps/v1 API, available since v1.9. Existing persisted data can be retrieved/updated via the apps/v1 API.

And I guess to fix it you would have to (at least) change the clients.KubernetesClient.ExtensionsV1beta1().[Kind] calls in https://github.com/stakater/Reloader/blob/master/internal/pkg/callbacks/rolling_upgrade.go, e.g. as sketched below.
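For illustration only, here is roughly what the change would look like (a sketch assuming a pre-0.18 client-go, where Get takes a name and options; the helper name getDeployment is hypothetical):

package callbacks

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getDeployment fetches a Deployment through the apps/v1 client instead of
// the deprecated extensions/v1beta1 client, which Kubernetes 1.16 removed.
func getDeployment(client kubernetes.Interface, namespace, name string) (*appsv1.Deployment, error) {
	// Before (fails on 1.16+):
	//   client.ExtensionsV1beta1().Deployments(namespace).Get(name, metav1.GetOptions{})
	return client.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
}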

Support for ~~openshift~~ init containers?

Is there any way currently to have this work with OpenShift's DeploymentConfigs?


Edit: I switched over to a regular Deployment and am still having issues getting Reloader to see it.

I'm curious whether you see anything in the deployment below that would prevent it from being picked up?

oc describe deployment manifest-config-map
Name:                   manifest-config-map
Namespace:              nx-lowest
CreationTimestamp:      Fri, 15 Feb 2019 18:56:09 -0800
Labels:                 app=review
                        name=manifest-config-map
Annotations:            configmap.reloader.stakater.com/reload=content-manifest
                        deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"configmap.reloader.stakater.com/reload":"content-manifest"},"labels":{...
Selector:               name=manifest-config-map
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  name=manifest-config-map
  Init Containers:
   process-index:
    Image:  busybox
    Port:   <none>
    Command:
      sh
      -c
      ...  

    Environment:  <none>
    Mounts:
      /content-manifest.json from content-manifest-volume (rw)
      /original/index.html from nginx-config-volume (rw)
      /processed from processed-index-volume (rw)
  Containers:
   manifest-config-map:
    Image:        quay.sys.com/digital/nx:lowest-manifest-config-map-latest
    Port:         8443/TCP
    Liveness:     http-get http://:8443/healthcheck delay=0s timeout=1s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8443/web/secure/consumer delay=10s timeout=2s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/nginx/nginx.conf from nginx-config-volume (rw)
      /usr/share/nginx/html/index.html from processed-index-volume (rw)
  Volumes:
   nginx-config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      manifest-config-map-nginx-conf
    Optional:  false
   content-manifest-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      content-manifest
    Optional:  false
   processed-index-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  manifest-config-map-7dbc6fcc4c (3/3 replicas created)
NewReplicaSet:   <none>
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  11m   deployment-controller  Scaled up replica set manifest-config-map-7dbc6fcc4c to 3

Reloader With SealedSecrets

Hi, would it be possible to make Reloader restart pods when a SealedSecret changes? As of now it works great with ConfigMap and Secret changes, but we want it to restart pods on an update of a SealedSecret.

Getting an Issue While Using Reloader with a ConfigMap

Hi Team,
I used Reloader for a ConfigMap in my deployment file, and as per the documentation, when we make any change in the ConfigMap, the new values should be reflected in the running pods. But we are not seeing that change in the pods.
Here is my deployment file:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
  annotations:
    configmap.reloader.stakater.com/reload: "nginx-configmap"
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata, as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-config1
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
      volumes:
        - name: nginx-config1
          configMap:
            name: nginx-configmap
Here is the ConfigMap:
apiVersion: v1
data:
  default.conf: |
    upstream backend {
        server 172.27.15.8:8081;
        server 192.0.0.1 backup;
    }
    server {
        location / {
            proxy_pass http://backend;
        }
    }
kind: ConfigMap
metadata:
  creationTimestamp: 2019-01-18T05:33:31Z
  name: nginx-configmap
  namespace: default
  resourceVersion: "9436168"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-configmap
  uid: 971e503d-1ae2-11e9-9773-a08cfdc9b1ab


Please help me with this.
