Helm, Kubernetes, and the immutable ConfigMap… a design pattern

Let's say you've got a program that doesn't reload when its config changes. You introduce it to Kubernetes via Helm. You use a ConfigMap. All is good. Later you run a helm upgrade and… nothing happens. You are sad. You roll up your sleeves, write some code using inotify(), and now the program restarts as soon as the config changes. You are happy. Until the day you make a small typo in the config, run helm upgrade, and watch the continuous suicide of your Pods. Now you are sad again. If only there were a better way.

I present to you the better way. And it's simple. It solves both problems at once.

Conceptually it's pretty simple: you embed a hash of the ConfigMap's contents in its name. Now, when the contents change, the name changes, the Deployment's pod template changes, and Kubernetes starts replacing the Pods. The change ripples through: each new Pod must come up and pass its readiness check before an old one is killed. So if your new config has an error, the rollout stalls while the old Pods keep serving, rather than taking the service down. Boom!
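
To make that concrete: for a chart named X and a release named myapp, the rendered name might come out something like the following (the fullname and the hash values here are illustrative assumptions, not real output):

# Hypothetical rendered metadata; each 8-character field is a truncated
# sha256 of one of the three config files.
metadata:
  name: myapp-X-1c0ffee2-8badf00d-5eaf00d5

Change one byte of one file and one of those fields changes, which renames the ConfigMap and, through the volume reference in the Deployment, changes the pod template.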

So here's a subset of an example. You're welcome.

---
apiVersion: v1
kind: ConfigMap
metadata:
  # Embed an 8-character truncated sha256 of each config file in the name,
  # so any change to the contents yields a brand-new ConfigMap. The name
  # must render on a single line: YAML has no backslash line continuation.
  name: {{ include "X.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
  labels:
    app.kubernetes.io/name: {{ include "X.name" . }}
    helm.sh/chart: {{ include "X.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
  server.json: {{ tpl .Values.config . | quote }}
  application.ini: {{ tpl .Values.application . | quote }}
  logback.xml: {{ tpl .Values.logging . | quote }}

---
apiVersion: apps/v1
kind: Deployment
 ...
      - mountPath: /X/conf
        name: {{ include "X.fullname" . }}-config
  volumes:
    - name: {{ include "X.fullname" . }}-config
      configMap:
        # Must reference the hashed name above; when any file's hash changes,
        # the pod template changes and the Deployment rolls the Pods.
        name: {{ include "X.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
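
For this template to render, the chart's values.yaml has to supply each of these files as a string; they are piped through tpl, so they may themselves contain template directives. A minimal sketch, with hypothetical contents I've made up for illustration:

# values.yaml (hypothetical example contents)
config: |
  { "port": 8080, "host": "{{ .Release.Name }}.example.com" }
application: |
  [server]
  threads = 4
logging: |
  <configuration><root level="INFO"/></configuration>

Note that the hash is taken after tpl runs, so changing anything the string interpolates (the release name, in this sketch) also produces a new ConfigMap name and rolls the Pods, exactly as a direct edit to the file would.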