Making your Helm Chart observable for Prometheus

In this blog post, I walk you through the various steps required to make an existing Helm chart observable by Prometheus. I explain concepts like the ServiceMonitor and the PodMonitor that let Prometheus dynamically scrape your application, and how you can add them to your Helm chart.

Prerequisites

This tutorial assumes that your application already exposes metrics on a container port named http-metrics, as in the following example:

apiVersion: v1
kind: Pod
metadata:
  name: your-pod-with-metrics
spec:
  containers:
    - name: your-container-with-metrics
      image: some-registry.com/your-repository
      ports:
        - name: http-metrics
          containerPort: 9095
          protocol: TCP

What is Prometheus?

Prometheus is a monitoring solution and a graduated project of the Cloud Native Computing Foundation (CNCF). It is used to collect metrics published by an application, a network device, or basically anything that exposes metrics in the Prometheus format. If you want to learn more about the format used to define metrics, read the official documentation here.

docs/exposition_formats.md at main · prometheus/docs
Prometheus documentation: content and static site generator - docs/exposition_formats.md at main · prometheus/docs
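
To give you a rough idea of what this format looks like, here is a small, hypothetical counter in the Prometheus text exposition format (the metric name and labels are made up purely for illustration):

# HELP http_requests_total The total number of HTTP requests handled by the application.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="get",code="500"} 3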

Prometheus configuration

In order for Prometheus to know where to look for metrics, you need to define a list of targets in the Prometheus configuration. There are basically two approaches to defining this list of targets: static and dynamic. Since we want to dynamically add the application that is installed with our Helm chart to the list of targets whenever it is installed, we will only take a look at the dynamic approach.
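
To illustrate the difference, here is a minimal sketch of both approaches in a plain Prometheus configuration file (the job names and the static hostname are just placeholders):

scrape_configs:
  # Static approach: a fixed, hand-maintained list of targets
  - job_name: "static-example"
    static_configs:
      - targets: ["my-app.example.com:9095"]
  # Dynamic approach: discover targets through the Kubernetes API
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod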

Scraping Targets in Kubernetes

In the following sections, I'll show you the two different ways to dynamically define scrape targets for Prometheus inside a Helm chart:

  1. using annotations on a Service or Pod object
  2. using the Custom Resources ServiceMonitor and PodMonitor

Approach 2 requires that you have the Prometheus Operator installed along with the corresponding Custom Resource Definitions. This can easily be achieved with the official kube-prometheus-stack Helm chart, which you can find here:

kube-prometheus-stack 46.6.0 · prometheus/prometheus-community
kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

Using annotations

The easiest way to get Prometheus to scrape the metrics provided by your application is to add annotations to the Service or Pod object of your application. The annotations of interest in this case are the following:

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9095"

A sample values.yaml file that already contains these annotations might look like this:

metrics:
  service:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/path: "/metrics"
      prometheus.io/port: "{{ .Values.metrics.service.port }}"
    port: 9095
    sessionAffinity: None

And the corresponding Helm Template for the Service:

apiVersion: v1
kind: Service
metadata:
  {{- if .Values.metrics.service.annotations }}
  annotations:
    {{- tpl (toYaml .Values.metrics.service.annotations) . | nindent 4 }}
  {{- end }}
  name: {{ include "a-helm-chart.fullname" . }}-metrics
  labels:
    {{- include "a-helm-chart.labels" . | nindent 4 }}
spec:
  type: ClusterIP
  ports:
    - port: {{ .Values.metrics.service.port }}
      targetPort: http-metrics
      protocol: TCP
      name: http-metrics
  selector:
    {{- include "a-helm-chart.selectorLabels" . | nindent 4 }}

As you can see, I used the tpl function provided by Helm to render the annotations. This way, you can define them generically in the values.yaml and reference other values, such as {{ .Values.metrics.service.port }}.
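
Keep in mind that the prometheus.io/* annotations are only a convention, not a built-in Prometheus feature: Prometheus honors them only if its scrape configuration contains relabel rules that interpret them. The community prometheus Helm chart ships with such rules by default, while an operator-managed Prometheus typically does not. As a rough sketch, such a scrape job for Services could look like this (following the common convention, not taken from any particular chart):

- job_name: "kubernetes-service-endpoints"
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    # Only keep targets whose Service carries the annotation prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # Scrape the path from prometheus.io/path instead of the default /metrics
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Rewrite the target address to use the port from prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__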

ServiceMonitor

The ServiceMonitor is a Custom Resource that, as the name suggests, selects Service resources in your cluster and tells the Prometheus Operator which endpoints to scrape. These Services must in turn select the Pods that expose the metrics.

Unlike the few options available when using annotations, a ServiceMonitor gives us access to all Prometheus scrape settings, such as relabeling, scrape intervals, and much more.

The values.yaml must therefore also provide many more options for the user of our Helm chart:

metrics:
  serviceMonitor:
    # -- Additional labels that can be used so that the ServiceMonitor will be discovered by Prometheus
    additionalLabels: {}
    # -- If true, the labels of the scraped metrics take precedence over conflicting labels added by Prometheus
    honorLabels: false
    # -- Interval at which metrics should be scraped.
    interval: "30s"
    # -- The name of the label on the target service to use as the job name in Prometheus
    jobLabel: ""
    # -- MetricRelabelConfigs to apply to samples before ingestion
    metricRelabelings: {}
    # -- Namespace for the ServiceMonitor Resource (defaults to the Release Namespace)
    namespace: ""
    # -- The path used by Prometheus to scrape metrics
    path: "/metrics"
    # -- RelabelConfigs to apply to the scrape targets before scraping
    relabelings: {}
    # -- Timeout after which the scrape is ended
    scrapeTimeout: ""
    # -- Additional labels used to select the target Service
    selector: {}

And the corresponding Helm Template for the ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ template "a-helm-chart.fullname" . }}
  namespace: {{ .Values.metrics.serviceMonitor.namespace | default .Release.Namespace | quote }}
  labels:
    {{- include "a-helm-chart.labels" . | nindent 4 }}
    {{- with .Values.metrics.serviceMonitor.additionalLabels }}
      {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  endpoints:
    - port: http-metrics
      {{- if .Values.metrics.serviceMonitor.honorLabels }}
      honorLabels: {{ .Values.metrics.serviceMonitor.honorLabels }}
      {{- end }}
      {{- if .Values.metrics.serviceMonitor.interval }}
      interval: {{ .Values.metrics.serviceMonitor.interval | quote }}
      {{- end }}
      {{- with .Values.metrics.serviceMonitor.metricRelabelings }}
      metricRelabelings:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      path: {{ .Values.metrics.serviceMonitor.path | quote }}
      {{- with .Values.metrics.serviceMonitor.relabelings }}
      relabelings:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- if .Values.metrics.serviceMonitor.scrapeTimeout }}
      scrapeTimeout: {{ .Values.metrics.serviceMonitor.scrapeTimeout | quote }}
      {{- end }}
  {{- if .Values.metrics.serviceMonitor.jobLabel }}
  jobLabel: {{ .Values.metrics.serviceMonitor.jobLabel | quote }}
  {{- end }}
  namespaceSelector:
    matchNames:
      - {{ .Release.Namespace | quote }}
  selector:
    matchLabels:
      {{- include "a-helm-chart.selectorLabels" . | nindent 6 }}
      {{- with .Values.metrics.serviceMonitor.selector }}
      {{- toYaml . | nindent 6 }}
      {{- end }}
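
One common stumbling block: an operator-managed Prometheus only picks up ServiceMonitors (and PodMonitors) that match the serviceMonitorSelector and podMonitorSelector defined in its Prometheus resource. This is exactly what the additionalLabels value above is for. As a rough sketch, the Prometheus resource in your cluster might look like this (the release: monitoring label is just an assumption; kube-prometheus-stack selects on its own release name by default):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: monitoring-prometheus
spec:
  # Only ServiceMonitors and PodMonitors carrying this label are discovered
  serviceMonitorSelector:
    matchLabels:
      release: monitoring
  podMonitorSelector:
    matchLabels:
      release: monitoring

In that case, you would set metrics.serviceMonitor.additionalLabels to release: monitoring so that the ServiceMonitor created by your chart is actually discovered.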

PodMonitor

The last option we have is to use a PodMonitor instead of a ServiceMonitor. With this approach, the Prometheus Operator discovers and scrapes your Pods directly, so no Service in front of them is required; the PodMonitor selects the Pods by their labels. This is handy for workloads that do not expose the metrics port through a Service at all.

Besides that, the values.yaml and the Helm Template for this resource look almost identical to the definition of the ServiceMonitor:

metrics:
  podMonitor:
    # -- Additional labels that can be used so that the PodMonitor will be discovered by Prometheus
    additionalLabels: {}
    # -- If true, the labels of the scraped metrics take precedence over conflicting labels added by Prometheus
    honorLabels: false
    # -- Interval at which metrics should be scraped.
    interval: "30s"
    # -- The name of the label on the target pod to use as the job name in Prometheus
    jobLabel: ""
    # -- MetricRelabelConfigs to apply to samples before ingestion
    metricRelabelings: {}
    # -- Namespace for the PodMonitor Resource (defaults to the Release Namespace)
    namespace: ""
    # -- The path used by Prometheus to scrape metrics
    path: "/metrics"
    # -- RelabelConfigs to apply to the scrape targets before scraping
    relabelings: {}
    # -- Timeout after which the scrape is ended
    scrapeTimeout: ""
    # -- Additional labels used to select the target Pods
    selector: {}

And the corresponding Helm Template for the PodMonitor:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: {{ template "a-helm-chart.fullname" . }}
  namespace: {{ .Values.metrics.podMonitor.namespace | default .Release.Namespace | quote }}
  labels:
    {{- include "a-helm-chart.labels" . | nindent 4 }}
    {{- with .Values.metrics.podMonitor.additionalLabels }}
      {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  podMetricsEndpoints:
    - port: http-metrics
      {{- if .Values.metrics.podMonitor.honorLabels }}
      honorLabels: {{ .Values.metrics.podMonitor.honorLabels }}
      {{- end }}
      {{- if .Values.metrics.podMonitor.interval }}
      interval: {{ .Values.metrics.podMonitor.interval | quote }}
      {{- end }}
      {{- with .Values.metrics.podMonitor.metricRelabelings }}
      metricRelabelings:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      path: {{ .Values.metrics.podMonitor.path | quote }}
      {{- with .Values.metrics.podMonitor.relabelings }}
      relabelings:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- if .Values.metrics.podMonitor.scrapeTimeout }}
      scrapeTimeout: {{ .Values.metrics.podMonitor.scrapeTimeout | quote }}
      {{- end }}
  {{- if .Values.metrics.podMonitor.jobLabel }}
  jobLabel: {{ .Values.metrics.podMonitor.jobLabel | quote }}
  {{- end }}
  namespaceSelector:
    matchNames:
      - {{ .Release.Namespace | quote }}
  selector:
    matchLabels:
      {{- include "a-helm-chart.selectorLabels" . | nindent 6 }}
      {{- with .Values.metrics.podMonitor.selector }}
      {{- toYaml . | nindent 6 }}
      {{- end }}

Implementation

A sample implementation of these concepts can be found on my GitHub repository helm-templates:

helm-templates/charts/prometheus-integration at master · christianhuth/helm-templates
A collection of different Helm Templates for developing Helm charts - christianhuth/helm-templates

This implementation also includes an additional example that shows you how to define a PrometheusRule, another CRD provided by the Prometheus Operator that can be used to specify alerting and recording rules in a declarative way.
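
A minimal sketch of what such a PrometheusRule can look like (the alert name, expression, and labels below are placeholders, not the ones used in the repository):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: a-helm-chart-alerts
spec:
  groups:
    - name: a-helm-chart.rules
      rules:
        # Fire if the metrics endpoint has been unreachable for five minutes
        - alert: MetricsEndpointDown
          expr: up{job="a-helm-chart-metrics"} == 0
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: The metrics endpoint of a-helm-chart has been down for 5 minutes.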

Helm courses

If you want to learn even more about Helm, check out my live instructor-led courses for Helm users and developers:

Helm Charts - Application and Installation
Administer and deploy applications in Kubernetes using Helm.
Best Practices | Development of own Helm Charts
Learn to develop and publish your own Helm Charts.