Alertmanager Setup on Kubernetes for Prometheus Monitoring

In this quick-start demo, we are going to set up Alertmanager on a Kubernetes cluster to handle Prometheus alerts. We will use Slack as the alert receiver.

What is Alertmanager?

Alertmanager handles alerts sent by the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations, such as email, PagerDuty, or Slack. It also takes care of silencing and inhibiting alerts.
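This dedup-and-group behaviour can be illustrated with a small sketch (plain Python for illustration only, not Alertmanager's actual implementation): alerts with identical label sets collapse into one, and the survivors are bucketed by a grouping label such as alertname.

```python
from collections import defaultdict

def dedup_and_group(alerts, group_by=("alertname",)):
    """Collapse alerts with identical label sets, then bucket by group_by labels."""
    # Deduplicate: two alerts with the same label set are the same alert.
    unique = {tuple(sorted(a.items())): a for a in alerts}.values()
    groups = defaultdict(list)
    for alert in unique:
        key = tuple(alert.get(label, "") for label in group_by)
        groups[key].append(alert)
    return dict(groups)

alerts = [
    {"alertname": "HighCPU", "instance": "node-1"},
    {"alertname": "HighCPU", "instance": "node-1"},  # duplicate, dropped
    {"alertname": "HighCPU", "instance": "node-2"},
    {"alertname": "DiskFull", "instance": "node-1"},
]
groups = dedup_and_group(alerts)
# Two groups: HighCPU (2 alerts) and DiskFull (1 alert), so one
# notification per group instead of four separate messages.
```

Grouping is why a burst of related alerts arrives in Slack as a single message rather than one message per firing rule.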

Alertmanager Setup on Kubernetes

Prerequisites:
  • A Kubernetes cluster (for information on how to deploy a GKE cluster, see this post)
  • The kubectl client to connect to the Kubernetes cluster
  • Admin privileges on the Kubernetes cluster
  • An up-and-running Prometheus server (for information on how to deploy a Prometheus server, see this post)
  • A Slack channel

Connect to GKE cluster

gcloud container clusters get-credentials demo-k8s-cluster

Copy the following YAML configuration into a file called alertmanager-config.yaml. Here we are using the monitoring namespace that we already created while deploying the Prometheus server.

apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: monitoring
  labels:
    k8s-app: alertmanager
data:
  alertmanager.yml: |
    global:
      resolve_timeout: 1m
      slack_api_url: 'https://hooks.slack.com/services/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'
    receivers:
    - name: 'slack-notifications'
      slack_configs:
      - channel: '#devops-counsel-alertmanager-demo'
        send_resolved: true
    route:
      group_interval: 5m
      group_wait: 10s
      receiver: 'slack-notifications'
      repeat_interval: 3h

The above ConfigMap gets mounted into the Alertmanager deployment's pods. The Slack receiver is configured to receive alerts from Alertmanager through an incoming webhook. When Alertmanager pushes Prometheus monitoring alerts, they will be sent to the #devops-counsel-alertmanager-demo channel.
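Under the hood, a Slack receiver simply POSTs JSON to the incoming-webhook URL. A minimal sketch of an equivalent request (the webhook URL and message text here are placeholders for illustration, not the exact notification format Alertmanager sends):

```python
import json
import urllib.request

# Placeholder webhook URL -- substitute your own Slack incoming webhook.
WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def slack_payload(channel, status, alertname, summary):
    """Build a simple Slack message body for an alert notification."""
    return json.dumps({
        "channel": channel,
        "text": f"[{status.upper()}] {alertname}: {summary}",
    }).encode()

body = slack_payload("#devops-counsel-alertmanager-demo",
                     "firing", "HighCPU", "CPU usage above 90%")
req = urllib.request.Request(
    WEBHOOK_URL, data=body,
    headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)  # uncomment to actually send the message
```

This is also a handy way to verify the webhook itself works before wiring it into Alertmanager.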

Apply the above configuration with a kubectl command like the one below.

cloudshell:~/prometheus$ k apply -f alertmanager-config.yaml
configmap/alertmanager-config created

Copy the below YAML configuration into a file called alertmanager.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: alertmanager
  namespace: monitoring
  labels:
    k8s-app: alertmanager
spec:
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: alertmanager
  template:
    metadata:
      labels:
        k8s-app: alertmanager
    spec:
      containers:
        - name: prometheus-alertmanager
          image: prom/alertmanager
          imagePullPolicy: Always
          args:
            - --config.file=/etc/config/alertmanager.yml
            - --storage.path=/data
            - --web.listen-address=:9093
            - --web.route-prefix=/
            - --log.level=debug
          env:
          - name: POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          ports:
            - containerPort: 9093
              name: http
            - containerPort: 6783
              name: mesh
          readinessProbe:
            httpGet:
              path: /#/status
              port: 9093
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: alertmanager-local-data
              mountPath: "/data"
              subPath: ""
          resources:
            limits:
              cpu: 10m
              memory: 50Mi
            requests:
              cpu: 10m
              memory: 50Mi
      volumes:
        - name: config-volume
          configMap:
            name: alertmanager-config
        - name: alertmanager-local-data
          emptyDir: {}

The above configuration deploys two Alertmanager pods. Apply it using the kubectl command below.

cloudshell:~/prometheus$ k apply -f alertmanager.yaml
deployment.apps/alertmanager created

Now we are going to create Services for the Alertmanager pods. One config creates a Service with a GCP external load balancer; the other creates a headless ClusterIP Service. Copy the below YAML configuration into a file called alertmanager-service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
  labels:
    k8s-app: alertmanager
spec:
  ports:
    - name: http
      port: 9093
      protocol: TCP
      targetPort: 9093
  selector:
    k8s-app: alertmanager
  type: "LoadBalancer"
---
apiVersion: v1
kind: Service
metadata:
  name: alertmanager-operated
  namespace: monitoring
  labels:
    k8s-app: alertmanager
spec:
  type: "ClusterIP"
  clusterIP: None
  selector:
    k8s-app: alertmanager
  ports:
    - name: mesh
      port: 6783
      protocol: TCP
      targetPort: 6783
    - name: http
      port: 9093
      protocol: TCP
      targetPort: 9093

Apply the config using the kubectl command below.

cloudshell:~/prometheus$ k apply -f alertmanager-service.yaml
service/alertmanager created
service/alertmanager-operated created

Now we can see the Alertmanager services on the GKE console.

[Screenshot: GKE console]

Click the Alertmanager external load balancer URL to access the Alertmanager console.

The Alertmanager console looks like the one below.

[Screenshot: Alertmanager console]

When Prometheus starts firing alerts like the ones below, they will be pushed to Alertmanager.

[Screenshot: Prometheus console]

In the screenshot below you can see the alerts Alertmanager received from Prometheus.

[Screenshot: Alerts on the Alertmanager console]

Finally, in the screenshot below you can see that the alert message reached the Slack channel through the incoming webhook.

[Screenshot: Alerts in the Slack channel]
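You can also fire a synthetic alert straight at Alertmanager to exercise the Slack pipeline end to end, without waiting for a real Prometheus rule to fire. The sketch below builds a request for Alertmanager's v2 API; the load-balancer address is a placeholder you would fill in with your own.

```python
import json
import urllib.request
from datetime import datetime, timezone

# Placeholder address -- use your LoadBalancer IP or a kubectl port-forward.
ALERTMANAGER_URL = "http://ALERTMANAGER_LB_IP:9093/api/v2/alerts"

# The v2 alerts endpoint accepts a JSON list of alerts.
test_alert = [{
    "labels": {"alertname": "TestAlert", "severity": "warning"},
    "annotations": {"summary": "Synthetic alert to verify Slack routing"},
    "startsAt": datetime.now(timezone.utc).isoformat(),
}]

body = json.dumps(test_alert).encode()
req = urllib.request.Request(
    ALERTMANAGER_URL, data=body,
    headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)  # uncomment once the address is filled in
```

If routing is correct, the synthetic alert should appear in the Slack channel after group_wait elapses.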

As per our Alertmanager configuration (repeat_interval: 3h), if an alert is not resolved within 3 hours, the alert will be sent to the Slack channel again.
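The resulting notification timeline can be sketched as a simple calculation (simplified: it assumes the alert group never resolves and no new alerts join it, whereas real Alertmanager also re-notifies on group changes after group_interval):

```python
def notification_times(group_wait, repeat_interval, horizon):
    """First notification after group_wait, then one every repeat_interval,
    for an alert group that stays firing for the whole horizon (seconds)."""
    times, t = [], group_wait
    while t <= horizon:
        times.append(t)
        t += repeat_interval
    return times

# group_wait=10s, repeat_interval=3h, observed over a 7-hour window
times = notification_times(10, 3 * 3600, 7 * 3600)
# Notifications at ~10s, ~3h, ~6h after the alert starts firing
```

So an unresolved alert produces a Slack message shortly after it fires and roughly every three hours thereafter.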

We can silence alerts from the Alertmanager console. To silence an alert, click the “Silence” button.

[Screenshot: Silencing an alert]

Then enter the duration (how long the alert should stay silenced), your name, and a comment, then click the “Create” button.

[Screenshot: Alertmanager silence creation]
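Silences can also be created programmatically through the same v2 API the “Silence” button uses. A hedged sketch (the address is a placeholder; the matchers, startsAt, endsAt, createdBy, and comment fields mirror what the console form submits):

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone

# Placeholder address -- use your LoadBalancer IP or a kubectl port-forward.
ALERTMANAGER_URL = "http://ALERTMANAGER_LB_IP:9093/api/v2/silences"

now = datetime.now(timezone.utc)
silence = {
    "matchers": [
        {"name": "alertname", "value": "HighCPU", "isRegex": False},
    ],
    "startsAt": now.isoformat(),
    "endsAt": (now + timedelta(hours=2)).isoformat(),  # silence for 2 hours
    "createdBy": "demo-user",
    "comment": "Planned maintenance",
}

body = json.dumps(silence).encode()
req = urllib.request.Request(
    ALERTMANAGER_URL, data=body,
    headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)  # uncomment once the address is filled in
```

This is useful for automating silences around planned maintenance windows instead of clicking through the console.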

Conclusion

In this quick-start demo we configured Alertmanager to handle Prometheus monitoring alerts on a GKE cluster. We used a Slack channel to receive alerts from the Alertmanager server. You can find more information about Alertmanager in the official documentation.

For more on Kubernetes Monitoring:

Monitoring Kubernetes Cluster with Prometheus

Prometheus Node Exporter Setup on Kubernetes

Grafana Setup for Prometheus Server on Kubernetes
