Deploying and Using Prometheus on Kubernetes (images to be added)


This guide assumes a Kubernetes cluster is already installed.
Versions used: Kubernetes 1.25 with kube-prometheus release-0.12.
The deployment below is based on Kube-Prometheus.
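
Before going further, it is worth confirming that the cluster version matches what kube-prometheus release-0.12 supports; a minimal check, assuming kubectl already points at the target cluster:

# Node versions should be 1.24/1.25 for kube-prometheus release-0.12
kubectl get nodes -o wide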

1. Architecture Overview

(architecture diagram to be added)

Prometheus Server: scrapes and stores time-series data, and provides querying plus management of alerting rules

Alertmanager: Prometheus Server sends alerts to Alertmanager, which routes them to the configured recipients or groups based on its routing configuration; it supports email, webhook, WeChat, DingTalk, SMS, and more

Grafana: used to visualize the collected data

Push Gateway: Prometheus normally pulls metrics, but short-lived jobs may finish before they are scraped and their data would be lost; the Push Gateway accepts pushed metrics so Prometheus can still collect them

ServiceMonitor: monitoring configuration; it selects Services and scrapes their /metrics endpoints

Exporter: collects metrics from systems that are not cloud native, e.g. host metrics via node_exporter and MySQL metrics via mysqld_exporter

PromQL: the query language used to query the collected data

Service Discovery: automatic discovery of scrape targets; common mechanisms include Kubernetes, Consul, Eureka, and file-based discovery
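
As a quick taste of PromQL, the queries below can be run once the stack from section 2 is up. This is only a sketch: it assumes the default kube-prometheus service name prometheus-k8s and uses a local port-forward.

# Expose the Prometheus UI locally
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
# Instant query via the HTTP API: per-node CPU usage over the last 5 minutes
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))'
# Instant query: number of scrape targets that are currently up
curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=sum(up)'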

2. Installation

# Check GitHub for the kube-prometheus release matching your Kubernetes version; 1.25 corresponds to release-0.12
git clone -b release-0.12 https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
# Some images may fail to pull; edit the corresponding files and point them at mirror image addresses
vi manifests/kubeStateMetrics-deployment.yaml
vi manifests/prometheusAdapter-deployment.yaml
# manifests/kubeStateMetrics-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.7.0
  name: kube-state-metrics
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/part-of: kube-prometheus
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: kube-state-metrics
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 2.7.0
    spec:
      automountServiceAccountToken: true
      containers:
        - args:
            - --host=127.0.0.1
            - --port=8081
            - --telemetry-host=127.0.0.1
            - --telemetry-port=8082
          # image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
          image: registry.cn-hangzhou.aliyuncs.com/ialso/kube-state-metrics:v2.7.0
          name: kube-state-metrics
          resources:
            limits:
              cpu: 100m
              memory: 250Mi
            requests:
              cpu: 10m
              memory: 190Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsUser: 65534
        - args:
            - --logtostderr
            - --secure-listen-address=:8443
            - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
            - --upstream=http://127.0.0.1:8081/
          image: quay.io/brancz/kube-rbac-proxy:v0.14.0
          name: kube-rbac-proxy-main
          ports:
            - containerPort: 8443
              name: https-main
          resources:
            limits:
              cpu: 40m
              memory: 40Mi
            requests:
              cpu: 20m
              memory: 20Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 65532
            runAsNonRoot: true
            runAsUser: 65532
        - args:
            - --logtostderr
            - --secure-listen-address=:9443
            - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
            - --upstream=http://127.0.0.1:8082/
          image: quay.io/brancz/kube-rbac-proxy:v0.14.0
          name: kube-rbac-proxy-self
          ports:
            - containerPort: 9443
              name: https-self
          resources:
            limits:
              cpu: 20m
              memory: 40Mi
            requests:
              cpu: 10m
              memory: 20Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 65532
            runAsNonRoot: true
            runAsUser: 65532
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: kube-state-metrics
# manifests/prometheusAdapter-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: metrics-adapter
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.10.0
  name: prometheus-adapter
  namespace: monitoring
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/component: metrics-adapter
      app.kubernetes.io/name: prometheus-adapter
      app.kubernetes.io/part-of: kube-prometheus
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/component: metrics-adapter
        app.kubernetes.io/name: prometheus-adapter
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 0.10.0
    spec:
      automountServiceAccountToken: true
      containers:
        - args:
            - --cert-dir=/var/run/serving-cert
            - --config=/etc/adapter/config.yaml
            - --logtostderr=true
            - --metrics-relist-interval=1m
            - --prometheus-url=http://prometheus-k8s.monitoring.svc:9090/
            - --secure-port=6443
            - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
          # image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.10.0
          image: registry.cn-hangzhou.aliyuncs.com/ialso/prometheus-adapter:v0.10.0
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            initialDelaySeconds: 30
            periodSeconds: 5
          name: prometheus-adapter
          ports:
            - containerPort: 6443
              name: https
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            initialDelaySeconds: 30
            periodSeconds: 5
          resources:
            limits:
              cpu: 250m
              memory: 180Mi
            requests:
              cpu: 102m
              memory: 180Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /tmp
              name: tmpfs
              readOnly: false
            - mountPath: /var/run/serving-cert
              name: volume-serving-cert
              readOnly: false
            - mountPath: /etc/adapter
              name: config
              readOnly: false
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: prometheus-adapter
      volumes:
        - emptyDir: {}
          name: tmpfs
        - emptyDir: {}
          name: volume-serving-cert
        - configMap:
            name: adapter-config
          name: config
kubectl apply --server-side -f manifests/setup
kubectl wait \
    --for condition=Established \
    --all CustomResourceDefinition \
    --namespace=monitoring
kubectl apply -f manifests/
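
The stack takes a few minutes to come up after the manifests are applied; a quick sanity check using the default kube-prometheus object names:

# Wait until all monitoring pods reach Running
kubectl get pods -n monitoring -w
# Services created by kube-prometheus (grafana, prometheus-k8s, alertmanager-main, ...)
kubectl get svc -n monitoring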

3. Configure External Access

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-grafana
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  # routing rules
  rules:
    - host: grafana.ialso.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 3000
    - host: alertmanager.ialso.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: alertmanager-main
                port:
                  number: 9093
    - host: prometheus.ialso.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus-k8s
                port:
                  number: 9090
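
The Ingress above still needs to be applied, and the three hostnames must resolve to the ingress controller. A minimal check, assuming the manifest is saved as monitoring-ingress.yaml, an ingress-nginx controller is installed, and the placeholder <INGRESS_IP> is replaced with the controller's address:

kubectl apply -f monitoring-ingress.yaml
kubectl get ingress -n monitoring
# Bypass DNS and test the grafana host rule directly against the controller
curl -I --resolve grafana.ialso.cn:80:<INGRESS_IP> http://grafana.ialso.cn/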

4. Configure Grafana

Dashboard library: https://grafana.com/grafana/dashboards/

(screenshot to be added)

Import a dashboard

(screenshot to be added)

Enter the dashboard ID

(screenshot to be added)

Configure and import

(screenshot to be added)
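
If the Ingress from section 3 is not in place yet, Grafana can also be reached through a port-forward; a sketch using the defaults shipped with kube-prometheus (initial credentials admin / admin, unless they have been changed):

kubectl -n monitoring port-forward svc/grafana 3000:3000
# then open http://localhost:3000 in a browser and log in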

5. Handling the ControllerManager Alert

# Confirm the target ServiceMonitor exists
kubectl get serviceMonitor -n monitoring kube-controller-manager
# Check which Service labels the ServiceMonitor selects
kubectl get serviceMonitor -n monitoring kube-controller-manager -o yaml
# Check whether the matching Service exists (in my case it does not)
kubectl get svc -n kube-system -l app.kubernetes.io/name=kube-controller-manager
# Find the port kube-controller-manager is listening on
netstat -lntp|grep "kube-controll"
# Create a Service & Endpoints for the target
vi cm-prometheus.yaml
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app.kubernetes.io/name: kube-controller-manager
  name: cm-prometheus
  namespace: kube-system
subsets:
  - addresses:
      - ip: 10.10.0.15
    ports:
      - name: https-metrics
        # on Kubernetes 1.25 kube-controller-manager serves metrics on the secure port 10257 (the insecure 10252 port has been removed)
        port: 10257
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: kube-controller-manager
  name: cm-prometheus
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
    - name: https-metrics
      port: 10257
      protocol: TCP
      targetPort: 10257
# Apply the manifest, then check again that the target Service exists
kubectl apply -f cm-prometheus.yaml
kubectl get svc -n kube-system -l app.kubernetes.io/name=kube-controller-manager

The kube-controller-manager target now shows up on the Prometheus targets page, but its state is DOWN, most likely because the controller manager only listens on 127.0.0.1 and cannot be scraped from outside the node.

# Edit the kube-controller-manager static pod manifest and set "- --bind-address=0.0.0.0"
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
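
The kubelet recreates the static pod automatically once the manifest is saved; a quick check on the control-plane node that the new bind address took effect (a sketch):

# The controller-manager pod restarts on its own
kubectl get pods -n kube-system | grep kube-controller-manager
# The secure metrics port should now listen on 0.0.0.0 instead of 127.0.0.1
netstat -lntp | grep 10257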

After the static pod restarts, the kube-controller-manager target in Prometheus should report UP.

6. Handling the Scheduler Alert

# Confirm the target ServiceMonitor exists
kubectl get serviceMonitor -n monitoring kube-scheduler
# Check which Service labels the ServiceMonitor selects
kubectl get serviceMonitor -n monitoring kube-scheduler -o yaml
# Check whether the matching Service exists (in my case it does not)
kubectl get svc -n kube-system -l app.kubernetes.io/name=kube-scheduler
# Find the port kube-scheduler is listening on
netstat -lntp|grep "kube-scheduler"
# Create a Service & Endpoints for the target
vi scheduler-prometheus.yaml
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app.kubernetes.io/name: kube-scheduler
  name: scheduler-prometheus
  namespace: kube-system
subsets:
  - addresses:
      - ip: 10.10.0.15
    ports:
      - name: https-metrics
        port: 10259
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: kube-scheduler
  name: scheduler-prometheus
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
    - name: https-metrics
      port: 10259
      protocol: TCP
      targetPort: 10259
# Apply the manifest, then check again that the target Service exists
kubectl apply -f scheduler-prometheus.yaml
kubectl get svc -n kube-system -l app.kubernetes.io/name=kube-scheduler

The kube-scheduler target now shows up on the Prometheus targets page, but its state is DOWN, most likely because the scheduler only listens on 127.0.0.1 and cannot be scraped from outside the node.

# Edit the kube-scheduler static pod manifest and set "- --bind-address=0.0.0.0"
vi /etc/kubernetes/manifests/kube-scheduler.yaml
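
As with the controller manager, the static pod restarts by itself; a quick check that kube-scheduler now listens on all interfaces (a sketch):

kubectl get pods -n kube-system | grep kube-scheduler
netstat -lntp | grep 10259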

After the static pod restarts, the kube-scheduler target in Prometheus should report UP.

7. Monitoring etcd

Configure the Service & Endpoints

# Locate the etcd certificate files
cat /etc/kubernetes/manifests/etcd.yaml
# Look for the --cert-file and --key-file flags
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--key-file=/etc/kubernetes/pki/etcd/server.key
# Try the metrics endpoint directly
curl --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key https://10.10.0.15:2379/metrics -k | tail -1
# List the existing ServiceMonitors
kubectl get ServiceMonitor -n monitoring
# Create the etcd Service & Endpoints
vi etcd-prometheus.yaml
kubectl apply -f etcd-prometheus.yaml
# Test again; 10.96.27.17 is the ClusterIP of the etcd-prometheus Service
curl --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key https://10.96.27.17:2379/metrics -k | tail -1
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app: etcd-prometheus
  name: etcd-prometheus
  namespace: kube-system
subsets:
  - addresses:
    - ip: 10.10.0.15
    ports:
      - name: https-metrics
        port: 2379
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: etcd-prometheus
  name: etcd-prometheus
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
    - name: https-metrics
      port: 2379
      protocol: TCP
      targetPort: 2379

Create a Secret with the etcd certificates for Prometheus

# Create a Secret from the etcd certificates
kubectl create secret generic etcd-ssl \
    --from-file=/etc/kubernetes/pki/etcd/ca.crt \
    --from-file=/etc/kubernetes/pki/etcd/server.crt \
    --from-file=/etc/kubernetes/pki/etcd/server.key \
    -n monitoring
# Mount it into Prometheus (the prometheus-k8s pods restart automatically after the change)
kubectl edit prometheus k8s -n monitoring
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  # server-managed fields (annotations, creationTimestamp, generation, resourceVersion, uid) omitted for brevity
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.41.0
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - apiVersion: v2
      name: alertmanager-main
      namespace: monitoring
      port: web
  enableFeatures: []
  evaluationInterval: 30s
  externalLabels: {}
  image: quay.io/prometheus/prometheus:v2.41.0
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      app.kubernetes.io/component: prometheus
      app.kubernetes.io/instance: k8s
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 2.41.0
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  probeNamespaceSelector: {}
  probeSelector: {}
  replicas: 2
  # Add the certificate Secret; the operator mounts it at /etc/prometheus/secrets/etcd-ssl/
  secrets:
    - etcd-ssl
  resources:
    requests:
      memory: 400Mi
  ruleNamespaceSelector: {}
  ruleSelector: {}
  scrapeInterval: 30s
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: 2.41.0
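
Once the prometheus-k8s pods have restarted, the secret should be visible inside the Prometheus containers; a quick check, using the default pod and container names created by kube-prometheus:

kubectl get secret etcd-ssl -n monitoring
# the operator mounts secrets listed in spec.secrets under /etc/prometheus/secrets/<secret-name>/
kubectl exec -n monitoring prometheus-k8s-0 -c prometheus -- ls /etc/prometheus/secrets/etcd-ssl/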

Create the ServiceMonitor

# Create the ServiceMonitor for etcd
vi etcd-servermonitor.yaml
kubectl apply -f etcd-servermonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: etcd
  namespace: monitoring
  labels:
    app: etcd
spec:
  jobLabel: k8s-app
  endpoints:
    - interval: 30s
      port: https-metrics  # must match Service.spec.ports.name
      scheme: https
      tlsConfig:
        # Secrets listed in the Prometheus spec are mounted under /etc/prometheus/secrets/<secret-name>/
        caFile: /etc/prometheus/secrets/etcd-ssl/ca.crt
        certFile: /etc/prometheus/secrets/etcd-ssl/server.crt
        keyFile: /etc/prometheus/secrets/etcd-ssl/server.key
        insecureSkipVerify: true # skip certificate verification
  selector:
    matchLabels:
      app: etcd-prometheus  # must match the Service's labels
  namespaceSelector:
    matchNames:
      - kube-system

The etcd target should now appear on the Prometheus targets page.
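
The same can be verified from the command line through the Prometheus HTTP API; a sketch, assuming the prometheus-k8s service is reachable via a local port-forward:

kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
# list the active scrape pools; the new ServiceMonitor shows up as serviceMonitor/monitoring/etcd/0
curl -s 'http://localhost:9090/api/v1/targets?state=active' | grep -o '"scrapePool":"[^"]*"' | sort -u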

(screenshot: Prometheus targets page, to be added)

Grafana dashboard (dashboard ID: 3070)

(screenshot: Grafana etcd dashboard, to be added)

8. Monitoring MySQL

This assumes MySQL is already installed in the cluster; MySQL 5.7 is used here.
Only a single instance is monitored; I am not yet sure how multi-instance monitoring should be set up, possibly one mysqld_exporter per instance?

Create a MySQL user for monitoring

create user 'exporter'@'%' identified by '123456';
grant process,replication client,select on *.* to 'exporter'@'%';

Configure mysql_exporter

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: mysql-exporter
  template:
    metadata:
      labels:
        k8s-app: mysql-exporter
    spec:
      containers:
        - name: mysql-exporter
          image: registry.cn-beijing.aliyuncs.com/dotbalo/mysqld-exporter
          env:
            - name: DATA_SOURCE_NAME
              # Format: username:password@(host-or-service.namespace:3306)/
              value: "exporter:123456@(mysql-master.mysql:3306)/"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9104
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: mysql-exporter
  name: mysql-exporter
  namespace: monitoring
spec:
  type: ClusterIP
  ports:
    # The port name must be "api" (it refers to port 9104); the ServiceMonitor below references it by this name
    - name: api
      protocol: TCP
      port: 9104
  selector:
    k8s-app: mysql-exporter
# 10.96.136.65 is the ClusterIP of the mysql-exporter Service
curl 10.96.136.65:9104/metrics
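
The Deployment and Service above still need to be applied; a sketch, assuming both are saved together as mysql-exporter.yaml:

kubectl apply -f mysql-exporter.yaml
kubectl get pods -n monitoring -l k8s-app=mysql-exporter
kubectl get svc -n monitoring mysql-exporter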

Configure the ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mysql-exporter
  namespace: monitoring
  labels:
    k8s-app: mysql-exporter
    namespace: monitoring
spec:
  jobLabel: k8s-app
  endpoints:
    - port: api
      interval: 30s
      scheme: http
  selector:
    matchLabels:
      k8s-app: mysql-exporter
  namespaceSelector:
    matchNames:
      - monitoring

The mysql-exporter target should now appear on the Prometheus targets page.

(screenshot: Prometheus targets page, to be added)

Grafana dashboard (dashboard ID: 7362)

(screenshot: Grafana MySQL dashboard, to be added)

If something does not work, troubleshoot it the same way as the ControllerManager and Scheduler sections above; if everything there looks fine, check the exporter logs for errors: kubectl logs -n monitoring mysql-exporter-6559759477-m8tqc

9. Alerting (Email)

https://prometheus.io/docs/alerting/latest/alertmanager/

https://github.com/prometheus/alertmanager/blob/main/doc/examples/simple.yml

  • global: global settings shared by all notifications, such as the email account and password, the SMTP server, WeChat credentials, etc.
  • templates: where custom notification templates are placed
  • route: alert routing; groups alerts and sends different groups to different receivers
  • inhibit_rules: alert inhibition, used to reduce the number of notifications and avoid alert storms
  • receivers: the alert receivers
# Edit the alerting configuration
vi kube-prometheus/manifests/alertmanager-secret.yaml
# Apply the new configuration
kubectl replace -f kube-prometheus/manifests/alertmanager-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.25.0
  name: alertmanager-main
  namespace: monitoring
stringData:
  alertmanager.yaml: |-
    # global: global settings, mainly the notification backends such as email, webhook, etc.
    global:
      # how long to wait before marking an alert resolved; default 5m
      resolve_timeout: 5m
      # email settings
      smtp_smarthost: 'smtp.qq.com:465'
      smtp_from: '2750955630@qq.com'
      smtp_auth_username: '2750955630@qq.com'
      smtp_auth_password: 'puwluaqcmkrdddge'
      smtp_require_tls: false

    # templates (no custom template is referenced yet; section 11 adjusts this path)
    templates:
      - '/usr/local/alertmanager/*.tmp'

    # routing
    route:
      # labels used to group alerts
      group_by: [ 'namespace', 'job', 'alertname' ]
      # how long to wait after a new alert group is created before sending the initial notification, to avoid alert storms
      group_wait: 30s
      # how long to wait before notifying about new alerts added to an already-notified group
      group_interval: 2m
      # how long to wait before re-sending a notification if the problem has not been resolved
      repeat_interval: 10m
      # default receiver for alerts not matched by a child route
      receiver: 'Default'

      # child routes
      routes:
        - receiver: 'email'
          match:
            alertname: "Watchdog"

    # receivers
    receivers:
      - name: 'Default'
        email_configs:
          # email address that receives the alerts
          - to: 'xumeng03@bilibili.com'
            # also notify when the alert is resolved
            send_resolved: true
      - name: 'email'
        email_configs:
          # email address that receives the alerts
          - to: '2750955630@qq.com'
            # also notify when the alert is resolved
            send_resolved: true

    # inhibition rules
    inhibit_rules:
      - source_matchers:
          - severity="critical"
        target_matchers:
          - severity=~"warning|info"
        equal:
          - namespace
          - alertname
      - source_matchers:
          - severity="warning"
        target_matchers:
          - severity="info"
        equal:
          - namespace
          - alertname
      - source_matchers:
          - alertname="InfoInhibitor"
        target_matchers:
          - severity="info"
        equal:
          - namespace
type: Opaque
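
To confirm that the new configuration has been picked up, the secret contents and the reloader logs can be inspected; a sketch using the default object names (the sidecar container name may differ between operator versions):

# decode the alertmanager.yaml key from the secret
kubectl get secret alertmanager-main -n monitoring -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d | head
# the config-reloader sidecar logs a reload when the new config is mounted
kubectl logs -n monitoring alertmanager-main-0 -c config-reloader | tail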

10. Alerting (Enterprise WeChat / WeCom)

The following information is needed:

# Corp ID: wwe86504f797d306ce
# Department ID: 4
# Application AgentId: 1000002
# Application Secret: FrAuzVnZvkmJdQcRiESKtBHsX8Xmq5LHEc-cn-xxxx

Also configure "Web authorization & JS-SDK" and the "trusted enterprise IPs" for the application in the WeCom admin console.

(screenshot: WeCom application settings, to be added)

If nginx is used to front the trusted domain, the following configuration serves the domain-verification file while leaving normal traffic unaffected.

server {
        # listen port
        listen 443 ssl;
        # server name
        server_name  ialso.cn;
        # certificate
        ssl_certificate      /etc/nginx/ssl/ialso.cn_bundle.pem;
        ssl_certificate_key  /etc/nginx/ssl/ialso.cn.key;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;
        root /etc/nginx/wechat; # the verification file (e.g. WW_verify_wEY0iTPFwKQAen0a.txt) is placed under /etc/nginx/wechat

        location / {
            # Request https://ialso.cn/WW_verify_wEY0iTPFwKQAen0a.txt --> try_files $uri --> /etc/nginx/wechat/WW_verify_wEY0iTPFwKQAen0a.txt --> verification file is served
            # Any other request https://ialso.cn/<some-uri> --> try_files $uri --> no matching file under the root --> falls back to @gateway --> proxy_pass http://ialso_index --> normal traffic keeps working
            try_files $uri @gateway;
        }

        # forward everything else to the ialso_kubernetes_ingress upstream
        location @gateway {
            proxy_pass http://ialso_index;
            # keep the original Host header when forwarding
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

With the above configured, verify that a message can be sent. (The helper script below is written for Python 2 — it uses urllib2 — and needs the simplejson package installed.)

python wechat.py Warning "warning message"
#!/bin/python
# wechat.py
import urllib,urllib2
import json
import sys
import simplejson

reload(sys)
sys.setdefaultencoding('utf-8')

def gettoken(corpid,corpsecret):
    gettoken_url = 'https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=' + corpid + '&corpsecret=' + corpsecret
    print  gettoken_url
    try:
        token_file = urllib2.urlopen(gettoken_url)
    except urllib2.HTTPError as e:
        print e.code
        print e.read().decode("utf8")
        sys.exit()
    token_data = token_file.read().decode('utf-8')
    token_json = json.loads(token_data)
    token_json.keys()
    token = token_json['access_token']
    return token



def senddata(access_token,subject,content):

    send_url = 'https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=' + access_token
    # toparty: department ID; agentid: application AgentId
    send_values = {
        "toparty":"4",
        "msgtype":"text",
        "agentid":"1000002",
        "text":{
            "content":subject + '\n' + content
           },
        "safe":"0"
        }

    send_data = simplejson.dumps(send_values, ensure_ascii=False).encode('utf-8')
    send_request = urllib2.Request(send_url, send_data)
    response = json.loads(urllib2.urlopen(send_request).read())
    print str(response)


if __name__ == '__main__':
    # message subject
    subject = str(sys.argv[1])
    # message content
    content = str(sys.argv[2])

    # corp ID
    corpid =  'wwe86504f797d306ce'
    # application secret
    corpsecret = 'FrAuzVnZvkmJdQcRiESKtBHsX8Xmq5LHEc-cn-wl3UY'
    accesstoken = gettoken(corpid,corpsecret)
    senddata(accesstoken,subject,content)

Once the test message arrives, update the Alertmanager configuration (if it fails, fix the WeCom settings according to the error returned by the API).

apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.25.0
  name: alertmanager-main
  namespace: monitoring
stringData:
  alertmanager.yaml: |-
    # global: global settings, mainly the notification backends such as email, webhook, etc.
    global:
      # how long to wait before marking an alert resolved; default 5m
      resolve_timeout: 5m
      # email settings
      smtp_smarthost: 'smtp.qq.com:465'
      smtp_from: '2750955630@qq.com'
      smtp_auth_username: '2750955630@qq.com'
      smtp_auth_password: 'puwluaqcmkrdddge'
      smtp_require_tls: false

      # Enterprise WeChat (WeCom) settings
      wechat_api_url: 'https://qyapi.weixin.qq.com/cgi-bin/'
      # corp ID
      wechat_api_corp_id: 'wwe86504f797d306ce'
      # application secret
      wechat_api_secret: 'FrAuzVnZvkmJdQcRiESKtBHsX8Xmq5LHEc-cn-wl3UY'

    # templates (no custom template is referenced yet; section 11 adjusts this path)
    templates:
      - '/usr/local/alertmanager/*.tmp'

    # routing
    route:
      # labels used to group alerts
      group_by: [ 'namespace', 'job', 'alertname' ]
      # how long to wait after a new alert group is created before sending the initial notification, to avoid alert storms
      group_wait: 30s
      # how long to wait before notifying about new alerts added to an already-notified group
      group_interval: 2m
      # how long to wait before re-sending a notification if the problem has not been resolved
      repeat_interval: 10m
      # default receiver for alerts not matched by a child route
      receiver: 'Default'

      # child routes
      routes:
        - receiver: 'wechat'
          match:
            alertname: "Watchdog"

    # receivers
    receivers:
      - name: 'Default'
        email_configs:
          # email address that receives the alerts
          - to: 'xumeng03@bilibili.com'
            # also notify when the alert is resolved
            send_resolved: true
      - name: 'email'
        email_configs:
          # email address that receives the alerts
          - to: '2750955630@qq.com'
            # also notify when the alert is resolved
            send_resolved: true
      - name: 'wechat'
        wechat_configs:
          # department ID that receives the alerts (Department ID 4 from the application settings above)
          - to_party: 4
            # AgentId of the alerting application
            agent_id: 1000002
            # also notify when the alert is resolved
            send_resolved: true

    # inhibition rules
    inhibit_rules:
      - source_matchers:
          - severity="critical"
        target_matchers:
          - severity=~"warning|info"
        equal:
          - namespace
          - alertname
      - source_matchers:
          - severity="warning"
        target_matchers:
          - severity="info"
        equal:
          - namespace
          - alertname
      - source_matchers:
          - alertname="InfoInhibitor"
        target_matchers:
          - severity="info"
        equal:
          - namespace
type: Opaque
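
To exercise the wechat route without waiting for a real incident, a synthetic alert matching the "alertname: Watchdog" sub-route can be pushed straight to Alertmanager; this is only a sketch using the v2 API over a port-forward:

kubectl -n monitoring port-forward svc/alertmanager-main 9093:9093 &
curl -s -XPOST http://localhost:9093/api/v2/alerts \
  -H 'Content-Type: application/json' \
  -d '[{"labels":{"alertname":"Watchdog","severity":"none","namespace":"monitoring"}}]'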

11. Custom Alert Templates

Add a custom notification template.

# Add the custom template
vi kube-prometheus/manifests/alertmanager-secret.yaml
# Apply the new configuration
kubectl replace -f kube-prometheus/manifests/alertmanager-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.25.0
  name: alertmanager-main
  namespace: monitoring
stringData:
  wechat.tmpl: |-
    {{ define "wechat.default.message" }}
    {{- if gt (len .Alerts.Firing) 0 -}}
    {{- range $index, $alert := .Alerts.Firing -}}
    {{- if eq $index 0 -}}
    # Alert firing!!!
    {{- end }}
    Status: {{ .Status }}
    Severity: {{ .Labels.severity }}
    Alert name: {{ $alert.Labels.alertname }}
    Instance: {{ $alert.Labels.instance }}
    Summary: {{ $alert.Annotations.summary }}
    Details: {{ $alert.Annotations.message }}{{ $alert.Annotations.description }}
    Value: {{ .Annotations.value }}
    Started at: {{ ($alert.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}
    {{- end }}
    {{- end }}
    {{- if gt (len .Alerts.Resolved) 0 -}}
    {{- range $index, $alert := .Alerts.Resolved -}}
    {{- if eq $index 0 -}}
    # Alert resolved!!!
    {{- end }}
    Alert name: {{ .Labels.alertname }}
    Status: {{ .Status }}
    Summary: {{ $alert.Annotations.summary }}
    Details: {{ $alert.Annotations.message }}{{ $alert.Annotations.description }}
    Started at: {{ ($alert.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}
    Resolved at: {{ ($alert.EndsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}
    {{- if gt (len $alert.Labels.instance) 0 }}
    Instance: {{ $alert.Labels.instance }}
    {{- end }}
    {{- end }}
    {{- end }}
    {{- end }}
  alertmanager.yaml: |-
    # global: global settings, mainly the notification backends such as email, webhook, etc.
    global:
      # how long to wait before marking an alert resolved; default 5m
      resolve_timeout: 5m
      # email settings
      smtp_smarthost: 'smtp.qq.com:465'
      smtp_from: '2750955630@qq.com'
      smtp_auth_username: '2750955630@qq.com'
      smtp_auth_password: 'puwluaqcmkrdddge'
      smtp_require_tls: false

      # Enterprise WeChat (WeCom) settings
      wechat_api_url: 'https://qyapi.weixin.qq.com/cgi-bin/'
      wechat_api_corp_id: 'wwe86504f797d306ce'
      wechat_api_secret: 'FrAuzVnZvkmJdQcRiESKtBHsX8Xmq5LHEc-cn-wl3UY'

    # templates; the keys of this Secret are mounted under /etc/alertmanager/config/
    templates:
      - '/etc/alertmanager/config/*.tmpl'

    # routing
    route:
      # labels used to group alerts
      group_by: [ 'namespace', 'job', 'alertname' ]
      # how long to wait after a new alert group is created before sending the initial notification, to avoid alert storms
      group_wait: 30s
      # how long to wait before notifying about new alerts added to an already-notified group
      group_interval: 2m
      # how long to wait before re-sending a notification if the problem has not been resolved
      repeat_interval: 5m
      # default receiver for alerts not matched by a child route
      receiver: 'Default'

      # child routes
      routes:
        - receiver: 'wechat'
          match:
            alertname: "Watchdog"

    # receivers
    receivers:
      - name: 'Default'
        email_configs:
          # email address that receives the alerts
          - to: 'xumeng03@bilibili.com'
            # also notify when the alert is resolved
            send_resolved: true
      - name: 'email'
        email_configs:
          # email address that receives the alerts
          - to: '2750955630@qq.com'
            # also notify when the alert is resolved
            send_resolved: true
      - name: 'wechat'
        wechat_configs:
          # department ID that receives the alerts
          - to_party: 4
            # AgentId of the alerting application
            agent_id: 1000002
            # also notify when the alert is resolved
            send_resolved: true
            # use the custom template defined in wechat.tmpl
            message: '{{ template "wechat.default.message" . }}'

    # inhibition rules
    inhibit_rules:
      - source_matchers:
          - severity="critical"
        target_matchers:
          - severity=~"warning|info"
        equal:
          - namespace
          - alertname
      - source_matchers:
          - severity="warning"
        target_matchers:
          - severity="info"
        equal:
          - namespace
          - alertname
      - source_matchers:
          - alertname="InfoInhibitor"
        target_matchers:
          - severity="info"
        equal:
          - namespace
type: Opaque
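
Before replacing the Secret it can be useful to lint the embedded configuration locally; a sketch that assumes yq (v4) and amtool are installed on the workstation:

# extract the alertmanager.yaml key from the manifest and validate it
yq '.stringData["alertmanager.yaml"]' kube-prometheus/manifests/alertmanager-secret.yaml > /tmp/alertmanager.yaml
amtool check-config /tmp/alertmanager.yaml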
