Deploying vmalert and PrometheusAlert on a Kubernetes Cluster for DingTalk Alerting


Prerequisites

Install the following packages: git, kubectl, helm, and helm-docs.

1. Install Helm

wget https://xxx-xx.oss-cn-xxx.aliyuncs.com/helm-v3.8.1-linux-amd64.tar.gz
tar xvzf helm-v3.8.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin
rm -rf linux-amd64
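
You can confirm the binary is on the PATH:

helm version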

2. Install victoria-metrics-alert

(1) Add the Helm chart repository:

helm repo add vm https://victoriametrics.github.io/helm-charts/

helm repo update

(2) List the chart versions of vm/victoria-metrics-alert available for installation:

helm search repo vm/victoria-metrics-alert -l

(3) Export the chart's default values to values.yaml:

helm show values vm/victoria-metrics-alert > values.yaml

(4) Adjust the values in values.yaml for your environment. A complete reference configuration follows:

# Default values for victoria-metrics-alert.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
  # mount API token to pod directly
  automountToken: true

imagePullSecrets: []

rbac:
  create: true
  pspEnabled: true
  namespaced: false
  extraLabels: {}
  annotations: {}

server:
  name: server
  enabled: true
  image:
    repository: victoriametrics/vmalert
    tag: "" # rewrites Chart.AppVersion
    pullPolicy: IfNotPresent
  nameOverride: ""
  fullnameOverride: ""

  ## See `kubectl explain poddisruptionbudget.spec` for more
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
  podDisruptionBudget:
    enabled: false
    # minAvailable: 1
    # maxUnavailable: 1
    labels: {}

  # -- Additional environment variables (ex.: secret tokens, flags) https://github.com/VictoriaMetrics/VictoriaMetrics#environment-variables
  env:
    []
    # - name: VM_remoteWrite_basicAuth_password
    #   valueFrom:
    #     secretKeyRef:
    #       name: auth_secret
    #       key: password

  replicaCount: 1

  # deployment strategy, set to standard k8s default
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%

  # specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing/terminating
  # 0 is the standard k8s default
  minReadySeconds: 0

  # vmalert reads metrics from source, next section represents its configuration. It can be any service which supports
  # MetricsQL or PromQL.
  datasource:
    url: "http://192.168.47.9:8481/select/0/prometheus/"
    basicAuth:
      username: ""
      password: ""

  remote:
    write:
      url: ""
    read:
      url: ""

  notifier:
    alertmanager:
      url: "http://192.168.112.68:9093"

  extraArgs:
    envflag.enable: "true"
    envflag.prefix: VM_
    loggerFormat: json

  # Additional hostPath mounts
  extraHostPathMounts:
    []
    # - name: certs-dir
    #   mountPath: /etc/kubernetes/certs
    #   subPath: ""
    #   hostPath: /etc/kubernetes/certs
  #   readOnly: true

  # Extra Volumes for the pod
  extraVolumes:
    []
     #- name: example
     #  configMap:
     #    name: example

  # Extra Volume Mounts for the container
  extraVolumeMounts:
    []
    # - name: example
    #   mountPath: /example

  extraContainers:
    []
    #- name: config-reloader
    #  image: reloader-image

  service:
    annotations: {}
    labels: {}
    clusterIP: ""
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    servicePort: 8880
    type: ClusterIP
    # Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    # externalTrafficPolicy: "local"
    # healthCheckNodePort: 0

  ingress:
    enabled: false
    annotations: {}
    #   kubernetes.io/ingress.class: nginx
    #   kubernetes.io/tls-acme: 'true'

    extraLabels: {}
    hosts: []
    #   - name: vmselect.local
    #     path: /select
    #     port: http

    tls: []
    #   - secretName: vmselect-ingress-tls
    #     hosts:
    #       - vmselect.local

    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # -- pathType is only for k8s >= 1.18
    pathType: Prefix

  podSecurityContext: {}
  # fsGroup: 2000

  securityContext:
    {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
    # runAsNonRoot: true
  # runAsUser: 1000

  resources:
    {}
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
    #   cpu: 100m
  #   memory: 128Mi

  # Annotations to be added to the deployment
  annotations: {}
  # labels to be added to the deployment
  labels: {}

  # Annotations to be added to pod
  podAnnotations: {}

  podLabels: {}

  nodeSelector: {}

  priorityClassName: ""

  tolerations: []

  affinity: {}

  # vmalert alert rules configuration:
  # use existing configmap if specified
  # otherwise .config values will be used
  configMap: ""
  config:
    alerts:
      groups:
        - name: DiskMountError
          rules:
            - alert: DiskMountError
              expr: mount_error == 1
              for: 1m
              labels:
                level: 1
                severity: warning
              annotations:
                description: "Disk mount error on node {{$labels.instance}} of chain {{$labels.job}}!"

serviceMonitor:
  enabled: false
  extraLabels: {}
  annotations: {}
#    interval: 15s
#    scrapeTimeout: 5s
  # -- Commented. HTTP scheme to use for scraping.
#    scheme: https
  # -- Commented. TLS configuration to use when scraping the endpoint
#    tlsConfig:
#      insecureSkipVerify: true

alertmanager:
  enabled: true
  replicaCount: 1
  podMetadata:
    labels: {}
    annotations: {}
  image: prom/alertmanager
  tag: v0.20.0
  retention: 120h
  nodeSelector: {}
  priorityClassName: ""
  resources: {}
  tolerations: []
  imagePullSecrets: []
  podSecurityContext: {}
  extraArgs: {}
  # key: value

  # external URL, that alertmanager will expose to receivers
  baseURL: ""
  # use existing configmap if specified
  # otherwise .config values will be used
  configMap: ""
  config:
    global:
      resolve_timeout: 5m
    route:
      # default receiver
      receiver: ops_notify
      # tag to group by
      group_by: [alertname]
      # How long to initially wait to send a notification for a group of alerts
      group_wait: 30s
      # How long to wait before sending a notification about new alerts that are added to a group
      group_interval: 60s
      # How long to wait before sending a notification again if it has already been sent successfully for an alert
      repeat_interval: 1h
    receivers:
      - name: ops_notify
        webhook_configs:
        - url: http://192.168.157.59:8080/prometheusalert?type=dd&tpl=prometheus-dd&split=false
          send_resolved: true
    inhibit_rules:
      - source_match:
          severity: 'warning'
        target_match:
          severity: 'warning'
        equal: ['alertname', 'job']

  templates: {}
  #  alertmanager.tmpl: |-
  service:
    annotations: {}
    type: ClusterIP
    port: 9093
    # if you want to force a specific nodePort. Must be used with service.type=NodePort
    # nodePort:
  ingress:
    enabled: false
    annotations: {}
    #   kubernetes.io/ingress.class: nginx
    #   kubernetes.io/tls-acme: 'true'
    extraLabels: {}
    hosts: []
    #   - name: alertmanager.local
    #     path: /
    #     port: web

    tls: []
    #   - secretName: alertmanager-ingress-tls
    #     hosts:
    #       - alertmanager.local

    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    # -- pathType is only for k8s >= 1.18
    pathType: Prefix
  persistentVolume:
    # -- Create/use Persistent Volume Claim for alertmanager component. Empty dir if false
    enabled: false
    # -- Array of access modes. Must match those of existing PV or dynamic provisioner. Ref: [http://kubernetes.io/docs/user-guide/persistent-volumes/](http://kubernetes.io/docs/user-guide/persistent-volumes/)
    accessModes:
      - ReadWriteOnce
    # -- Persistent volume annotations
    annotations: {}
    # -- StorageClass to use for persistent volume. Requires alertmanager.persistentVolume.enabled: true. If defined, PVC created automatically
    storageClass: ""
    # -- Existing Claim name. If defined, PVC must be created manually before volume will be bound
    existingClaim: ""
    # -- Mount path. Alertmanager data Persistent Volume mount root path.
    mountPath: /data
    # -- Mount subpath
    subPath: ""
    # -- Size of the volume. Better to set the same as resource limit memory property.
    size: 50Mi
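
Before applying the file, it can help to confirm that the rule's expression actually returns data from the configured datasource. A minimal check, assuming the vmselect endpoint above is reachable from your shell (mount_error is the custom metric the rule watches):

curl 'http://192.168.47.9:8481/select/0/prometheus/api/v1/query?query=mount_error'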

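The install commands below target the victoria-metrics namespace; create it first if it does not already exist:

kubectl create namespace victoria-metrics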

(5) Test the installation with a dry run:

helm install vmalert vm/victoria-metrics-alert -f values.yaml -n victoria-metrics --debug --dry-run

(6) Install the chart:

helm install vmalert vm/victoria-metrics-alert -f values.yaml -n victoria-metrics

(7) Get the list of pods:

kubectl get pods -A | grep 'alert'

(8) Find the release:

helm list -f vmalert -n victoria-metrics

(9) View the release history of vmalert:

helm history vmalert -n victoria-metrics

(10) Update the configuration:

cd /root/vmalert
# edit values.yaml, then apply the upgrade
helm upgrade vmalert vm/victoria-metrics-alert -f values.yaml -n victoria-metrics

(11) Check the Services:

kubectl get svc -n victoria-metrics
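
To inspect the vmalert or Alertmanager UI without an Ingress, you can port-forward one of the Services listed above. A sketch, assuming the chart's default naming (substitute the actual Service name that kubectl get svc prints):

kubectl port-forward -n victoria-metrics svc/vmalert-victoria-metrics-alert-server 8880:8880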


3. Install PrometheusAlert

(1) Deploy with Helm:

git clone https://github.com/feiyu563/PrometheusAlert.git
cd PrometheusAlert/example/helm/prometheusalert
# To change the configuration, edit app.conf under config/ first
helm install -n victoria-metrics prometheus-alert .

(2) values.yaml reference:

cat /root/PrometheusAlert/example/helm/prometheusalert/values.yaml


# Default values for prometheusalert.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

global:
  imagePullSecrets: []
    # - name: "registry-secret"

replicaCount: 1

image:
  # Using custom templates requires rebuilding the image, or use the chart author's prebuilt image: lusson/prometheusalert:v1.0
  repository: feiyu563/prometheus-alert:v4.8
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: prometheusalert.xxxxx.com
      paths: ["/"]

  tls: []

resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 100m
    memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
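
As with vmalert, you can render the chart with these values first to catch mistakes before installing (run from the chart directory):

helm install -n victoria-metrics prometheus-alert . --dry-run --debug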

(3) app.conf reference:

cat /root/PrometheusAlert/example/helm/prometheusalert/config/app.conf
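
For DingTalk alerting, the keys that matter in app.conf are the ones enabling the DingTalk channel and the default robot webhook. A minimal sketch (the access_token value is a placeholder for your robot's token; other keys can keep their defaults):

#Enable the DingTalk alert channel (0 = off, 1 = on)
open-dingding=1
#Default DingTalk robot webhook
ddurl=https://oapi.dingtalk.com/robot/send?access_token=xxx

The Alertmanager webhook URL configured earlier (?type=dd&tpl=prometheus-dd&split=false) then routes each alert to DingTalk through the custom prometheus-dd template.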


(4) ingress.yaml reference:

cat /root/PrometheusAlert/example/helm/prometheusalert/templates/ingress.yaml

{{- if .Values.ingress.enabled -}}
{{- $fullName := include "prometheusalert.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
{{ include "prometheusalert.labels" . | indent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . | quote }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
        {{- range .paths }}
          - path: {{ . }}
            pathType: Prefix
            backend:
              service:
                name: {{ $fullName }}
                port:
                  number: {{ $svcPort }}
        {{- end }}
  {{- end }}
{{- end }}
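
To see what this template renders to with the current values, ask Helm to show just that file (run from the chart directory):

helm template prometheus-alert . -s templates/ingress.yaml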


(5) Update the configuration:

cd /root/PrometheusAlert/example/helm/prometheusalert
helm upgrade -n victoria-metrics prometheus-alert .


(6) Restart the pods

# Delete the release
helm delete prometheus-alert -n victoria-metrics

# Check pods and Services
kubectl get pods -n victoria-metrics
kubectl get svc -n victoria-metrics

# Reinstall
helm install -n victoria-metrics prometheus-alert .

# Check pods and Services again
kubectl get pods -n victoria-metrics
kubectl get svc -n victoria-metrics
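
Deleting and reinstalling works, but a lighter-weight way to pick up a changed ConfigMap is to restart the Deployment in place. A sketch, assuming the Deployment name matches the release (check with kubectl get deploy -n victoria-metrics):

kubectl rollout restart deployment/prometheus-alert -n victoria-metrics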

4. Verify alerting

(1) Send a test alert


Template content (the custom prometheus-dd template referenced by the webhook URL configured earlier):

{{ $var := .externalURL}}{{ range $k,$v:=.alerts }}
{{if eq $v.status "resolved"}}
## [Inspection recovery]({{$v.generatorURL}})
#### [{{$v.labels.alertname}}]({{$var}})
###### Alert level: {{$v.labels.level}}
###### Start time: {{$v.startsAt}}
###### Affected host: {{$v.labels.instance}}
##### {{$v.annotations.description}}
{{else}}
## [Inspection alert]({{$v.generatorURL}})
#### [{{$v.labels.alertname}}]({{$var}})
###### Alert level: {{$v.labels.level}}
###### Start time: {{$v.startsAt}}
###### Affected host: {{$v.labels.instance}}
##### {{$v.annotations.description}}
{{end}}
{{ end }}
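
You can exercise the whole chain without waiting for a real alert by POSTing an Alertmanager-style payload directly to PrometheusAlert. A hedged example: the Service name is an assumption (substitute what kubectl get svc prints), and the JSON mimics the Alertmanager webhook format that the template above ranges over:

kubectl port-forward -n victoria-metrics svc/prometheus-alert-prometheusalert 8080:8080 &
curl -X POST 'http://127.0.0.1:8080/prometheusalert?type=dd&tpl=prometheus-dd&split=false' \
  -H 'Content-Type: application/json' \
  -d '{
        "status": "firing",
        "externalURL": "http://vmalert.example.com",
        "alerts": [{
          "status": "firing",
          "labels": {"alertname": "DiskMountError", "level": "1", "instance": "10.0.0.1", "job": "demo"},
          "annotations": {"description": "Disk mount error on node 10.0.0.1 of chain demo!"},
          "startsAt": "2024-01-01T00:00:00Z",
          "generatorURL": "http://vmalert.example.com/vmalert"
        }]
      }'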

(2) Check the logs
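
Follow the PrometheusAlert logs to confirm the webhook was received and the DingTalk request succeeded (substitute the actual Deployment name):

kubectl logs -f deployment/prometheus-alert -n victoria-metrics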


(3) Check the DingTalk alert

The DingTalk group should now receive the rendered Markdown message from the robot.


Reference: https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-alert

Reference: https://github.com/feiyu563/PrometheusAlert/tree/master/example/helm/prometheusalert
