k8s v1.27.4: deploying metrics-server v0.6.4 and kube-prometheus, with image download workarounds

This post covers deploying metrics-server v0.6.4 and kube-prometheus on k8s v1.27.4, including how to work around image download problems along the way.

metrics-server had only one problem: the original httpGet liveness and readiness probes never passed, so I switched them to tcpSocket and the pod came up healthy.
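For reference, the upstream probes check /livez and /readyz over HTTPS and look roughly like this (paraphrased from the stock components.yaml); the tcpSocket replacement used below only verifies the port accepts connections, which is a weaker but more forgiving check:

livenessProbe:
  httpGet:
    path: /livez
    port: https
    scheme: HTTPS
readinessProbe:
  httpGet:
    path: /readyz
    port: https
    scheme: HTTPS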

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

The modified YAML, with the image switched to the Aliyun mirror:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
#        - --cert-dir=/tmp
#        - --secure-port=4443
#        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
#        - --kubelet-use-node-status-port
#        - --metric-resolution=15s
        - --cert-dir=/tmp
        - --secure-port=4443
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          tcpSocket:
            port: 4443
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          tcpSocket:
            port: 4443
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
Apply it:

kubectl apply -f components.yaml
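Once the pod is Running, a quick sanity check (output varies by cluster):

kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl top nodes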

Deploying kube-prometheus

The branch compatible with 1.27 is main.

Clone only the main branch:

git clone  --single-branch --branch main https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus

Running kubectl apply -f manifests/setup/ directly fails with a "metadata.annotations: Too long" error:
Error from server (Invalid): error when creating "manifests/setup/0prometheusCustomResourceDefinition.yaml": CustomResourceDefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

Use server-side apply instead:

kubectl apply --server-side -f manifests/setup

Once the CRDs are created, apply the remaining manifests:

kubectl apply -f manifests
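The kube-prometheus README also suggests waiting for the CRDs to be established between the two applies; if the second apply fails with unknown-resource errors, this wait helps:

kubectl wait \
  --for condition=Established \
  --all CustomResourceDefinition \
  --namespace=monitoring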

For the image pulls, use the approach from https://github.com/DaoCloud/public-image-mirror: prefix the image reference with m.daocloud.io.
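For example, a rough sed pass over the manifests (a sketch, not the project's official tooling; review the resulting diff before applying, and registry.k8s.io images can be prefixed the same way):

grep -rl 'image: quay.io/' manifests/ | xargs sed -i 's#image: quay.io/#image: m.daocloud.io/quay.io/#g'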

The prometheus-k8s-0 pod reports insufficient permissions:

ts=2023-08-21T03:14:34.582Z caller=klog.go:116 level=error component=k8s_client_runtime func=ErrorDepth msg="pkg/mod/k8s.io/client-go@v0.27.3/tools/cache/reflector.go:231: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:monitoring:prometheus-k8s\" cannot list resource \"pods\" in API group \"\" at the cluster scope"

Fix: edit prometheus-clusterRole.yaml to grant the missing cluster-scope permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.26.0
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get
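Re-apply the updated role (from the kube-prometheus directory):

kubectl apply -f manifests/prometheus-clusterRole.yaml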

Cross-node pings to pod IPs in the monitoring namespace fail and the Prometheus targets show as down. Fix:

Delete the networkPolicy manifests under kube-prometheus/manifests:
alertmanager-networkPolicy.yaml
blackboxExporter-networkPolicy.yaml
grafana-networkPolicy.yaml
kubeStateMetrics-networkPolicy.yaml
nodeExporter-networkPolicy.yaml
prometheusAdapter-networkPolicy.yaml
prometheus-networkPolicy.yaml
prometheusOperator-networkPolicy.yaml
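If the policies were already applied, also remove them from the cluster (this assumes nothing else in the namespace depends on NetworkPolicies):

kubectl -n monitoring delete networkpolicy --all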

Adding a scrape target with a ServiceMonitor:

Using ingress-nginx as an example, first modify the Service in ingress-nginx.yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: '10254' # added
    prometheus.io/scrape: 'true' # added
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  - appProtocol: http  # added
    name: prometheus  # added
    port: 10254  # added
    protocol: TCP  # added
    targetPort: 10254  # added
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
  

# ingress-nginx-monitor.yaml

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ingress-nginx  # name of the ServiceMonitor
  namespace: monitoring # namespace the ServiceMonitor lives in
spec:
  endpoints:  # scrape endpoints; an array, each entry carrying interval, path and port
    - interval: 15s # how often Prometheus scrapes
      path: /metrics # path to scrape
      port: prometheus # the *name* of the Service port; matched on the Services picked by spec.selector
  namespaceSelector: # where to discover the Services
    any: true     # the only accepted value is true: watch all namespaces for Services matching the selector
  selector:
    matchLabels:  # labels used to select the Service
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
      app.kubernetes.io/version: 1.8.1
      
kubectl apply -f ingress-nginx-monitor.yaml

Confirm the ingress-nginx ServiceMonitor was added:

kubectl -n monitoring get servicemonitors.monitoring.coreos.com
NAME                      AGE
alertmanager-main         70m
blackbox-exporter         70m
coredns                   70m
grafana                   70m
ingress-nginx             24m
kube-apiserver            70m
kube-controller-manager   70m
kube-scheduler            70m
kube-state-metrics        70m
kubelet                   70m
node-exporter             70m
prometheus-adapter        70m
prometheus-k8s            70m
prometheus-operator       70m
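To confirm Prometheus actually picked the target up, port-forward the UI and check Status -> Targets, or grep the targets API (one illustrative way, not the only one):

kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090 &
curl -s 'http://localhost:9090/api/v1/targets?state=active' | grep -c ingress-nginx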


Monitoring an external etcd:

Add a metrics listener to the etcd config file: ETCD_LISTEN_METRICS_URLS="http://0.0.0.0:2381"

Then define the scrape job in etcd-job.yaml:

- job_name: "etcd-server"
  scrape_interval: 30s
  scrape_timeout: 10s
  static_configs:
    - targets: ["172.16.0.157:2381"]
      labels:
        instance: etcdserver
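It's worth confirming etcd is actually serving metrics on that port first (IP from the example above):

curl -s http://172.16.0.157:2381/metrics | head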

Create the Secret from it:

kubectl create secret generic etcd-secret --from-file=etcd-job.yaml -n monitoring
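A quick check that the Secret round-trips correctly:

kubectl -n monitoring get secret etcd-secret -o jsonpath='{.data.etcd-job\.yaml}' | base64 -d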

Add the following to the end of kube-prometheus/manifests/prometheus-prometheus.yaml:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.46.0
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - apiVersion: v2
      name: alertmanager-main
      namespace: monitoring
      port: web
  enableFeatures: []
  externalLabels: {}
  image: quay.io/prometheus/prometheus:v2.46.0
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      app.kubernetes.io/component: prometheus
      app.kubernetes.io/instance: k8s
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 2.46.0
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  probeNamespaceSelector: {}
  probeSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleNamespaceSelector: {}
  ruleSelector: {}
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: 2.46.0
  additionalScrapeConfigs:  # added
    name: etcd-secret  # added
    key: etcd-job.yaml  # added
Apply it:

kubectl apply -f kube-prometheus/manifests/prometheus-prometheus.yaml
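The operator restarts the Prometheus pods on its own; once they are back, the etcd-server job should show up as a target (same port-forward as before):

curl -s 'http://localhost:9090/api/v1/targets?state=active' | grep -c etcd-server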

In Grafana, import dashboard 3070.

HTTP liveness checks via a Probe resource:

kind: Probe
apiVersion: monitoring.coreos.com/v1
metadata:
  name: example-com-website
  namespace: monitoring
spec:
  interval: 60s
  module: http_2xx
  prober:
    url: blackbox-exporter.monitoring.svc.cluster.local:19115
  targets:
    staticConfig:
      static:
      - https://www.baidu.com
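Save it as, say, probe.yaml (the file name is arbitrary) and apply; the prober URL points at the blackbox exporter's in-cluster Service, and http_2xx is one of the modules shipped in kube-prometheus's stock blackbox configuration:

kubectl apply -f probe.yaml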

Adding kube-proxy monitoring (kube-proxy serves metrics over plain HTTP by default):

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: kube-proxy
    app.kubernetes.io/part-of: kube-prometheus
  name: kube-proxy
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: http-metrics
    scheme: http
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: app.kubernetes.io/name
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-proxy
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    app.kubernetes.io/name: kube-proxy
subsets:
- addresses:
  - ip: 172.16.0.157
    targetRef:
      kind: Node
      name: master1
  - ip: 172.16.0.124
    targetRef:
      kind: Node
      name: node1
  - ip: 172.16.0.46
    targetRef:
      kind: Node
      name: node2
  ports:
  - name: http-metrics
    port: 10249
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    app.kubernetes.io/name: kube-proxy
spec:
  type: ClusterIP 
  clusterIP: None
  ports:
  - name: http-metrics
    port: 10249
    targetPort: 10249
    protocol: TCP
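One caveat from kubeadm defaults (an assumption, not something in the manifests above): kube-proxy binds its metrics endpoint to 127.0.0.1:10249, which is unreachable from other pods. If the targets stay down, set metricsBindAddress in the kube-proxy ConfigMap and restart the DaemonSet:

kubectl -n kube-system edit configmap kube-proxy
# under config.conf, set: metricsBindAddress: 0.0.0.0:10249
kubectl -n kube-system rollout restart daemonset kube-proxy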

