Deploying ELK on a Kubernetes Cluster


1 Prerequisites

Deploy a Kubernetes cluster using kubeadm or any other method.

Create a namespace in the cluster named halashow.
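This can be done with kubectl:

kubectl create namespace halashow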

2 ELK Deployment Architecture

3 Deploy Elasticsearch

3.1 Prepare the resource manifests

The Deployment contains the Elasticsearch application container plus an init container; the init container's job is to set vm.max_map_count=262144.

The Service exposes port 9200, so other services can reach Elasticsearch via the service name plus that port.
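Concretely, once the manifests below are applied, the address follows standard Kubernetes service DNS:

# from a pod in the halashow namespace
curl http://elasticsearch:9200
# from a pod in any other namespace
curl http://elasticsearch.halashow.svc.cluster.local:9200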


apiVersion: v1
kind: Service
metadata:
  namespace: halashow
  name: elasticsearch
  labels:
    app: elasticsearch-logging
spec:
  type: ClusterIP
  ports:
  - port: 9200
    name: elasticsearch  
  selector: 
    app: elasticsearch-logging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch-logging
    version: v1
  name: elasticsearch
  namespace: halashow
spec:
  minReadySeconds: 10
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: elasticsearch-logging
      version: v1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: elasticsearch-logging
        version: v1
    spec:
      affinity:
        nodeAffinity: {}
      containers:
      - env:
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
        - name: MINIMUM_MASTER_NODES
          value: "1"
        image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0-amd64
        imagePullPolicy: IfNotPresent
        name: elasticsearch-logging
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: "1"
            memory: 1Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: es-persistent-storage
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: user-1-registrysecret
      initContainers:
      - command:
        - /sbin/sysctl
        - -w
        - vm.max_map_count=262144
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: elasticsearch-logging-init
        resources: {}
        securityContext:
          privileged: true
          procMount: Default
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /data/elk/elasticsearch-logging
          type: DirectoryOrCreate
        name: es-persistent-storage
      nodeSelector:
        alibabacloud.com/is-edge-worker: 'false'
        beta.kubernetes.io/arch: amd64
        beta.kubernetes.io/os: linux
      tolerations:
        - effect: NoSchedule
          key: node-role.alibabacloud.com/addon
          operator: Exists
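The manifest can be applied and checked as follows (the filename is illustrative):

kubectl apply -f elasticsearch.yaml
kubectl -n halashow get pods -l app=elasticsearch-logging
kubectl -n halashow get svc elasticsearch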

For a persistent Elasticsearch deployment, see:

https://www.51cto.com/article/673023.html

apiVersion: v1
kind: Service
metadata:
  name: es
  namespace: default
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: es
---
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: es
  namespace: default
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: es
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: es
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: es
  namespace: default
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: es
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet # use a StatefulSet to create the pods
metadata:
  name: es # pods created by a StatefulSet get stable, ordered names (es-0, es-1, ...)
  namespace: default
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-elasticsearch
spec:
  serviceName: es # ties the StatefulSet to a Service; with a headless Service each pod would get a stable DNS name such as es-0.es.default.svc.cluster.local
  replicas: 1 # replica count; single node
  selector:
    matchLabels:
      k8s-app: es # must match the labels in the pod template
  template:
    metadata:
      labels:
        k8s-app: es
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: es
      containers:
      - image: docker.io/library/elasticsearch:7.10.1
        name: es
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 500Mi
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es
          mountPath: /usr/share/elasticsearch/data/ # data directory mount point
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "discovery.type" # single-node discovery
          value: "single-node"
        - name: ES_JAVA_OPTS # JVM heap; keep -Xmx at or below roughly half the container memory limit (2Gi here), raising both together if needed
          value: "-Xms1024m -Xmx1024m"
      volumes:
      - name: es
        hostPath:
          path: /data/es/
      nodeSelector: # optional: pin the pod to the node(s) that hold the data directory
        es: data
      tolerations:
      - effect: NoSchedule
        operator: Exists
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers: # run to completion before the main container starts
      - name: es-init
        image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"] # raise the mmap count limit; too low causes Elasticsearch bootstrap errors
        securityContext: # applies only to this container, not to the volumes
          privileged: true # privileged mode is needed to change kernel parameters
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"] # raise the max open file descriptors (note: this only affects the init container's own shell)
        securityContext:
          privileged: true
      - name: elasticsearch-volume-init # initialize the data directory on the host with 777 permissions
        image: alpine:3.6
        command:
          - chmod
          - -R
          - "777"
          - /usr/share/elasticsearch/data/
        volumeMounts:
        - name: es
          mountPath: /usr/share/elasticsearch/data/


---

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    k8s-app: kibana
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
    srv: srv-kibana
spec:
  type: NodePort # exposed outside the cluster via NodePort 30561 (set below)
  ports:
  - port: 5601
    nodePort: 30561
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: default
  labels:
    k8s-app: kibana
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana
        image: docker.io/kubeimages/kibana:7.9.3 # this image supports both arm64 and amd64
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_HOSTS
            value: http://es:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: networking.k8s.io/v1 # extensions/v1beta1 Ingress was removed in Kubernetes 1.22
kind: Ingress
metadata:
  name: kibana
  namespace: default # must be in the same namespace as the kibana Service above
spec:
  rules:
  - host: kibana.ctnrs.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601
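The combined manifest can be applied and verified like this (the filename is illustrative):

kubectl apply -f elk-es-kibana.yaml
kubectl get pods,svc -l k8s-app=es
kubectl get svc kibana   # NodePort 30561

Kibana should then be reachable at http://<node-ip>:30561, or through the Ingress once kibana.ctnrs.com resolves to the ingress controller.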

4 Deploy Logstash

Create a ConfigMap that defines the Logstash pipeline, which consists of the following sections:

  input: the sources that feed events into Logstash.

  filter: parsing and filtering rules.

  output: destinations such as Elasticsearch, Redis, or Kafka.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: halashow
data:
  logstash.conf: |-

    input {
        redis {
            host => "10.36.21.220"
            port => 30079
            db => 0
            key => "localhost"
            password => "123456"
            data_type => "list"
            threads => 4
            batch_count => "1"
            #tags => "user.log"
        }
     
    }
     
    filter {
      dissect {
        mapping => { "message" => "[%{Time}] %{LogLevel} %{message}" }
      }
    }
     
    output {
      if "nginx.log" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "nginx.log"
        }
      }

       if "osale-uc-test" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "osale-uc-test"
        }
      }
       if "osale-jindi-client-test" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "osale-jindi-client-test"
        }
      }
      if "osale-admin-weixin" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "osale-admin-weixin"
        }
      }
    }
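As an illustration of the dissect mapping above, an invented log line would be split like this:

# sample input line (illustrative)
[2023-05-06 10:15:00] INFO user login succeeded

# fields produced by the dissect filter
Time     => "2023-05-06 10:15:00"
LogLevel => "INFO"
message  => "user login succeeded"   (dissect overwrites the original message field)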


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: halashow
  labels: 
    name: logstash
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: logstash
  template:
    metadata:
      labels: 
        app: logstash
        name: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.12.0
        ports:
        - containerPort: 5044
          protocol: TCP
        - containerPort: 9600
          protocol: TCP

        volumeMounts:
        - name: logstash-config
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          subPath: logstash.conf


      volumes:
      - name: logstash-config
        configMap:
          name: logstash-config

---
apiVersion: v1
kind: Service
metadata:
  namespace: halashow
  name: logstash
  labels:
    app: logstash
spec:
  type: ClusterIP
  ports:
  - port: 5044
    name: logstash
  selector: 
    app: logstash
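Apply the ConfigMap, Deployment, and Service, then check that the pipeline starts (the filename is illustrative and the exact log wording can vary by version):

kubectl apply -f logstash.yaml
kubectl -n halashow logs deploy/logstash | grep -i pipeline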

5 Deploy Redis 5.0

apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: elk-redis
  labels: 
    app: elk-redis
data: 
  redis.conf: |-
    bind 0.0.0.0
    daemonize no
    pidfile "/var/run/redis.pid"
    port 6379
    timeout 300
    loglevel warning
    logfile "redis.log"
    databases 16
    rdbcompression yes
    dbfilename "redis.rdb"
    dir "/data"
    requirepass "123456"
    masterauth "123456"
    maxclients 10000
    maxmemory 1000mb
    maxmemory-policy allkeys-lru
    appendonly yes
    appendfsync always
---
apiVersion: apps/v1 
kind: StatefulSet 
metadata: 
  name: elk-redis 
  labels: 
    app: elk-redis 
spec:
  replicas: 1 
  selector: 
    matchLabels: 
      app: elk-redis 
  template: 
    metadata: 
      labels: 
        app: elk-redis 
    spec: 
      containers: 
      - name: redis 
        image: redis:5.0.7 
        command: 
          - "sh" 
          - "-c" 
          - "redis-server /usr/local/redis/redis.conf" 
        ports: 
        - containerPort: 6379 
        resources: 
          limits: 
            cpu: 1000m 
            memory: 1024Mi 
          requests: 
            cpu: 1000m 
            memory: 1024Mi 
        livenessProbe: 
          tcpSocket: 
            port: 6379 
          initialDelaySeconds: 300 
          timeoutSeconds: 1 
          periodSeconds: 10 
          successThreshold: 1 
          failureThreshold: 3 
        readinessProbe: 
          tcpSocket: 
            port: 6379 
          initialDelaySeconds: 5 
          timeoutSeconds: 1 
          periodSeconds: 10 
          successThreshold: 1 
          failureThreshold: 3 
        volumeMounts:
        - name: data
          mountPath: /data
        # timezone setting
        - name: timezone
          mountPath: /etc/localtime
        - name: config 
          mountPath:  /usr/local/redis/redis.conf 
          subPath: redis.conf 		  
      volumes:
      - name: config 
        configMap: 
          name: elk-redis
      - name: timezone                             
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: data
        hostPath:
          type: DirectoryOrCreate 
          path: /data/elk/elk-redis
      nodeName: gem-yxyw-t-c02
--- 
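Once the pod is running, authentication and connectivity can be sanity-checked in place (the pod name follows the StatefulSet convention; no namespace was set, so it lands in default):

kubectl exec -it elk-redis-0 -- redis-cli -a 123456 ping   # expect: PONG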

To improve Redis performance, persistence needs to be disabled:

1. Redis enables persistence by default.

2. The default persistence mode is RDB.

To disable RDB:

1. Comment out the original save rules:

   # save 3600 1 300 100 60 10000

2. Set the save option to empty:

   save ""

3. Delete the existing dump.rdb snapshot file:

   rm -f dump.rdb

To disable AOF, set appendonly to no.
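Applied to the elk-redis ConfigMap above, the relevant redis.conf lines become:

    save ""          # RDB disabled: no automatic snapshots
    appendonly no    # AOF disabled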

 

6 Deploy Filebeat (deploying it on Kubernetes was not successful here, so it was instead installed on the host from the release tarball)

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.8.0-linux-x86_64.tar.gz

tar -zxvf filebeat-7.8.0-linux-x86_64.tar.gz

vi /data/elk/filebeat/filebeat-7.8.0-linux-x86_64/filebeat.yml 

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/test-logs/osale-uc-test/*.log
  tags: ["osale-uc-test"]
- type: log
  enabled: true
  paths:
    - /data/test-logs/osale-jindi-client-test/*.log
  tags: ["osale-jindi-client-test"]
- type: log
  enabled: true
  paths:
    - /data/test-logs/osale-admin-weixin-test/*/osale-admin-weixin/*.log
  tags: ["osale-admin-weixin"]
- type: log
  enabled: true
  paths:
    - /data/tengine/logs/*.log
  tags: ["nginx.log"]

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.redis:
  enabled: true
  hosts: ["10.36.21.220:30079"]
  password: "123456"
  db: 0
  key: localhost
  worker: 4
  timeout: 5
  max_retries: 3
  datatype: list
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
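Before starting Filebeat, the configuration and the Redis output can be validated with its built-in checks, run from the extracted directory:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml

The manifests after the separator below are from the unsuccessful in-cluster attempt, kept for reference.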
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-to-logstash
  namespace: halashow
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /logm/*.log
    output.logstash:
      hosts: ['logstash:5044']

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filebeat
  namespace: halashow
  labels: 
    name: filebeat
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: filebeat
  template:
    metadata:
      labels: 
        app: filebeat
        name: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.12.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]

        volumeMounts:
        - mountPath: /logm
          name: logm
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml

      volumes:
      - name: logm 
        emptyDir: {}
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config-to-logstash
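One plausible reason the in-cluster attempt shipped nothing is that the /logm volume above is an emptyDir, which starts empty and never receives the host's log files. A hedged fragment that mounts the host log directory instead (the path is illustrative):

      volumes:
      - name: logm
        hostPath:
          path: /data/test-logs   # host directory that actually contains the logs
          type: Directory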

cd /data/elk/filebeat-7.8.0-linux-x86_64

sudo ./filebeat -e -c filebeat.yml -d "publish"        # run Filebeat in the foreground

nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &  # run Filebeat in the background
 

7 Deploy Kibana

7.1 Prepare the resource manifests

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: halashow
  labels:
    name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kibana
  template:
    metadata:
      labels:
        app: kibana
        name: kibana
    spec:
      restartPolicy: Always
      containers:
        - name: kibana
          image: kibana:7.12.0
          imagePullPolicy: Always
          ports:
            - containerPort: 5601
          resources:
            requests:
              memory: 1024Mi
              cpu: 50m
            limits:
              memory: 1024Mi
              cpu: 1000m
          volumeMounts:
            - name: kibana-config
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
      volumes:
        - name: kibana-config
          configMap:
            name: kibana-cm
            items:
            - key: "kibana.yml"
              path: "kibana.yml"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: halashow
spec:
  type: NodePort
  ports:
    - name: kibana
      port: 5601
      nodePort: 30102
      protocol: TCP
      targetPort: 5601
  selector:
    app: kibana

The kibana.yml mounted from the kibana-cm ConfigMap contains the following settings:

server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"  # localize the Kibana UI to Chinese
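The Deployment above mounts this file from a ConfigMap named kibana-cm. That manifest is not shown in the original; a minimal sketch wrapping the settings would be:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-cm
  namespace: halashow
data:
  kibana.yml: |-
    server.name: kibana
    server.host: "0"
    elasticsearch.hosts: [ "http://elasticsearch:9200" ]
    monitoring.ui.container.elasticsearch.enabled: true
    i18n.locale: "zh-CN"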

If Kibana is placed behind Nginx, the location must be / (otherwise Kibana's assets will not load), proxied to the Kibana address with proxy_pass, as in the fragment below.
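A minimal Nginx fragment for this (the upstream address is a placeholder; for the halashow Kibana Service above it would be a node IP and NodePort 30102):

location / {
    # must be "/", otherwise Kibana's bundled assets fail to load
    proxy_pass http://<ip>:<port>;
}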

1. Set an expiry on Elasticsearch indices in Kibana so they are deleted automatically

First create an index pattern; its name must end with * so that the indices for every date are matched. Then create a ten-day index lifecycle policy, and finally an index template that covers all the log indices to be managed. The template settings can reuse the lifecycle snippet directly, or the template can be attached to the policy afterwards from the lifecycle-management screen.


2. Create an index lifecycle policy

In Kibana, open Index Management → Index Lifecycle Policies and configure the Delete phase.
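The same policy can also be created from Kibana Dev Tools; a minimal sketch of a delete-only, ten-day policy using the gdnb-test-10day name that appears below:

PUT _ilm/policy/gdnb-test-10day
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "10d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}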


3. Create an index template to manage all the indices

In Kibana, open Index Management → Index Templates.


{
  "index": {
    "lifecycle": {
      "name": "gdnb-tes*"
    }
  }
}

The lifecycle snippet above can be pasted into the template's settings, or the gdnb-test-10day policy can be attached from the Index Lifecycle Management screen instead.


4. Add the index template for the logs that must be kept ten days to the lifecycle policy just created.
