Installing a Single-Node or Multi-Node Kubernetes Cluster on CentOS, with Some Useful Commands

This article walks through installing Kubernetes on CentOS, both as a single node and as a multi-node cluster, along with some commonly used commands.

Contents

Preface

1. Install Docker

2. Installation requirements

3. Prepare the network (skip this step for a single-node install)

4. Preparation

5. Installation

5.1 Configure the Aliyun Kubernetes yum repository

5.2 Install kubeadm, kubectl, and kubelet

5.3 Initialization (run only on the master, not on worker nodes)

5.3.1 Common errors (skip if you hit none)

5.4 Set up kubectl

5.5 Join worker nodes (skip for a single-node install)

5.6 Deploy the CNI network plugin

6. Extras: Kubernetes Dashboard


Preface

This guide applies only to Kubernetes versions before 1.24. Starting with 1.24, the built-in dockershim was removed and Kubernetes no longer supports the Docker runtime natively; you would need the third-party CRI adapter cri-dockerd instead.
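
For reference only (1.24+ is not covered further in this article): with cri-dockerd installed and its service running on the default socket path, kubeadm would be pointed at that socket instead of the removed dockershim. A minimal sketch, under those assumptions:

# tell kubeadm to use the cri-dockerd socket; the remaining flags are the same as in section 5.3
kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock ...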

1. Install Docker

See my previous post:

CentOS Docker installation and common commands: https://blog.csdn.net/o_CanDou6/article/details/135505341

2. Installation requirements

  • At least 2 GB of RAM, at least 2 CPU cores, and at least 30 GB of disk space (quick checks below).
  • Swap must be disabled.
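
A few quick checks to confirm the machine meets these requirements (standard Linux tools, nothing Kubernetes-specific):

free -h          # total memory (should be >= 2G)
nproc            # number of CPU cores (should be >= 2)
df -h /          # free disk space
swapon --show    # should print nothing once swap is disabled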

3. Prepare the network (skip this step for a single-node install)

A freshly installed CentOS server needs a static network configuration.
Open the network configuration file:

vi /etc/sysconfig/network-scripts/ifcfg-enp0s3 

Add the content below. BOOTPROTO="static" selects static addressing; NAME and DEVICE must match your network interface (enp0s3 here; if the interface is missing, install its driver first). Then fill in the IP address, netmask, gateway, and DNS entries at the bottom.

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static # change from dhcp to static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s3
UUID=5c84522d-4102-4260-9a23-4121bd510252
DEVICE=enp0s3
ONBOOT=yes
IPADDR=192.168.2.159 # your static IP
NETMASK=255.255.255.0 # your netmask
GATEWAY=192.168.2.1 # your gateway
DNS1=192.168.2.1 # your DNS server
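
After saving the file, the new settings still have to be applied. A minimal sketch, assuming CentOS 7 with the classic network service (on systems managed by NetworkManager the commands differ, e.g. nmcli connection reload followed by nmcli connection up enp0s3):

# apply the new static configuration (CentOS 7 classic network service)
systemctl restart network
# verify the address took effect
ip addr show enp0s3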

Two virtual machines are used for the demonstration, with the IP addresses below (substitute your own):

Role    Hostname  IP
master  master    192.168.2.159
worker  node1     192.168.2.64

4. Preparation

# permanently disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# permanently disable swap (comments out the swap entry in /etc/fstab)
sed -ri 's/.*swap.*/#&/' /etc/fstab
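
The sed command only takes effect after a reboot, because it just comments out the fstab entry. To turn swap off for the current session as well:

# disable swap immediately, without rebooting
swapoff -a
free -h   # the Swap line should now show 0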

# a single-node install can skip the following commands
## set the hostname on each of the two servers
hostnamectl set-hostname <hostname>

## edit the hosts file
vi /etc/hosts

# add the following entries, using your own IPs
192.168.2.159 master
192.168.2.64 node1

5. Installation

5.1 Configure the Aliyun Kubernetes yum repository

vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

You can list the available Kubernetes versions:

yum list kubelet --showduplicates | sort -r

We will use version 1.21.0 as the example. Note that each Kubernetes version requires a compatible Docker version, otherwise the installation will fail (you can check your installed Docker version with the command after this list).

Some common Kubernetes-to-Docker version pairings:

  • Kubernetes v1.22.x: Docker 20.10.x
  • Kubernetes v1.21.x: Docker 20.10.x
  • Kubernetes v1.20.x: Docker 19.03.x
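
To check which Docker version is installed:

# print only the Docker engine version
docker version --format '{{.Server.Version}}'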

5.2 Install kubeadm, kubectl, and kubelet

yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0

systemctl enable kubelet
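
You can confirm that the expected 1.21.0 binaries were installed:

kubeadm version
kubelet --version
kubectl version --client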

5.3 Initialization (run only on the master, not on worker nodes)

# --apiserver-advertise-address = this machine's IP
kubeadm init --kubernetes-version=1.21.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=[master-ip] --ignore-preflight-errors=all --image-repository=registry.aliyuncs.com/google_containers
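
If the init fails partway (for example because of the image-pull error described in 5.3.1), the node should be cleaned up before retrying. A sketch, assuming nothing else on the machine depends on the contents of /etc/kubernetes:

# undo a failed kubeadm init so it can be run again
kubeadm reset -f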

5.3.1 Common errors (skip if you hit none)

[WARNING FileExisting-tc]: tc not found in system path 错误

# Fix: install the missing tc tool
yum install iproute-tc -y 

[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1

# Fix: pull the coredns image from Docker Hub and re-tag it with the name kubeadm expects
# (pulling the pinned tag coredns/coredns:1.8.0 instead of latest is safer, if that tag is available)
docker pull coredns/coredns:latest
docker tag coredns/coredns:latest registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

5.4 Set up kubectl

When initialization finishes, kubeadm prints follow-up instructions: the kubectl setup commands below are run on the master (this machine), and the kubeadm join command it prints is run on the worker nodes to join the cluster (see 5.5).

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Once that is done you can list the nodes:

kubectl get nodes

For now, only the master itself shows up.
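
For a purely single-node setup, note that kubeadm taints the control-plane node so ordinary pods will not be scheduled on it. If workloads should run on this one machine, the taint has to be removed; for v1.21 the taint key is node-role.kubernetes.io/master (newer releases use node-role.kubernetes.io/control-plane):

# allow normal pods to be scheduled on the (single) master node
kubectl taint nodes --all node-role.kubernetes.io/master-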

5.5 Join worker nodes (skip for a single-node install)

Worker nodes do not run the kubeadm init from section 5.3. They only run the kubeadm join command printed by the master to join the cluster:

# run the command from your own init output; the token and hash will be different
kubeadm join 192.168.2.159:6443 --token e5doub.g27604rf65vj02yr \
        --discovery-token-ca-cert-hash sha256:2521d2d4ee37750feba14a00ef0de0dfc390b1141f7abda81b0e259ce01870af 
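
The bootstrap token in the join command expires after 24 hours by default. If it has expired, a fresh join command can be generated on the master:

# print a new kubeadm join command with a new token
kubeadm token create --print-join-command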

After the worker has joined, run kubectl get nodes on the master again; the new node appears in the list.

The worker node has joined, but kubectl does not work on it yet: copy /etc/kubernetes/admin.conf from the master into /etc/kubernetes/ on the worker (for example with scp, sketched below).
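
A minimal sketch of that copy, run on the master and assuming root SSH access to the worker at the IP used in this example:

# copy the admin kubeconfig from the master to the worker node
scp /etc/kubernetes/admin.conf root@192.168.2.64:/etc/kubernetes/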

Then run the following on the worker node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl now works on the worker node as well.

5.6 Deploy the CNI network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If that URL is not reachable, copy the manifest below into a local file (e.g. kube-flannel.yml) and apply it with kubectl apply -f kube-flannel.yml (verification commands follow after the manifest).

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.24.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.24.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
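
Either way, once the manifest has been applied you can check that flannel is running and that the nodes have become Ready:

# the flannel DaemonSet runs one pod per node in the kube-flannel namespace
kubectl get pods -n kube-flannel
# nodes should switch from NotReady to Ready once the CNI plugin is up
kubectl get nodes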

6. Extras: Kubernetes Dashboard

On the master, apply recommended.yaml and dashboard-adminuser.yml to get a web UI for the cluster (both files are listed at the end of this section):

kubectl apply -f recommended.yaml
kubectl apply -f dashboard-adminuser.yml 
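
You can check that the dashboard pods are running and that the Service is exposed on NodePort 32508 (as configured in the recommended.yaml below):

kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard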

Once the pods are running, open https://[node-ip]:32508/#/login in Firefox, e.g. https://192.168.2.159:32508/#/login in this setup.

Note: do not use a recent version of Chrome or Edge here; the dashboard serves a self-signed certificate and those browsers refuse to open the page, while Firefox lets you accept the certificate and continue.

The login page asks for a token. Run the following on the master to retrieve it:

# get the login token for the admin user
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin | awk '{print $1}')

Paste the token into the login form to enter the dashboard.

recommended.yaml:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32508
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

dashboard-adminuser.yml:

apiVersion: v1
kind: ServiceAccount
metadata:
    name: admin-user
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
    name: admin-user
    annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
