Setting up a Kubernetes cluster on Ubuntu 22.04: this one article is all you need!


Installing k8s

Preface

This guide deploys Kubernetes v1.25.3. Since Dockershim was removed from the Kubernetes project as of v1.24, the **container runtime** (the software responsible for running containers) is no longer Docker; this article uses containerd as the container runtime.
All of the packages and configuration files used here are available from the netdisk link at the end of the article.

1. Preparation

OS           CPU  RAM  IP               NIC  Hostname
Ubuntu22.04  2    4G   192.168.247.100  NAT  k8s-master
Ubuntu22.04  2    4G   192.168.247.101  NAT  k8s-slave1
Ubuntu22.04  2    4G   192.168.247.102  NAT  k8s-slave2
Minimum requirements: at least 2 CPU cores and 2 GB of RAM per node.
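A quick way to confirm each machine meets these requirements (a minimal check):

# number of CPU cores and amount of memory
nproc
free -h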

2. Environment configuration (run on all nodes)

Set the hostname

# on the master node
hostnamectl set-hostname k8s-master
# on the slave1 node
hostnamectl set-hostname k8s-slave1
# on the slave2 node
hostnamectl set-hostname k8s-slave2

Configure the hosts mapping

cat >> /etc/hosts << EOF
192.168.247.100 k8s-master
192.168.247.101 k8s-slave1
192.168.247.102 k8s-slave2
EOF
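To confirm the mapping works, each node should be able to reach the others by hostname (a quick sanity check):

ping -c 2 k8s-master
ping -c 2 k8s-slave1
ping -c 2 k8s-slave2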

Disable the firewall

# note: Ubuntu 22.04 uses ufw by default; if firewalld is not installed, run "ufw disable" instead
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux (Ubuntu uses AppArmor by default, so this step only applies if SELinux was installed manually; otherwise it can be skipped)

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap (swap must be disabled for the kubelet to work properly)

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
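Verify that swap is now off: the Swap line should show 0B, and the swap entry in /etc/fstab should be commented out:

free -h | grep -i swap
grep swap /etc/fstab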

Forward IPv4 and let iptables see bridged traffic

# forward IPv4 and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
lsmod | grep br_netfilter # verify that the br_netfilter module is loaded
# set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# apply the sysctl parameters without rebooting
sudo sysctl --system
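Optionally confirm that all three parameters are now set to 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward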

Configure time synchronization

# run the date command to check whether the system time is off
date
# change the time zone
sudo timedatectl set-timezone Asia/Shanghai
# install the ntp service
apt install ntp
# start the service
systemctl start ntp
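To check that the time zone and synchronization took effect (ntpq is installed together with the ntp package):

timedatectl   # the time zone should now be Asia/Shanghai
ntpq -p       # lists the upstream NTP servers being polled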

3. Install containerd (run on all nodes)

3.1 Install containerd

Download the containerd package

First go to https://github.com/, search for containerd, open the project, find Releases, and scroll down to the tar package for the matching version, as shown below:

(screenshot: the containerd Releases page showing the downloadable tar packages)

After downloading, copy the tarball to all three servers.

$ tar Cvzxf /usr/local containerd-1.6.9-linux-amd64.tar.gz

# start containerd via systemd
$ vi /etc/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
# reload the systemd configuration and start containerd
systemctl daemon-reload
systemctl enable --now containerd
# verify
ctr version
# generate the default configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
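A quick check that the service restarted cleanly:

systemctl is-active containerd   # should print "active"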

3.2 Install runc

# download runc from: https://github.com/opencontainers/runc/releases
# install
install -m 755 runc.amd64 /usr/local/sbin/runc
# verify
runc -v

3.3 Install the CNI plugins

# download the CNI plugins from: https://github.com/containernetworking/plugins/releases
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
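The plugin binaries (bridge, host-local, portmap, and so on) should now be present:

ls /opt/cni/bin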

3.4 Configure a registry mirror

# reference: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
# set config_path = "/etc/containerd/certs.d"
sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml
mkdir /etc/containerd/certs.d/docker.io -p

# replace https://xxxx.mirror.aliyuncs.com below with your own Alibaba Cloud registry mirror address (copy it from the Alibaba Cloud container registry console)
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://xxxx.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF
  
systemctl daemon-reload && systemctl restart containerd

4. cgroup driver (run on all nodes)

On Linux, control groups (cgroups) are used to constrain the resources allocated to processes.

Both the kubelet and the underlying container runtime need to interface with cgroups to manage resources such as CPU and memory for Pods and containers, and to enforce requests and limits.
To do this, the kubelet and the container runtime each need a cgroup driver. The key point is that the kubelet and the container runtime must use the same cgroup driver and be configured consistently.

# change SystemdCgroup = false to SystemdCgroup = true
sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/g' /etc/containerd/config.toml
# change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
sed -i 's/sandbox_image\ =.*/sandbox_image\ =\ "registry.aliyuncs.com\/google_containers\/pause:3.8"/g' /etc/containerd/config.toml

systemctl daemon-reload 
systemctl restart containerd
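Confirm that both substitutions landed in the config file:

grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml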

5. Install crictl (run on all nodes)

In Kubernetes, containers are managed with crictl rather than ctr.

crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.

# download from https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz

# point crictl at the containerd runtime
tar -vzxf crictl-v1.25.0-linux-amd64.tar.gz
mv crictl /usr/local/bin/

cat >>  /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
EOF

systemctl restart containerd
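As a combined test of crictl and the registry mirror, you can try pulling a small public image (a sketch; any image available on Docker Hub will do):

crictl pull docker.io/library/busybox:latest
crictl images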

6. Deploying the cluster with kubeadm

6.1 Switch to the Aliyun Kubernetes apt source (run on all nodes)

# without a mirror source apt reports "Unable to locate package XXX", and the official source is slow from mainland China, so the Aliyun source is used here
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"  | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
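If apt-get update complains about a missing GPG key (NO_PUBKEY) for this repository, the Aliyun signing key likely needs to be imported first; a sketch, assuming the key is published at the usual Aliyun location:

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update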

6.2 Install kubeadm, kubelet and kubectl (run on all nodes)

sudo apt install -y kubelet=1.25.3-00 kubeadm=1.25.3-00 kubectl=1.25.3-00
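Optionally, pin these packages so that a routine apt upgrade does not pull the cluster components to a different version later:

sudo apt-mark hold kubelet kubeadm kubectl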

6.3 Initialize with kubeadm (run on the master node)

# check the kubeadm version; here it is GitVersion:"v1.25.3"
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}


# generate the default configuration file
$ kubeadm config print init-defaults > kubeadm.yaml
$ vi kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.247.100  # change to the host IP, i.e. the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master   # change to this node's hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # switch to the Aliyun image repository
kind: ClusterConfiguration
kubernetesVersion: 1.25.3  # match the version reported by kubeadm version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   ## Pod network CIDR
scheduler: {}

### added content: configure the kubelet cgroup driver as systemd
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

After editing, pull the images and initialize the cluster.

# pull the images
$ kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers  --kubernetes-version=v1.25.3
# initialize
$ kubeadm init --config kubeadm.yaml
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.247.100:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
# run on the master node
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config


# run on the slave1 node
[root@k8s-slave1 ~]#  kubeadm join 192.168.247.100:6443 --token abcdef.0123456789abcdef         --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
# run on the slave2 node
[root@k8s-slave2 ~]#  kubeadm join 192.168.247.100:6443 --token abcdef.0123456789abcdef         --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
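Note that the bootstrap token in the join command expires after 24 hours (ttl: 24h0m0s in kubeadm.yaml). If a node needs to join later, a fresh join command can be generated on the master:

kubeadm token create --print-join-command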

Check whether the nodes have joined successfully:

# run on the master node
[root@k8s-master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   3m25s   v1.25.4
k8s-slave1   NotReady   <none>          128s    v1.25.4
k8s-slave2   NotReady   <none>          118s    v1.25.4

6.4 Deploy the Pod network (run on the master node)

After the steps above, the slave nodes have joined the cluster, but every node's STATUS is NotReady because no network plugin has been installed yet. Common network plugins include flannel, calico, and others; flannel is used here as an example.

# create the flannel.yaml configuration file
# the official flannel manifest is at https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
vim flannel.yaml

Add the following content:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.5
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.5
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Note: if you run kubectl apply -f flannel.yaml at this point, the image addresses point to registries hosted abroad, which leads to the situation shown below.
(screenshot: the resulting image pull failures)

So the image references in the official manifest need to be changed, replacing the image addresses of the install-cni-plugin, install-cni and kube-flannel containers (either by hand or with the sed shortcut sketched below). The modified configuration file follows.
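As a shortcut, the three image lines can also be swapped with sed instead of editing by hand; a sketch, using the replacement images chosen in this article:

sed -i 's#docker.io/flannel/flannel-cni-plugin:v1.1.2#rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0#' flannel.yaml
sed -i 's#docker.io/flannel/flannel:v0.21.5#lizhenliang/flannel:v0.11.0-amd64#g' flannel.yaml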

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

After making the changes, run kubectl apply -f flannel.yaml and wait a moment.

root@k8s-master:~# kubectl get pod -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-5bj7j            1/1     Running   1 (13m ago)   11h
kube-flannel   kube-flannel-ds-xvfkd            1/1     Running   1 (13m ago)   11h
kube-flannel   kube-flannel-ds-z9tcm            1/1     Running   1 (10m ago)   11h
kube-system    coredns-c676cc86f-72ntm          1/1     Running   1 (10m ago)   11h
kube-system    coredns-c676cc86f-r876l          1/1     Running   1 (10m ago)   11h
kube-system    etcd-master                      1/1     Running   1 (10m ago)   11h
kube-system    kube-apiserver-master            1/1     Running   1 (10m ago)   11h
kube-system    kube-controller-manager-master   1/1     Running   1 (10m ago)   11h
kube-system    kube-proxy-b92z7                 1/1     Running   1 (10m ago)   11h
kube-system    kube-proxy-jjsws                 1/1     Running   1 (13m ago)   11h
kube-system    kube-proxy-pqw6n                 1/1     Running   1 (13m ago)   11h
kube-system    kube-scheduler-master            1/1     Running   1 (10m ago)   11h
root@k8s-master:~# kubectl get node -A
NAME         STATUS   ROLES           AGE   VERSION
k8s-slave1   Ready    <none>          13h   v1.25.3
k8s-slave2   Ready    <none>          13h   v1.25.3
master       Ready    control-plane   13h   v1.25.3
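As a final smoke test, you can deploy a small workload and confirm it becomes reachable (a minimal sketch using a NodePort service; the nginx image and names are just examples):

kubectl create deployment nginx --image=nginx:alpine
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide
# then curl <any-node-ip>:<assigned NodePort> to verify connectivity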

This completes the Kubernetes cluster setup. Quite a few pitfalls were hit along the way, mostly caused by machine environment, image address, and version issues; hopefully this article helps!
Link: https://pan.baidu.com/s/1Jgp8B1FhAyNew-y_fE8beg?pwd=6iot
Extraction code: 6iot

