A Guide to Installing a Kubernetes Cluster on Ubuntu


Base versions and environment:

MacBook Pro Apple M2 Max

VMware Fusion Player 版本 13.0.2 (21581413)

ubuntu-22.04.2-live-server-arm64

k8s-v1.27.3

docker 24.0.2

VMware Fusion runs on the MacBook and hosts six virtualized Ubuntu nodes. kubeadm is used to install Kubernetes on containerd, forming a non-highly-available cluster with flannel as the network plugin. Installing VMware and Ubuntu is not covered here; there are plenty of references online. The experimental cluster consists of six Ubuntu nodes: one master and five workers. All of the following steps are performed as a non-root user; the Linux username used here is zhangzk.

hostname   IP address       k8s role   spec
zzk-1      192.168.19.128   worker     2 cores / 4 GB
zzk-2      192.168.19.130   worker     2 cores / 4 GB
zzk-3      192.168.19.131   worker     2 cores / 4 GB
zzk-4      192.168.19.132   worker     2 cores / 4 GB
zzk-5      192.168.19.133   worker     2 cores / 4 GB
test       192.168.19.134   master     2 cores / 4 GB
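The guide addresses the nodes by IP address throughout. If you prefer resolvable names, a hypothetical /etc/hosts fragment matching the table above would look like this (optional; nothing below depends on it):

```
192.168.19.128 zzk-1
192.168.19.130 zzk-2
192.168.19.131 zzk-3
192.168.19.132 zzk-4
192.168.19.133 zzk-5
192.168.19.134 test
```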

1. Update the environment (all nodes)

sudo apt update

2. Permanently disable swap (all nodes)

sudo swapon --show

If a swap partition or file is enabled, you will see its path and size.

You can also check with the free command:

free -h

If swap is enabled, this shows the total size and current usage of the swap space.

Run the following command to disable swap:

sudo swapoff -a

Then delete the swap file:

sudo rm /swap.img

Next, edit the fstab file so the swap file is not re-enabled after a reboot. Comment out or delete this line in /etc/fstab:

/swap.img none swap sw 0 0

Run sudo swapon --show again to check; if swap is disabled, the command produces no output.
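The fstab edit above can also be scripted instead of done in an editor. A minimal sketch, demonstrated here against a sample copy of the file (on a real node you would back up /etc/fstab and run the sed against it with sudo):

```shell
# Sketch: comment out any swap entry in fstab non-interactively.
# fstab.sample stands in for /etc/fstab; adapt the path on a real node.
cat > fstab.sample <<'EOF'
UUID=1234-abcd / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Prefix non-comment lines whose filesystem type column is "swap" with '#'.
sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' fstab.sample

grep swap fstab.sample   # → #/swap.img none swap sw 0 0
```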

3. Disable the firewall (all nodes)

Check the current firewall status: sudo ufw status

inactive means the firewall is off; active means it is on.

Disable the firewall: sudo ufw disable

4. Allow iptables to see bridged traffic (all nodes)

1. Load the overlay and br_netfilter kernel modules:

sudo modprobe overlay && sudo modprobe br_netfilter

Persist the two modules so they are loaded again after a reboot:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Verify that the br_netfilter module is loaded by running lsmod | grep br_netfilter.

Verify that the overlay module is loaded by running lsmod | grep overlay.

2. Set kernel parameters so that traffic crossing a Linux bridge is also filtered by iptables FORWARD rules:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

Apply the sysctl parameters without rebooting:

sudo sysctl --system


5. Install Docker (all nodes)

We use the latest Docker release here; installing Docker also installs containerd.

For the installation steps, see the official documentation:

Install Docker Engine on Ubuntu | Docker Documentation

Uninstall old versions

Before you can install Docker Engine, you must first make sure that any conflicting packages are uninstalled.

for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done

Install using the apt repository

Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.

Set up the repository
  1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:

    sudo apt-get update
    sudo apt-get install ca-certificates curl gnupg
    
  2. Add Docker’s official GPG key:

    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    
  3. Use the following command to set up the repository:

    echo \
      "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    

    Note

    If you use an Ubuntu derivative distro, such as Linux Mint, you may need to use UBUNTU_CODENAME instead of VERSION_CODENAME.

   Install Docker Engine
  1. Update the apt package index:

    sudo apt-get update
    
  2. Install Docker Engine, containerd, and Docker Compose.

    • Latest
    • Specific version
     

    To install the latest version, run:

     sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    

  3. Verify that the Docker Engine installation is successful by running the hello-world image.

    sudo docker run hello-world
    

    This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.

6. Install the Kubernetes components (all nodes)

See the official documentation: Installing kubeadm | Kubernetes

1. Update the apt package index and install packages needed to use the Kubernetes apt repository:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

2. Download the Google Cloud public signing key:

# Google repo (requires a network that can reach Google)

curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

# Aliyun mirror (recommended)

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

3. Add the Kubernetes apt repository:

# Google repo (requires a network that can reach Google)

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Aliyun mirror (recommended)

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

4. Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Note: to install a specific version of kubectl/kubeadm/kubelet, here using 1.23.6 as an example:

# Install version 1.23.6
sudo apt-get install -y kubelet=1.23.6-00
sudo apt-get install -y kubeadm=1.23.6-00
sudo apt-get install -y kubectl=1.23.6-00

# Check versions

kubectl version --client && kubeadm version && kubelet --version

# Enable kubelet at boot
sudo systemctl enable kubelet

7. Configure the containerd runtime (all nodes)

First, generate containerd's default configuration file:

containerd config default | sudo tee /etc/containerd/config.toml

1. First change: enable the systemd cgroup driver.

Edit /etc/containerd/config.toml: find the containerd.runtimes.runc.options section and set SystemdCgroup = true.

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  Root = ""
  ShimCgroup = ""
  SystemdCgroup = true

2. Second change: switch the sandbox image from Google's registry to Aliyun's.

In /etc/containerd/config.toml, change this line:

sandbox_image = "registry.k8s.io/pause:3.6"

to:

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

[plugins."io.containerd.grpc.v1.cri"]
  restrict_oom_score_adj = false
  #sandbox_image = "registry.k8s.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
  selinux_category_range = 1024

Note: if you do not change it to pause:3.9, kubeadm reports the following warning when initializing the master:

W0628 15:02:49.331359 21652 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
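The two config.toml edits above can also be applied with sed rather than in an editor. A sketch, demonstrated here against a sample snippet of the file (on a real node, target /etc/containerd/config.toml with sudo and keep a backup first):

```shell
# Sketch: script the two config.toml changes with sed.
# config.toml.sample stands in for /etc/containerd/config.toml.
cat > config.toml.sample <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.6"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF

# Enable the systemd cgroup driver.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' config.toml.sample
# Point the sandbox (pause) image at the Aliyun mirror.
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.9#' config.toml.sample

grep -E 'SystemdCgroup|sandbox_image' config.toml.sample
```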

Restart containerd and enable it at boot:

sudo systemctl restart containerd
sudo systemctl enable containerd

List the images kubeadm will use:

kubeadm config images list

8. Initialize the master node (master only)

Generate a default init configuration:

kubeadm config print init-defaults > kubeadm.conf

Edit kubeadm.conf as follows:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.19.134 # change to the master's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: test # change to the master node's hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # change to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # added (matches the flannel default pod CIDR)
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Initialize the master node:

sudo kubeadm init --config=kubeadm.conf

The output log looks like this:

[init] Using Kubernetes version: v1.27.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local test] and IPs [10.96.0.1 192.168.19.134]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost test] and IPs [192.168.19.134 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost test] and IPs [192.168.19.134 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 3.501515 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node test as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node test as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.19.134:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:f8139e292a34f8371b4b541a86d8f360f166363886348a596e31f2ebd5c1cdbf

Since we are working as a non-root user, run the following on the master to configure kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. Configure networking (master only)

On the master node, install the flannel network plugin:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

If this times out, retry a few times; if it keeps failing you will need a proxy that can reach GitHub.

Alternatively, download kube-flannel.yml locally and then apply it:

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

kubectl apply -f kube-flannel.yml

10. Join the worker nodes (run on each worker)

On the master node, print the join command:

kubeadm token create --print-join-command

Run the printed join command on each worker node that should join the cluster:

sudo kubeadm join 192.168.19.134:6443 --token lloamy.3g9y3tx0bjnsdhqk --discovery-token-ca-cert-hash sha256:f8139e292a34f8371b4b541a86d8f360f166363886348a596e31f2ebd5c1cdbf

Configure kubectl on the worker nodes

Copy /etc/kubernetes/admin.conf from the master to the same path on each worker, then run the following on the worker (as the non-root user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster nodes:

zhangzk@test:~$ kubectl get nodes
NAME    STATUS   ROLES           AGE     VERSION
test    Ready    control-plane   13h     v1.27.3
zzk-1   Ready    <none>          2m21s   v1.27.3
zzk-2   Ready    <none>          89s     v1.27.3
zzk-3   Ready    <none>          75s     v1.27.3
zzk-4   Ready    <none>          68s     v1.27.3
zzk-5   Ready    <none>          42s     v1.27.3

11. Useful commands

List all pods in all namespaces: kubectl get po --all-namespaces

List pods with their labels: kubectl get po --show-labels

List all namespaces: kubectl get namespace

List nodes: kubectl get nodes

With that, you are ready to take off into the cloud-native world.
