Deploying Kubernetes with Kubespray (containerd + Cilium)

Reference: https://kubespray.io/#/

Create the virtual machines

# Install the VM management tool
$ brew install multipass
# Create the virtual nodes
$ multipass launch -n kubespray -m 1G -c 1 -d 30G
$ multipass launch -n cilium-node1 -m 2G -c 2 -d 30G
$ multipass launch -n cilium-node2 -m 2G -c 2 -d 30G
$ multipass launch -n cilium-node3 -m 2G -c 2 -d 30G
# List the nodes
$ multipass list
Name                    State             IPv4             Image
kubespray               Running           192.168.64.7     Ubuntu 22.04 LTS
cilium-node1            Running           192.168.64.8     Ubuntu 22.04 LTS
cilium-node2            Running           192.168.64.9     Ubuntu 22.04 LTS
cilium-node3            Running           192.168.64.10    Ubuntu 22.04 LTS
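
To get a root shell on any of these VMs (the rest of this guide runs commands as root), multipass shell plus sudo is enough; a quick sketch:

$ multipass shell kubespray
ubuntu@kubespray:~$ sudo -i
root@kubespray:~#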

Configure the kubespray node

root@kubespray:~# sudo apt update
root@kubespray:~# sudo apt install git python3 python3-pip -y
root@kubespray:~# git clone https://github.com/kubernetes-sigs/kubespray.git
root@kubespray:~# cd kubespray
root@kubespray:~# pip install -r requirements.txt

To verify the Ansible version, run:

root@kubespray:~# ansible --version
ansible [core 2.14.6]
  config file = /root/kubespray/ansible.cfg
  configured module search path = ['/root/kubespray/library']
  ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True

Create the host inventory by running the commands below, substituting your nodes' IP addresses:

root@kubespray:~# cp -rfp inventory/sample inventory/mycluster
root@kubespray:~# declare -a IPS=(192.168.64.8 192.168.64.9 192.168.64.10)
root@kubespray:~# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Plan the cluster's node names, roles, and groups

root@kubespray:~# vim inventory/mycluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 192.168.64.8
      ip: 192.168.64.8
      access_ip: 192.168.64.8
    node2:
      ansible_host: 192.168.64.9
      ip: 192.168.64.9
      access_ip: 192.168.64.9
    node3:
      ansible_host: 192.168.64.10
      ip: 192.168.64.10
      access_ip: 192.168.64.10
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

The key to a successful install from inside mainland China: configure mirrors

root@kubespray:~# cp inventory/mycluster/group_vars/all/offline.yml inventory/mycluster/group_vars/all/mirror.yml
# Uncomment every line that references {{ files_repo }}
root@kubespray:~# sed -i -E '/# .*\{\{ files_repo/s/^# //g' inventory/mycluster/group_vars/all/mirror.yml
root@kubespray:~# tee -a inventory/mycluster/group_vars/all/mirror.yml <<EOF
gcr_image_repo: "gcr.m.daocloud.io"
kube_image_repo: "k8s.m.daocloud.io"
docker_image_repo: "docker.m.daocloud.io"
quay_image_repo: "quay.m.daocloud.io"
github_image_repo: "ghcr.m.daocloud.io"
files_repo: "https://files.m.daocloud.io"
EOF
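
Before kicking off the long playbook run, it is worth confirming the DaoCloud mirror endpoints are reachable from your network; a quick check with curl (any of the configured hosts will do):

root@kubespray:~# curl -sI https://files.m.daocloud.io
root@kubespray:~# curl -sI https://docker.m.daocloud.io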

Configure the cluster version, network plugin, container runtime, and more

root@kubespray:~# vim inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# Choose the network plugin: cilium, calico, weave, and flannel are supported
# cilium is used here
kube_network_plugin: cilium

# If your node IPs differ from mine, make sure these two ranges do not conflict with them
# Service CIDR
kube_service_addresses: 10.233.0.0/18

# Pod CIDR
kube_pods_subnet: 10.233.64.0/18

# docker, crio, and containerd are supported; containerd is recommended
container_manager: containerd

# Whether to enable Kata Containers
kata_containers_enabled: false

# Cluster name. The default contains a '.', so setting your own is recommended
cluster_name: k8s-cilium

# To configure add-ons such as the Kubernetes Dashboard or an ingress controller, edit the file below
root@kubespray:~# vim inventory/mycluster/group_vars/k8s_cluster/addons.yml

# Verify that the changes took effect
root@kubespray:~# cat inventory/mycluster/group_vars/all/all.yml
root@kubespray:~# cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
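
As an illustration, a few commonly used toggles in addons.yml (variable names taken from the kubespray sample inventory; verify them against your kubespray version):

helm_enabled: true
metrics_server_enabled: true
ingress_nginx_enabled: true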

Make sure every server allows root login (root is used from here on; adapting this to a non-root user is left for you to explore)

# On each cluster node: allow root login over SSH
sudo sed -i '/PermitRootLogin /c PermitRootLogin yes' /etc/ssh/sshd_config
sudo systemctl restart sshd
# On the kubespray node: generate an SSH key pair
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
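
If you prefer not to log in to each VM by hand, the same sshd change can be pushed from the macOS host with multipass exec; a sketch for the three cluster nodes:

$ for n in cilium-node1 cilium-node2 cilium-node3; do
    multipass exec $n -- sudo sed -i '/PermitRootLogin /c PermitRootLogin yes' /etc/ssh/sshd_config
    multipass exec $n -- sudo systemctl restart sshd
  done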

Set up passwordless SSH

root@kubespray:~# ssh-copy-id root@192.168.64.8
root@kubespray:~# ssh-copy-id root@192.168.64.9
root@kubespray:~# ssh-copy-id root@192.168.64.10
# If you get a permission error, manually copy the contents of /root/.ssh/id_rsa.pub on the kubespray node and append it, on a new line, to /root/.ssh/authorized_keys on each of the other three nodes
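
With the keys in place, confirm that Ansible can reach every node in the inventory before going further; the built-in ping module is enough:

root@kubespray:~# cd kubespray
root@kubespray:~# ansible all -i inventory/mycluster/hosts.yaml -m ping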

Disable the firewall and enable IPv4 forwarding

# To disable the firewall on all nodes, run the following ansible command from the kubespray node.
# Note: these Ubuntu 22.04 VMs ship with ufw rather than firewalld; on RHEL-family nodes, stop and disable firewalld instead.
root@kubespray:~# cd kubespray
root@kubespray:~# ansible all -i inventory/mycluster/hosts.yaml -m shell -a "sudo ufw disable"

# Run the following ansible commands to enable IPv4 forwarding and disable swap on all nodes:
root@kubespray:~# ansible all -i inventory/mycluster/hosts.yaml -m shell -a "echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf"
root@kubespray:~# ansible all -i inventory/mycluster/hosts.yaml -m shell -a "sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab && sudo swapoff -a"
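
The sysctl entry only takes effect after a reload, so apply it immediately rather than waiting for a reboot:

root@kubespray:~# ansible all -i inventory/mycluster/hosts.yaml -m shell -a "sudo sysctl -p"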

Start the Kubernetes deployment

root@kubespray:~# cd kubespray
root@kubespray:~# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

# Wait roughly 30 minutes (depending on network conditions)
PLAY RECAP *******************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
node1                      : ok=747  changed=125  unreachable=0    failed=0    skipped=1245 rescued=0    ignored=5   
node2                      : ok=534  changed=76   unreachable=0    failed=0    skipped=736  rescued=0    ignored=1   

Tuesday 25 July 2023  10:46:02 +0800 (0:00:00.112)       0:15:28.991 **********
===============================================================================
download : download_file | Download item ---------------------------------------------------------------------- 57.79s 
download : download_container | Download image if required ---------------------------------------------------- 52.43s 
kubernetes/kubeadm : Join to cluster -------------------------------------------------------------------------- 41.03s 
download : download_file | Download item ---------------------------------------------------------------------- 40.75s 
download : download_container | Download image if required ---------------------------------------------------- 36.81s 
download : download_container | Download image if required ---------------------------------------------------- 36.53s 
download : download_file | Validate mirrors ------------------------------------------------------------------- 36.14s 
download : download_file | Validate mirrors ------------------------------------------------------------------- 36.10s 
download : download_container | Download image if required ---------------------------------------------------- 34.11s 
download : download_file | Download item ---------------------------------------------------------------------- 19.51s 
container-engine/containerd : download_file | Download item --------------------------------------------------- 18.23s 
download : download_container | Download image if required ---------------------------------------------------- 15.44s 
container-engine/crictl : download_file | Download item ------------------------------------------------------- 12.44s 
network_plugin/cilium : Cilium | Wait for pods to run --------------------------------------------------------- 10.75s 
download : download_container | Download image if required ---------------------------------------------------- 10.70s 
download : download_file | Download item ---------------------------------------------------------------------- 10.44s 
container-engine/runc : download_file | Download item --------------------------------------------------------- 9.63s 
etcd : reload etcd -------------------------------------------------------------------------------------------- 9.33s 
container-engine/nerdctl : download_file | Download item ------------------------------------------------------ 9.23s 
download : download_container | Download image if required ---------------------------------------------------- 8.71s
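
Kubespray leaves kubectl configured for root on the control-plane nodes, which is what the verification below uses. If you also want to drive the cluster from the kubespray node, a sketch (kubectl must be installed there, and the server address in the copied file may need changing to a control-plane IP if it points at 127.0.0.1):

root@kubespray:~# mkdir -p ~/.kube
root@kubespray:~# scp root@192.168.64.8:/etc/kubernetes/admin.conf ~/.kube/config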

Verify the cluster

root@cilium-node1:~# kubectl get node
NAME    STATUS   ROLES           AGE     VERSION
node1   Ready    control-plane   3h23m   v1.24.6
node2   Ready    control-plane   3h21m   v1.24.6
node3   Ready    <none>          3h17m   v1.24.6
root@cilium-node1:~# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-2               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
root@cilium-node1:~# kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS      AGE
kube-system   cilium-operator-f6648bc78-5qqn8   1/1     Running   2 (17m ago)   67m
kube-system   cilium-operator-f6648bc78-v44zf   1/1     Running   1 (17m ago)   67m
kube-system   cilium-qwkkf                      1/1     Running   0             67m
kube-system   cilium-s4rdr                      1/1     Running   0             67m
kube-system   cilium-v2dnw                      1/1     Running   1 (58m ago)   67m
kube-system   coredns-665c4cc98d-k47qm          1/1     Running   0             56m
kube-system   coredns-665c4cc98d-l7mwz          1/1     Running   0             54m
kube-system   dns-autoscaler-6567c8b74f-ql5bw   1/1     Running   0             62m
kube-system   kube-apiserver-node1              1/1     Running   1             3h23m
kube-system   kube-apiserver-node2              1/1     Running   1             3h21m
kube-system   kube-controller-manager-node1     1/1     Running   6 (77m ago)   3h23m
kube-system   kube-controller-manager-node2     1/1     Running   7 (45m ago)   3h21m
kube-system   kube-proxy-2dhd6                  1/1     Running   0             69m
kube-system   kube-proxy-92vgt                  1/1     Running   0             69m
kube-system   kube-proxy-n9vjn                  1/1     Running   0             69m
kube-system   kube-scheduler-node1              1/1     Running   5 (66m ago)   3h23m
kube-system   kube-scheduler-node2              1/1     Running   8 (45m ago)   3h21m
kube-system   nginx-proxy-node3                 1/1     Running   0             3h17m
kube-system   nodelocaldns-jwvjq                1/1     Running   0             61m
kube-system   nodelocaldns-l5plk                1/1     Running   0             61m
kube-system   nodelocaldns-sjjwk                1/1     Running   0             61m
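
Since the CNI is Cilium, the agent's own health report is also worth a look. The cilium CLI ships inside the agent pods, so it can be invoked through kubectl exec (a sketch, using one of the pod names from the output above):

root@cilium-node1:~# kubectl -n kube-system exec cilium-qwkkf -- cilium status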

Add a node

root@kubespray:~# vim inventory/mycluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 192.168.64.8
      ip: 192.168.64.8
      access_ip: 192.168.64.8
    node2:
      ansible_host: 192.168.64.9
      ip: 192.168.64.9
      access_ip: 192.168.64.9
    node3:
      ansible_host: 192.168.64.10
      ip: 192.168.64.10
      access_ip: 192.168.64.10
    node4: # new node
      ansible_host: 192.168.64.11
      ip: 192.168.64.11
      access_ip: 192.168.64.11
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
        node4: # new node
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
root@kubespray:~# ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root scale.yml -v -b
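
To avoid re-running tasks on the existing nodes, the kubespray docs suggest limiting the scale play to the node being added (assuming node4 is the only new node):

root@kubespray:~# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root scale.yml -v -b --limit=node4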

Remove a node

root@kubespray:~# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -v -b --extra-vars "node=node3" # Removes node3; note that node names are the ones kubespray assigned in the inventory
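
If the node being removed is already down or unreachable, kubespray can skip resetting it; a sketch using extra vars documented by kubespray (verify against your version):

root@kubespray:~# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -v -b --extra-vars "node=node3 reset_nodes=false allow_ungraceful_removal=true"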

Tear down the cluster in one step

root@kubespray:~# ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root reset.yml 
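
reset.yml asks for interactive confirmation before wiping the nodes; for unattended runs the confirmation can be supplied as a variable (behavior of the kubespray playbook; double-check on your version):

root@kubespray:~# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml -e reset_confirmation=yes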
