Recovering a K8s Cluster After the Host IP Addresses Change

1. Test Environment

k8s version:    v1.23.6
Docker version: 20.10.6

Node name     Old IP           New IP
k8s-master    192.168.6.100    192.168.6.200
k8s-node01    192.168.6.110    192.168.6.210
k8s-node02    192.168.6.120    192.168.6.220

Cluster information before the IP change:

[root@k8s-master ~]# kubectl get node -o wide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   14h   v1.23.6   192.168.6.100   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6
k8s-node01   Ready    worker                 14h   v1.23.6   192.168.6.110   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6
k8s-node02   Ready    worker                 14h   v1.23.6   192.168.6.120   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
default       nginx-deployment-8d545c96d-9kmbp           1/1     Running   0               82s
default       nginx-deployment-8d545c96d-dn98t           1/1     Running   0               82s
default       nginx-deployment-8d545c96d-k9cf6           1/1     Running   0               82s
kube-system   calico-kube-controllers-677cd97c8d-bv2zj   1/1     Running   2 (4m24s ago)   14h
kube-system   calico-node-79jf6                          1/1     Running   1 (4m52s ago)   14h
kube-system   calico-node-f4xcw                          1/1     Running   1 (13h ago)     14h
kube-system   calico-node-qqm2h                          1/1     Running   1 (13h ago)     14h
kube-system   coredns-6d8c4cb4d-6mcs5                    1/1     Running   1 (13h ago)     14h
kube-system   coredns-6d8c4cb4d-wvq85                    1/1     Running   1 (13h ago)     14h
kube-system   etcd-k8s-master                            1/1     Running   1 (4m52s ago)   14h
kube-system   kube-apiserver-k8s-master                  1/1     Running   1 (4m52s ago)   14h
kube-system   kube-controller-manager-k8s-master         1/1     Running   2 (4m52s ago)   14h
kube-system   kube-proxy-227rt                           1/1     Running   1 (13h ago)     14h
kube-system   kube-proxy-lz7xb                           1/1     Running   1 (13h ago)     14h
kube-system   kube-proxy-tv7s4                           1/1     Running   1 (4m52s ago)   14h
kube-system   kube-scheduler-k8s-master                  1/1     Running   2 (4m52s ago)   14h

After changing the IP of the k8s-master node and rebooting the machine, kubectl fails as follows:

[root@k8s-master ~]# kubectl get pod
Unable to connect to the server: dial tcp 192.168.6.100:6443: connect: no route to host

2. Cluster Recovery

Master node

1. Update the hosts file on all machines

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.6.200 k8s-master
192.168.6.210 k8s-node01
192.168.6.220 k8s-node02
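
If the file previously contained the old addresses, one way to update it on every node is a simple sed substitution (a convenience sketch, not part of the original write-up; adjust the addresses to your environment):

sed -i -e 's/192.168.6.100/192.168.6.200/g' \
       -e 's/192.168.6.110/192.168.6.210/g' \
       -e 's/192.168.6.120/192.168.6.220/g' /etc/hosts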

2. Replace every occurrence of the old IP with the new IP in /etc/kubernetes/*.conf

[root@k8s-master ~]# cd /etc/kubernetes
[root@k8s-master kubernetes]# find . -type f | xargs sed -i "s/192.168.6.100/192.168.6.200/g"
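
To confirm nothing was missed, a recursive grep for the old address should come back empty (a quick sanity check added here, not part of the original procedure):

grep -r "192.168.6.100" /etc/kubernetes/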

3. Replace the old IP with the new IP in $HOME/.kube/config (note: if you run kubectl with sudo, the copy under /root must be updated as well)

[root@k8s-master kubernetes]# cd $HOME/.kube/
[root@k8s-master .kube]# find . -type f | xargs sed -i "s/192.168.6.100/192.168.6.200/g"

4. Rename the directory under $HOME/.kube/cache/discovery/ to use the new IP

[root@k8s-master .kube]# cd $HOME/.kube/cache/discovery/
[root@k8s-master discovery]# pwd
/root/.kube/cache/discovery
[root@k8s-master discovery]# ls
192.168.6.100_6443
[root@k8s-master discovery]# mv 192.168.6.100_6443/ 192.168.6.200_6443/
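
This directory is only kubectl's client-side discovery cache, so an alternative (an assumption on my part, not the author's original step) is simply to delete the cache and let kubectl rebuild it:

rm -rf $HOME/.kube/cache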

5. Regenerate the apiserver certificate

[root@k8s-master discovery]# cd /etc/kubernetes/pki
[root@k8s-master pki]# mv apiserver.key apiserver.key.bak
[root@k8s-master pki]# mv apiserver.crt apiserver.crt.bak
[root@k8s-master pki]# kubeadm init phase certs apiserver  --apiserver-advertise-address  192.168.6.200
I0925 20:46:29.798106    8933 version.go:255] remote version is much newer: v1.28.2; falling back to: stable-1.23
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.6.200]
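
To double-check that the regenerated serving certificate carries the new address, its Subject Alternative Names can be inspected with openssl (an optional verification step):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"

The listed IPs should include 192.168.6.200, matching the kubeadm output above.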

6. Restart kubelet and edit the ConfigMaps, replacing the old IP with the new one; if the coredns and cluster-info ConfigMaps contain no IP address, they do not need to be changed

[root@k8s-master pki]# systemctl restart kubelet
[root@k8s-master pki]# kubectl -n kube-system edit cm kubeadm-config
Edit cancelled, no changes made.
[root@k8s-master pki]# kubectl -n kube-system edit cm kube-proxy
configmap/kube-proxy edited
[root@k8s-master pki]# kubectl edit cm -n kube-system coredns
Edit cancelled, no changes made.
[root@k8s-master pki]# kubectl edit cm -n kube-public cluster-info
configmap/cluster-info edited
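
If you prefer a non-interactive edit over kubectl edit, the same substitution can be made by dumping each ConfigMap, replacing the address, and applying it back (a sketch assuming the old IP is the only thing that needs to change):

kubectl -n kube-system get cm kubeadm-config -o yaml | sed 's/192.168.6.100/192.168.6.200/g' | kubectl apply -f -
kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/192.168.6.100/192.168.6.200/g' | kubectl apply -f -
kubectl -n kube-public get cm cluster-info -o yaml | sed 's/192.168.6.100/192.168.6.200/g' | kubectl apply -f -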

7. Reboot the master node server and create a token for joining the cluster

[root@k8s-master ~]# reboot
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.6.200:6443 --token 5cxtc8.i8clppnjdzryqove --discovery-token-ca-cert-hash sha256:cba86bdb61980525c3e93734e60befed1a6d126da1ffe8473ab14b55b045495b 
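
Before rejoining the workers, it is worth confirming that the API server answers at the new address; kubectl cluster-info prints the control-plane endpoint the client is now using (an optional check):

kubectl cluster-info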

Worker nodes

[root@k8s-node01 ~]# kubeadm  reset
[root@k8s-node01 ~]# kubeadm join 192.168.6.200:6443 --token 5cxtc8.i8clppnjdzryqove --discovery-token-ca-cert-hash sha256:cba86bdb61980525c3e93734e60befed1a6d126da1ffe8473ab14b55b045495b 
[root@k8s-node02 ~]# kubeadm  reset
[root@k8s-node02 ~]# kubeadm join 192.168.6.200:6443 --token 5cxtc8.i8clppnjdzryqove --discovery-token-ca-cert-hash sha256:cba86bdb61980525c3e93734e60befed1a6d126da1ffe8473ab14b55b045495b 
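
Note that kubeadm reset does not clean up the node's CNI configuration; if a rejoined node shows pod networking problems, removing the stale CNI config before joining may help (an optional step, not in the original procedure):

rm -rf /etc/cni/net.d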

Check the cluster status from the master node

[root@k8s-master ~]# kubectl get node -o wide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   14h   v1.23.6   192.168.6.200   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6
k8s-node01   Ready    worker                 14h   v1.23.6   192.168.6.210   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6
k8s-node02   Ready    worker                 14h   v1.23.6   192.168.6.220   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.6
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
default       nginx-deployment-8d545c96d-9kmbp           1/1     Running   1               25m
default       nginx-deployment-8d545c96d-dn98t           1/1     Running   1               25m
default       nginx-deployment-8d545c96d-k9cf6           1/1     Running   1               25m
kube-system   calico-kube-controllers-677cd97c8d-bv2zj   1/1     Running   4 (5m24s ago)   14h
kube-system   calico-node-79jf6                          1/1     Running   4 (7m31s ago)   14h
kube-system   calico-node-f4xcw                          1/1     Running   2               14h
kube-system   calico-node-qqm2h                          1/1     Running   2               14h
kube-system   coredns-6d8c4cb4d-6mcs5                    1/1     Running   2               14h
kube-system   coredns-6d8c4cb4d-wvq85                    1/1     Running   2               14h
kube-system   etcd-k8s-master                            1/1     Running   1 (7m31s ago)   9m46s
kube-system   kube-apiserver-k8s-master                  1/1     Running   1 (7m30s ago)   9m46s
kube-system   kube-controller-manager-k8s-master         1/1     Running   4 (7m31s ago)   14h
kube-system   kube-proxy-227rt                           1/1     Running   2               14h
kube-system   kube-proxy-lz7xb                           1/1     Running   2               14h
kube-system   kube-proxy-tv7s4                           1/1     Running   3 (7m31s ago)   14h
kube-system   kube-scheduler-k8s-master                  1/1     Running   4 (7m31s ago)   14h

Check certificate expiration

[root@k8s-master ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 22, 2033 11:41 UTC   9y              ca                      no      
apiserver                  Sep 25, 2024 01:53 UTC   364d            ca                      no      
apiserver-etcd-client      Sep 22, 2033 11:41 UTC   9y              etcd-ca                 no      
apiserver-kubelet-client   Sep 22, 2033 11:41 UTC   9y              ca                      no      
controller-manager.conf    Sep 22, 2033 11:41 UTC   9y              ca                      no      
etcd-healthcheck-client    Sep 22, 2033 11:41 UTC   9y              etcd-ca                 no      
etcd-peer                  Sep 22, 2033 11:41 UTC   9y              etcd-ca                 no      
etcd-server                Sep 22, 2033 11:41 UTC   9y              etcd-ca                 no      
front-proxy-client         Sep 22, 2033 11:41 UTC   9y              front-proxy-ca          no      
scheduler.conf             Sep 22, 2033 11:41 UTC   9y              ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Sep 22, 2033 11:29 UTC   9y              no      
etcd-ca                 Sep 22, 2033 11:29 UTC   9y              no      
front-proxy-ca          Sep 22, 2033 11:29 UTC   9y              no      

The apiserver certificate regenerated in step 5 is now only valid for one year, so re-run the certificate renewal (here via the update-kubeadm-cert.sh script):

[root@k8s-master k8s-install]# ./update-kubeadm-cert.sh all
CERTIFICATE                                       EXPIRES                       
/etc/kubernetes/controller-manager.config         Sep 22 11:41:11 2033 GMT      
/etc/kubernetes/scheduler.config                  Sep 22 11:41:11 2033 GMT      
/etc/kubernetes/admin.config                      Sep 22 11:41:12 2033 GMT      
/etc/kubernetes/pki/ca.crt                        Sep 22 11:29:38 2033 GMT      
/etc/kubernetes/pki/apiserver.crt                 Sep 25 01:53:30 2024 GMT      
/etc/kubernetes/pki/apiserver-kubelet-client.crt  Sep 22 11:41:11 2033 GMT      
/etc/kubernetes/pki/front-proxy-ca.crt            Sep 22 11:29:38 2033 GMT      
/etc/kubernetes/pki/front-proxy-client.crt        Sep 22 11:41:12 2033 GMT      
/etc/kubernetes/pki/etcd/ca.crt                   Sep 22 11:29:39 2033 GMT      
/etc/kubernetes/pki/etcd/server.crt               Sep 22 11:41:11 2033 GMT      
/etc/kubernetes/pki/etcd/peer.crt                 Sep 22 11:41:11 2033 GMT      
/etc/kubernetes/pki/etcd/healthcheck-client.crt   Sep 22 11:41:11 2033 GMT      
/etc/kubernetes/pki/apiserver-etcd-client.crt     Sep 22 11:41:11 2033 GMT      
[2023-09-26T10:05:44.04+0800][INFO] backup /etc/kubernetes to /etc/kubernetes.old-20230926
[2023-09-26T10:05:44.04+0800][INFO] updating...
[2023-09-26T10:05:44.08+0800][INFO] updated /etc/kubernetes/pki/etcd/server.conf
[2023-09-26T10:05:44.11+0800][INFO] updated /etc/kubernetes/pki/etcd/peer.conf
[2023-09-26T10:05:44.14+0800][INFO] updated /etc/kubernetes/pki/etcd/healthcheck-client.conf
[2023-09-26T10:05:44.18+0800][INFO] updated /etc/kubernetes/pki/apiserver-etcd-client.conf
[2023-09-26T10:05:44.46+0800][INFO] restarted etcd
[2023-09-26T10:05:44.51+0800][INFO] updated /etc/kubernetes/pki/apiserver.crt
[2023-09-26T10:05:44.55+0800][INFO] updated /etc/kubernetes/pki/apiserver-kubelet-client.crt
[2023-09-26T10:05:44.59+0800][INFO] updated /etc/kubernetes/controller-manager.conf
[2023-09-26T10:05:44.63+0800][INFO] updated /etc/kubernetes/scheduler.conf
[2023-09-26T10:05:44.67+0800][INFO] updated /etc/kubernetes/admin.conf
[2023-09-26T10:05:44.67+0800][INFO] backup /root/.kube/config to /root/.kube/config.old-20230926
[2023-09-26T10:05:44.68+0800][INFO] copy the admin.conf to /root/.kube/config
[2023-09-26T10:05:44.68+0800][INFO] does not need to update kubelet.conf
[2023-09-26T10:05:44.71+0800][INFO] updated /etc/kubernetes/pki/front-proxy-client.crt
[2023-09-26T10:05:54.97+0800][INFO] restarted apiserver
[2023-09-26T10:05:55.22+0800][INFO] restarted controller-manager
[2023-09-26T10:05:55.25+0800][INFO] restarted scheduler
[2023-09-26T10:05:55.29+0800][INFO] restarted kubelet
[2023-09-26T10:05:55.29+0800][INFO] done!!!
CERTIFICATE                                       EXPIRES                       
/etc/kubernetes/controller-manager.config         Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/scheduler.config                  Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/admin.config                      Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/pki/ca.crt                        Sep 22 11:29:38 2033 GMT      
/etc/kubernetes/pki/apiserver.crt                 Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/pki/apiserver-kubelet-client.crt  Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/pki/front-proxy-ca.crt            Sep 22 11:29:38 2033 GMT      
/etc/kubernetes/pki/front-proxy-client.crt        Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/pki/etcd/ca.crt                   Sep 22 11:29:39 2033 GMT      
/etc/kubernetes/pki/etcd/server.crt               Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/pki/etcd/peer.crt                 Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/pki/etcd/healthcheck-client.crt   Sep 23 02:05:44 2033 GMT      
/etc/kubernetes/pki/apiserver-etcd-client.crt     Sep 23 02:05:44 2033 GMT     
[root@k8s-master k8s-install]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 23, 2033 02:05 UTC   9y              ca                      no      
apiserver                  Sep 23, 2033 02:05 UTC   9y              ca                      no      
apiserver-etcd-client      Sep 23, 2033 02:05 UTC   9y              etcd-ca                 no      
apiserver-kubelet-client   Sep 23, 2033 02:05 UTC   9y              ca                      no      
controller-manager.conf    Sep 23, 2033 02:05 UTC   9y              ca                      no      
etcd-healthcheck-client    Sep 23, 2033 02:05 UTC   9y              etcd-ca                 no      
etcd-peer                  Sep 23, 2033 02:05 UTC   9y              etcd-ca                 no      
etcd-server                Sep 23, 2033 02:05 UTC   9y              etcd-ca                 no      
front-proxy-client         Sep 23, 2033 02:05 UTC   9y              front-proxy-ca          no      
scheduler.conf             Sep 23, 2033 02:05 UTC   9y              ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Sep 22, 2033 11:29 UTC   9y              no      
etcd-ca                 Sep 22, 2033 11:29 UTC   9y              no      
front-proxy-ca          Sep 22, 2033 11:29 UTC   9y              no      
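
For reference, if the 10-year certificates produced by update-kubeadm-cert.sh are not required, the apiserver certificate alone can also be renewed with the standard kubeadm tooling (valid for one year) and the static pod restarted to pick it up (a rough sketch, not the author's method):

kubeadm certs renew apiserver
# restart the kube-apiserver static pod so it loads the new certificate,
# for example by briefly moving its manifest out of the static-pod directory
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/ && sleep 20 && mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/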

At this point the IP change is complete, and both the certificates and the Kubernetes cluster are back in a healthy state.
