Notes on a Kubernetes (k8s) initialization error: "Error getting node" err="node \"k8s-master\" not found"




1. Error details

"Error getting node" err="node \"k8s-master\" not found"

Checking the kubelet log shows the error repeating:

[root@k8s-master ~]# journalctl -u kubelet

Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.267736    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.368592    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.469741    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.571557    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.671768    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.772126    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.872910    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
Apr 01 21:39:47 k8s-master kubelet[9638]: E0401 21:39:47.973850    9638 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
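To watch for these errors live while troubleshooting, the kubelet log can also be followed and filtered (just a convenience; the grep pattern is only an example):

# follow the kubelet log and show only the node-registration errors
journalctl -u kubelet -f --no-pager | grep "Error getting node"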

2. Troubleshooting

1. Operating system: CentOS 7.9

[root@k8s-master ~]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
[root@k8s-master ~]# 

2. Docker version check

[root@k8s-master ~]# docker -v
Docker version 24.0.5, build ced0996
[root@k8s-master ~]# 

3. kubelet version check

[root@k8s-master ~]# kubelet --version
Kubernetes v1.24.1
[root@k8s-master ~]# 

After looking into it:

Starting with v1.24, Kubernetes removed the built-in dockershim, so Docker Engine is no longer supported as a container runtime out of the box. The officially recommended runtimes are containerd or CRI-O; Docker can still be used, but only through the separate cri-dockerd adapter.
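Since this setup keeps Docker and talks to it through cri-dockerd (note the --cri-socket flag used during init below), it is worth confirming the adapter is actually installed and its socket exists before initializing. A quick check, assuming cri-dockerd was installed with its upstream systemd unit (cri-docker.service):

# verify the cri-dockerd socket and service that kubeadm will use
ls -l /var/run/cri-dockerd.sock
systemctl status cri-docker.service --no-pager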

3. Attempted fixes

1. Downgrade to Kubernetes v1.23.6. First remove the currently installed 1.24 versions of kubelet, kubectl, and kubeadm:

sudo yum remove kubelet kubectl kubeadm
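Before installing the lower version, it can help to confirm nothing from 1.24 is left behind (a simple check; package names may differ slightly depending on the repository):

# list any Kubernetes packages still installed after the removal
yum list installed | grep -E 'kubelet|kubeadm|kubectl'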


2. Download and install the v1.23.6 versions of kubelet, kubectl, and kubeadm:

sudo yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6 --disableexcludes=kubernetes

3. Verify the installation with the following commands:

kubelet --version
kubectl version --short
kubeadm version
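After reinstalling, make sure the kubelet service is enabled so it starts on boot and kubeadm's preflight check does not warn about it (it is normal for the kubelet to crash-loop until kubeadm init has written its configuration):

# enable and start the kubelet service
sudo systemctl enable --now kubelet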


4. Reset the Kubernetes cluster state on the local machine

Reset the Kubernetes state on this machine, returning all kubeadm-generated configuration, data, and state to a clean slate. Running kubeadm reset:

Deletes the local etcd data under /var/lib/etcd (for a stacked etcd member).
Removes the certificates, kubeconfig files, and static Pod manifests that kubeadm created under /etc/kubernetes.
Does not reset iptables or IPVS rules by itself; it only prints a reminder to clean them up manually.
Does not delete the CNI plugin configuration in /etc/cni/net.d; that must also be removed manually (a cleanup sketch follows below).

kubeadm reset


This command is typically used when you need to clean up a cluster environment, re-initialize a cluster, or uninstall Kubernetes entirely. After kubeadm reset completes, a brand-new cluster can be initialized.
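Note that kubeadm reset leaves some things for you to clean up by hand: it prints reminders that CNI configuration, iptables/IPVS rules, and any old kubeconfig must be removed manually. A typical manual cleanup sketch (run only what applies to your setup; ipvsadm is only relevant if kube-proxy ran in IPVS mode):

# remove leftover CNI plugin configuration
sudo rm -rf /etc/cni/net.d
# flush iptables rules created by kube-proxy / the CNI plugin
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# clear IPVS tables (only if kube-proxy used IPVS mode)
sudo ipvsadm --clear
# drop the kubeconfig that points at the old cluster
rm -f $HOME/.kube/config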

Without resetting first, re-running kubeadm init fails the preflight checks with errors like these:

[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-6443]: Port 6443 is in use
	[ERROR Port-10259]: Port 10259 is in use
	[ERROR Port-10257]: Port 10257 is in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR Port-2379]: Port 2379 is in use
	[ERROR Port-2380]: Port 2380 is in use
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
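These preflight failures simply mean a previous control plane is still (partially) present. Before resetting, you can confirm what is holding the ports and which manifests are left over:

# see which processes own the control-plane ports
ss -lntp | grep -E ':(6443|10257|10259|10250|2379|2380)'
# list leftover static Pod manifests
ls /etc/kubernetes/manifests/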

5. Initialize again, pinning the Kubernetes version to v1.23.6, to bring up a brand-new cluster:

kubeadm init \
  --apiserver-advertise-address=192.168.234.20 \
  --control-plane-endpoint=k8s-master \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.6 \
  --service-cidr=10.11.0.0/16 \
  --pod-network-cidr=172.30.0.0/16 \
  --cri-socket unix:///var/run/cri-dockerd.sock
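One detail worth checking here: --control-plane-endpoint=k8s-master uses a hostname, so it has to resolve to the advertise address on this node (and later on any joining node). If there is no DNS record for it, add an /etc/hosts entry first (using the addresses from this environment):

# make the control-plane endpoint hostname resolvable
echo "192.168.234.20 k8s-master" | sudo tee -a /etc/hosts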


————————————Problem solved, initialization succeeded——————————

[root@k8s-master ~]# kubeadm init \
>   --apiserver-advertise-address=192.168.234.20 \
>   --control-plane-endpoint=k8s-master \
>   --image-repository registry.aliyuncs.com/google_containers \
>   --kubernetes-version v1.23.6 \
>   --service-cidr=10.11.0.0/16 \
>   --pod-network-cidr=172.30.0.0/16 \
>   --cri-socket unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.11.0.1 192.168.234.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.234.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.234.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.003345 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lad5yi.ib6gterchvmkw2xd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token lad5yi.ib6gterchvmkw2xd \
	--discovery-token-ca-cert-hash sha256:eb567a446cd6a0d79da694f4ab23b5c7bf2be4df86f4aecfadef07716fbabd2b \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token lad5yi.ib6gterchvmkw2xd \
	--discovery-token-ca-cert-hash sha256:eb567a446cd6a0d79da694f4ab23b5c7bf2be4df86f4aecfadef07716fbabd2b 
[root@k8s-master ~]# 
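With the control plane up, check the node and the system pods, then install a pod network. Until a CNI plugin is applied the master stays NotReady and CoreDNS stays Pending; whichever CNI you choose (Calico, Flannel, etc.) must be configured to match the pod CIDR 172.30.0.0/16 used above:

# the node shows NotReady until a CNI plugin is installed
kubectl get nodes
kubectl get pods -n kube-system -o wide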

4. Error 2: failure during kubeadm init

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
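When the health check on port 10248 is refused like this, the kubelet itself is usually crash-looping. Its service status and recent log entries show the underlying reason (in this case, the cgroup driver mismatch described below):

# check whether the kubelet is running and why it last exited
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 50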

Cause:

Docker's cgroup driver defaults to cgroupfs, while the kubelet configuration generated by kubeadm defaults to systemd. The two drivers must match, so Docker's cgroup driver needs to be changed to systemd.
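The mismatch can be confirmed before changing anything (Docker reports its driver directly; the kubelet's driver is recorded in /var/lib/kubelet/config.yaml once kubeadm has written it):

# compare the two cgroup drivers
docker info | grep -i 'cgroup driver'
grep cgroupDriver /var/lib/kubelet/config.yaml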

Fix:

kubeadm reset

vim /etc/docker/daemon.json
Add "exec-opts": ["native.cgroupdriver=systemd"], so the file looks like this:
{
  "registry-mirrors": ["https://lqwtpdhx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
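After saving daemon.json, restart Docker so the new cgroup driver takes effect, then restart the kubelet and run the reset/init sequence again; with the drivers matched, the initialization above completes successfully:

sudo systemctl daemon-reload
# restart Docker with the systemd cgroup driver
sudo systemctl restart docker
sudo systemctl restart kubelet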


