Fixing the "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed" error during node initialization in a K8s install

Problem description:
While installing K8s, node initialization fails with the error: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
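
As a quick sanity check, you can run by hand the same probe kubeadm performs. Port 10248 is the kubelet's local healthz endpoint; a healthy kubelet replies with "ok":

# Manually probe the kubelet health endpoint. While the kubelet is down this
# fails with the same "connection refused" as in the kubeadm output; once the
# kubelet is running, it prints "ok".
curl -sSL http://localhost:10248/healthz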

# While initializing the K8s control-plane (master) node, the following error appears:
queena@queena-Lenovo:~$ sudo kubeadm init --apiserver-advertise-address=192.168.31.245 --pod-network-cidr=10.244.0.0/16  --kubernetes-version=v1.22.3
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local queena-lenovo] and IPs [10.96.0.1 192.168.31.245]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost queena-lenovo] and IPs [192.168.31.245 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost queena-lenovo] and IPs [192.168.31.245 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

......
    Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
# Check the service using the troubleshooting commands suggested in the output above.
queena@queena-Lenovo:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: exit-code) since Mon 2022-12-12 16:53:20 CST; 8s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 5208 ExecStart=/usr/bin/kubelet (code=exited, status=1/FAILURE)
 Main PID: 5208 (code=exited, status=1/FAILURE)
 
queena@queena-Lenovo:~$ journalctl -xeu kubelet
-- 
-- Automatic restarting of the unit kubelet.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
12月 12 16:54:21 queena-Lenovo systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: kubelet.service 单元已结束停止操作
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- kubelet.service 单元已结束停止操作。
12月 12 16:54:21 queena-Lenovo systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: kubelet.service 单元已结束启动
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- kubelet.service 单元已结束启动。
-- 
-- 启动结果为“RESULT”。
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.657289    6039 server.go:440] "Kubelet version" kubeletVersion="v1.22.3"
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.657859    6039 server.go:600] "Standalone mode, no API client"
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.737923    6039 server.go:488] "No api server defined - no events will be sent to API server"
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.737948    6039 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  d
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738162    6039 container_manager_linux.go:280] "Container manager verified user specified cgroup-
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738244    6039 container_manager_linux.go:285] "Creating Container Manager object based on Node C
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738337    6039 topology_manager.go:133] "Creating topology manager with policy per scope" topolog
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738350    6039 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabl
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738378    6039 state_mem.go:36] "Initialized new in-memory state store"
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738420    6039 kubelet.go:314] "Using dockershim is deprecated, please consider using a full-fled
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738454    6039 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/r
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738468    6039 client.go:97] "Start docker client with request timeout" timeout="2m0s"
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.744849    6039 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling ba
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.744875    6039 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.744966    6039 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.748677    6039 docker_service.go:257] "Docker cri networking managed by the network plugin" netwo
12月 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.756004    6039 docker_service.go:264] "Docker Info" dockerInfo=&{ID:H56N:KEB3:V52D:LJPL:GLGW:XBHY
12月 12 16:54:21 queena-Lenovo kubelet[6039]: E1212 16:54:21.756040    6039 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguratio
12月 12 16:54:21 queena-Lenovo systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
12月 12 16:54:21 queena-Lenovo systemd[1]: kubelet.service: Failed with result 'exit-code'.
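
The journal lines above are clipped at the terminal width; the decisive one is the E-level line ending in "Failed to run kubelet" err="failed to run Kubelet: misconfiguratio…". A small sketch to read it untruncated (piping journalctl avoids the line clipping):

# Print kubelet journal entries without pager truncation and keep only the
# fatal startup error; on this host the full message reports the cgroup
# driver mismatch between the kubelet ("systemd") and docker ("cgroupfs").
sudo journalctl -u kubelet --no-pager -o cat | grep -i "failed to run kubelet"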

Solution:

1. Cgroup driver mismatch: Docker's cgroup driver differs from the kubelet's

The clipped "Failed to run kubelet: misconfiguratio..." journal line above is the key clue: the kubelet exits at startup when its cgroup driver disagrees with the container runtime's, which is exactly what the next two checks confirm.

# Check Docker's cgroup driver.
queena@queena-Lenovo:~$ docker info | grep Cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
WARNING: No swap limit support

# Check the kubelet's cgroup driver.
queena@queena-Lenovo:~$ sudo cat /var/lib/kubelet/config.yaml | grep cgroup
cgroupDriver: systemd

# Change Docker's cgroup driver: edit /etc/docker/daemon.json (create it by hand if it does not exist) and add the content below:
queena@queena-Lenovo:~$ sudo vim /etc/docker/daemon.json
# Add the line:   "exec-opts": ["native.cgroupdriver=systemd"]
{
  "registry-mirrors": ["https://dpxn2pal.mirror.aliyuncs.com"],
  "exec-opts": [ "native.cgroupdriver=systemd" ]
}
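
The reverse fix also works: leave Docker on cgroupfs and switch the kubelet to cgroupfs instead (on systemd hosts, the systemd driver used in the main fix is generally the recommended choice). The cleanest way is a kubeadm config file, since a plain edit of /var/lib/kubelet/config.yaml gets rewritten on the next init. A sketch reproducing this article's init flags; kubeadm-config.yaml is a hypothetical file name:

# kubeadm-config.yaml (hypothetical name) -- run with:
#   sudo kubeadm init --config kubeadm-config.yaml
# Note: --config cannot be combined with flags like --apiserver-advertise-address,
# so those settings move into the file.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.31.245
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.3
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs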

# Restart docker.
queena@queena-Lenovo:~$ sudo systemctl daemon-reload
queena@queena-Lenovo:~$ sudo systemctl restart docker
# Restart kubelet.
queena@queena-Lenovo:~$ sudo systemctl restart kubelet
queena@queena-Lenovo:~$ sudo kubeadm reset  # Resetting here is fine; the original cluster never came up anyway.

# The next two commands verify that the cgroup driver change took effect; both should report systemd.
queena@queena-Lenovo:~$ docker info -f {{.CgroupDriver}}
systemd
queena@queena-Lenovo:~$ docker info | grep -i cgroup
 Cgroup Driver: systemd
 Cgroup Version: 1
WARNING: No swap limit support
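
One more preflight item worth confirming before re-running init: the kubelet refuses to start while swap is enabled (this is what the --fail-swap-on flag mentioned in solution 2 relates to). A quick sketch, assuming a standard Ubuntu /etc/fstab:

# Turn swap off now and keep it off across reboots; kubeadm's preflight and
# the kubelet (default --fail-swap-on=true) both require this.
sudo swapoff -a
# Comment out any uncommented swap entries in /etc/fstab.
sudo sed -i '/^[^#].*\bswap\b/s/^/#/' /etc/fstab
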
# Re-run the control-plane initialization; this time it succeeds.
queena@queena-Lenovo:~$ sudo kubeadm init --apiserver-advertise-address=192.168.31.245 --pod-network-cidr=10.244.0.0/16  --kubernetes-version=v1.22.3
...
Your Kubernetes control-plane has initialized successfully!
...
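
Once init succeeds, kubeadm's success message tells you how to set up kubectl access; it boils down to the sketch below. The flannel manifest URL is an assumption based on the --pod-network-cidr=10.244.0.0/16 chosen above (flannel's default subnet); check the flannel project for the current manifest location, or install whichever CNI matches that CIDR.

# Copy the admin kubeconfig for the regular user (this mirrors the commands
# kubeadm prints after a successful init).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network; flannel is assumed here because of the 10.244.0.0/16 CIDR.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl get nodes    # the node should report Ready once the CNI is up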

2. The 10-kubeadm.conf configuration file is missing

(1) First check whether 10-kubeadm.conf exists under /etc/systemd/system/kubelet.service.d. If it does not, create the file and enter the following content:
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo vim 10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. 
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
(2) If the file already exists, older guides suggest adding the line below to it. Be careful on v1.22.3, though: the kubelet's --allow-privileged flag was removed back in Kubernetes 1.15, so including it will itself keep the kubelet from starting. If you only need the remaining flags, pass them through /etc/default/kubelet instead (see the sketch at the end of this section):
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
(3) Restart kubelet.service, then reset and re-run the initialization:
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo systemctl daemon-reload
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo systemctl restart kubelet.service
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo kubeadm reset
...
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo kubeadm init --apiserver-advertise-address=192.168.31.245 --pod-network-cidr=10.244.0.0/16  --kubernetes-version=v1.22.3
...
Your Kubernetes control-plane has initialized successfully!
...
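
As noted in step (2), the safer home for one-off kubelet flags on v1.22 is /etc/default/kubelet, which the drop-in above already sources. A minimal sketch; --fail-swap-on=false is only an example flag, needed just if you keep swap enabled:

# Flags placed here are picked up through the drop-in's
# EnvironmentFile=-/etc/default/kubelet line and survive package upgrades.
# Do NOT add --allow-privileged here: the flag no longer exists in kubelet v1.22.
echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet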

With that, the problem is solved!
