Problem description:
While initializing a node during a Kubernetes (K8s) install, kubeadm fails with: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
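Port 10248 is the kubelet's local healthz endpoint, so "connection refused" means nothing is listening on that port at all, i.e. the kubelet process is not running; it is not a network or firewall problem. As a minimal sketch, you can reproduce the same failure mode with curl against the port while the kubelet is down:

```shell
# With no kubelet bound to 10248, the TCP connect is refused and curl
# exits non-zero -- exactly what kubeadm's [kubelet-check] is reporting.
curl -s --max-time 2 http://localhost:10248/healthz \
  || echo "connect failed (kubelet not listening)"
```

Once the kubelet is healthy, the same request returns `ok`.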
# While initializing the control-plane (master) node during the K8s install, the following error appears:
queena@queena-Lenovo:~$ sudo kubeadm init --apiserver-advertise-address=192.168.31.245 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.22.3
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local queena-lenovo] and IPs [10.96.0.1 192.168.31.245]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost queena-lenovo] and IPs [192.168.31.245 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost queena-lenovo] and IPs [192.168.31.245 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
...
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
# Investigate using the commands suggested in the official output above.
queena@queena-Lenovo:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Mon 2022-12-12 16:53:20 CST; 8s ago
Docs: https://kubernetes.io/docs/home/
Process: 5208 ExecStart=/usr/bin/kubelet (code=exited, status=1/FAILURE)
Main PID: 5208 (code=exited, status=1/FAILURE)
queena@queena-Lenovo:~$ journalctl -xeu kubelet
--
-- Automatic restarting of the unit kubelet.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Dec 12 16:54:21 queena-Lenovo systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit kubelet.service has finished shutting down.
Dec 12 16:54:21 queena-Lenovo systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is "RESULT".
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.657289 6039 server.go:440] "Kubelet version" kubeletVersion="v1.22.3"
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.657859 6039 server.go:600] "Standalone mode, no API client"
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.737923 6039 server.go:488] "No api server defined - no events will be sent to API server"
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.737948 6039 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified. d
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738162 6039 container_manager_linux.go:280] "Container manager verified user specified cgroup-
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738244 6039 container_manager_linux.go:285] "Creating Container Manager object based on Node C
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738337 6039 topology_manager.go:133] "Creating topology manager with policy per scope" topolog
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738350 6039 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabl
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738378 6039 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738420 6039 kubelet.go:314] "Using dockershim is deprecated, please consider using a full-fled
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738454 6039 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/r
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.738468 6039 client.go:97] "Start docker client with request timeout" timeout="2m0s"
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.744849 6039 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling ba
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.744875 6039 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.744966 6039 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.748677 6039 docker_service.go:257] "Docker cri networking managed by the network plugin" netwo
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: I1212 16:54:21.756004 6039 docker_service.go:264] "Docker Info" dockerInfo=&{ID:H56N:KEB3:V52D:LJPL:GLGW:XBHY
Dec 12 16:54:21 queena-Lenovo kubelet[6039]: E1212 16:54:21.756040 6039 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguratio
Dec 12 16:54:21 queena-Lenovo systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 16:54:21 queena-Lenovo systemd[1]: kubelet.service: Failed with result 'exit-code'.
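The decisive line in all of this output is the single error-level entry, "Failed to run kubelet" (the err= message is truncated by the journal's line width). klog prefixes each record with a severity letter (I for info, E for error), so you can filter the noise down to the fatal record. A sketch, using two lines from the excerpt above as sample input; on the live node you would pipe `journalctl -u kubelet --no-pager` through the same grep:

```shell
# klog records start with a severity letter followed by the MMDD date,
# e.g. "I1212" or "E1212"; match the E-prefix to isolate errors. The
# pattern also allows a leading journal prefix before the klog record.
SAMPLE='I1212 16:54:21.738468 6039 client.go:97] "Start docker client with request timeout" timeout="2m0s"
E1212 16:54:21.756040 6039 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguratio'
printf '%s\n' "$SAMPLE" | grep -E '(^| )E[0-9]{4} '
```

This prints only the "Failed to run kubelet" line, which points straight at the misconfiguration addressed by the solutions below.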
Solution:
1. Cgroup driver mismatch: Docker's cgroup driver differs from the kubelet's
# Check Docker's cgroup driver:
queena@queena-Lenovo:~$ docker info | grep Cgroup
Cgroup Driver: cgroupfs
Cgroup Version: 1
WARNING: No swap limit support
# Check the kubelet's cgroup driver:
queena@queena-Lenovo:~$ sudo cat /var/lib/kubelet/config.yaml | grep cgroup
cgroupDriver: systemd
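The two commands above expose the mismatch: Docker reports cgroupfs while the kubelet is configured for systemd, and the kubelet exits immediately when the two disagree. The comparison can be wrapped in a small helper; this is only a sketch (the function name `check_cgroup_drivers` is made up), fed literal strings here, but on a live node you would pass in the outputs of `docker info -f '{{.CgroupDriver}}'` and the `cgroupDriver` value from /var/lib/kubelet/config.yaml:

```shell
# Compare the two driver strings; a mismatch is what makes the kubelet
# crash-loop and leaves the 10248 health check refused.
check_cgroup_drivers() {
  if [ "$1" = "$2" ]; then
    echo "OK: docker and kubelet both use '$1'"
  else
    echo "MISMATCH: docker='$1' kubelet='$2'"
  fi
}

check_cgroup_drivers cgroupfs systemd   # the state captured above
# -> MISMATCH: docker='cgroupfs' kubelet='systemd'
```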
# Change Docker's driver: edit /etc/docker/daemon.json (create it if it does not exist):
queena@queena-Lenovo:~$ sudo vim /etc/docker/daemon.json
# Add "exec-opts": ["native.cgroupdriver=systemd"] to the file:
{
"registry-mirrors": ["https://dpxn2pal.mirror.aliyuncs.com"],
"exec-opts": [ "native.cgroupdriver=systemd" ]
}
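A hand-edited daemon.json that loses a comma or bracket will prevent dockerd from starting at all, so it is worth validating the JSON before restarting Docker. A sketch, staging the config in a temporary file for illustration (the real file is /etc/docker/daemon.json) and assuming python3 is available for validation:

```shell
# Stage the new config in a temp file and confirm it parses as JSON;
# only then would you move it over /etc/docker/daemon.json.
DAEMON_JSON="$(mktemp)"
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["https://dpxn2pal.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$DAEMON_JSON" > /dev/null && echo "daemon.json is valid JSON"
```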
# Restart Docker:
queena@queena-Lenovo:~$ sudo systemctl daemon-reload
queena@queena-Lenovo:~$ sudo systemctl restart docker
# Restart the kubelet:
queena@queena-Lenovo:~$ sudo systemctl restart kubelet
queena@queena-Lenovo:~$ sudo kubeadm reset  # resetting here is safe: the previous init never came up anyway
# The next two commands verify the cgroup driver change took effect; both should report systemd.
queena@queena-Lenovo:~$ docker info -f {{.CgroupDriver}}
systemd
queena@queena-Lenovo:~$ docker info | grep -i cgroup
Cgroup Driver: systemd
Cgroup Version: 1
WARNING: No swap limit support
# Re-run the control-plane initialization; it now succeeds:
queena@queena-Lenovo:~$ sudo kubeadm init --apiserver-advertise-address=192.168.31.245 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.22.3
...
Your Kubernetes control-plane has initialized successfully!
...
2. The 10-kubeadm.conf configuration file is missing
(1) Check whether /etc/systemd/system/kubelet.service.d contains 10-kubeadm.conf. If it does not, create the file with the following contents:
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo vim 10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
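Note the bare `ExecStart=` line before the real one: in a systemd drop-in, the empty assignment clears any ExecStart inherited from kubelet.service, and without it systemd rejects the unit for having more than one ExecStart (which is only allowed for Type=oneshot). A small sanity check of that pattern, staged in a temp file for illustration (the real file lives in /etc/systemd/system/kubelet.service.d/):

```shell
# A well-formed kubelet drop-in has exactly two ExecStart lines:
# an empty reset followed by the real command.
DROPIN="$(mktemp)"
cat > "$DROPIN" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
grep -c '^ExecStart=' "$DROPIN"   # -> 2
```

After creating the real file, `sudo systemctl daemon-reload` picks it up, and `systemctl cat kubelet` shows the merged unit.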
(2) If the file already exists, add the following line to it (note that the --allow-privileged kubelet flag was removed in later Kubernetes releases, so omit it on recent versions such as the v1.22.3 used here):
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
(3) Restart kubelet.service:
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo systemctl daemon-reload
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo systemctl restart kubelet.service
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo kubeadm reset
...
queena@queena-Lenovo:/etc/systemd/system/kubelet.service.d$ sudo kubeadm init --apiserver-advertise-address=192.168.31.245 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.22.3
...
Your Kubernetes control-plane has initialized successfully!
...
With that, the problem is solved!