kubeadm kube-apiserver Exited and Won't Start: Root-Cause Notes



[root@k8s-master01 log]# crictl ps -a
CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
b7af23a98302e       fce326961ae2d       16 seconds ago       Running             etcd                      29                  16fc6f83a01d2       etcd-k8s-master01
1d0efa5c0c12d       a31e1d84401e6       About a minute ago   Exited              kube-apiserver            1693                dfc4c0a0c3e03       kube-apiserver-k8s-master01
275f08ddab851       fce326961ae2d       6 minutes ago        Exited              etcd                      28                  16fc6f83a01d2       etcd-k8s-master01
b11025cbc4661       5d7c5dfd3ba18       6 hours ago          Running             kube-controller-manager   27                  0ff05b544ff48       kube-controller-manager-k8s-master01
4db4688c2687f       dafd8ad70b156       30 hours ago         Running             kube-scheduler            25                  5f4d13cedf450       kube-scheduler-k8s-master01
b311bf0e66852       54637cb36d4a1       7 days ago           Running             calico-node               0                   ff2f4ac3783bb       calico-node-2zqhn
108695e1af006       a1a5060fe43dc       9 days ago           Running             kuboard                   2                   7bee3baf06a62       kuboard-cc79974cd-t9jth
536a8cdfb0a9b       115053965e86b       9 days ago           Running             metrics-scraper           2                   046881f3feea3       metrics-scraper-7f4896c5d7-6w6ld
c91c3382c9c9d       556768f31eb1d       9 days ago           Running             kube-proxy                6                   ce658d774a03b       kube-proxy-gsv75
[root@k8s-master01 log]#
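
The ATTEMPT column already tells most of the story: kube-apiserver has failed 1693 times, and etcd itself keeps restarting (attempt 29). The loop is easy to watch live with plain watch + crictl:

watch -n 5 'crictl ps -a | grep -E "kube-apiserver|etcd"'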

Check the logs:

cat /var/log/messages | grep kube-apiserver | grep -i error

Feb 16 12:08:16 k8s-master01 kubelet: E0216 12:08:16.192310    8996 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-k8s-master01_kube-system(d6b90b54ef1ec678fb3557edd6baf627)\"" pod="kube-system/kube-apiserver-k8s-master01" podUID=d6b90b54ef1ec678fb3557edd6baf627
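
On systemd hosts the same kubelet errors can also be pulled from the journal instead of grepping /var/log/messages, for example:

journalctl -u kubelet --since "10 minutes ago" | grep -i kube-apiserver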

Because kube-apiserver itself is down, the usual troubleshooting commands
such as kubectl describe and kubectl logs are unusable.
Since the containers run under containerd.service, crictl logs still works.
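
To avoid copying container IDs by hand, the newest kube-apiserver attempt can be dumped in one line (crictl lists the newest container first, as the outputs below show, and supports --name/-q filters):

crictl ps -a --name kube-apiserver -q | head -1 | xargs crictl logs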

[root@k8s-master01 log]# crictl  ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
b3c010e0082be       a31e1d84401e6       39 seconds ago      Exited              kube-apiserver            1701                dfc4c0a0c3e03       kube-apiserver-k8s-master01
401eba6ca6507       fce326961ae2d       2 minutes ago       Running             etcd                      36                  16fc6f83a01d2       etcd-k8s-master01
b11025cbc4661       5d7c5dfd3ba18       7 hours ago         Running             kube-controller-manager   27                  0ff05b544ff48       kube-controller-manager-k8s-master01
4db4688c2687f       dafd8ad70b156       31 hours ago        Running             kube-scheduler            25                  5f4d13cedf450       kube-scheduler-k8s-master01
b311bf0e66852       54637cb36d4a1       7 days ago          Running             calico-node               0                   ff2f4ac3783bb       calico-node-2zqhn
108695e1af006       a1a5060fe43dc       9 days ago          Running             kuboard                   2                   7bee3baf06a62       kuboard-cc79974cd-t9jth
536a8cdfb0a9b       115053965e86b       9 days ago          Running             metrics-scraper           2                   046881f3feea3       metrics-scraper-7f4896c5d7-6w6ld
c91c3382c9c9d       556768f31eb1d       9 days ago          Running             kube-proxy                6                   ce658d774a03b       kube-proxy-gsv75
[root@k8s-master01 log]# crictl  logs b3c010e0082be
I0216 04:41:58.479545       1 server.go:555] external host was not specified, using 192.168.40.240
I0216 04:41:58.480162       1 server.go:163] Version: v1.26.0
I0216 04:41:58.480188       1 server.go:165] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0216 04:41:58.662208       1 shared_informer.go:273] Waiting for caches to sync for node_authorizer
I0216 04:41:58.662957       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0216 04:41:58.662977       1 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
E0216 04:42:18.665556       1 run.go:74] "command failed" err="context deadline exceeded"
W0216 04:42:18.665571       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
  "Addr": "127.0.0.1:2379",
  "ServerName": "127.0.0.1",
  "Attributes": null,
  "BalancerAttributes": null,
  "Type": 0,
  "Metadata": null
}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
[root@k8s-master01 log]#

It looks like the connection to etcd's client port 2379 is failing.
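
Before reading etcd's own logs, the 2379 endpoint can be probed directly with etcdctl. A minimal sketch, assuming the kubeadm-default PKI layout (kubeadm generates the healthcheck-client pair; adjust paths if yours differ):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health

A healthy member answers "127.0.0.1:2379 is healthy" within milliseconds; here the command hangs and times out, matching the apiserver's handshake error.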

crictl logs 860b2d24e75c3   # ID of the etcd container
{"level":"info","ts":"2023-02-16T04:48:39.238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 9331fa9e272a9c3f at term 107"}
{"level":"warn","ts":"2023-02-16T04:48:40.151Z","caller":"etcdserver/server.go:2075","msg":"failed to publish local member to cluster through raft","local-member-id":"e7a1aee40b9b8621","local-member-attributes":"{Name:k8s-master01 ClientURLs:[https://192.168.40.240:2379]}","request-path":"/0/members/e7a1aee40b9b8621/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 is starting a new election at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 became pre-candidate at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 received MsgPreVoteResp from e7a1aee40b9b8621 at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 5db16f545fa302b7 at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 9331fa9e272a9c3f at term 107"}
{"level":"warn","ts":"2023-02-16T04:48:40.447Z","caller":"etcdhttp/metrics.go:173","msg":"serving /health false; no leader"}
{"level":"warn","ts":"2023-02-16T04:48:40.448Z","caller":"etcdhttp/metrics.go:86","msg":"/health error","output":"{\"health\":\"false\",\"reason\":\"RAFT NO LEADER\"}","status-code":503}
.......
{"level":"warn","ts":"2023-02-16T06:55:15.139Z","caller":"etcdserver/server.go:2075","msg":"failed to publish local member to cluster through raft","local-member-id":"e7a1aee40b9b8621","local-member-attributes":"{Name:k8s-master01 ClientURLs:[https://192.168.40.240:2379]}","request-path":"/0/members/e7a1aee40b9b8621/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
[root@k8s-master01 log]#

From these logs the problem looks internal to etcd: the member keeps starting pre-vote rounds at term 107 but never wins an election, so /health reports RAFT NO LEADER.
Reference: http://www.caotama.com/1864029.html
The insight from that post: the standby etcd members have to come up as well; the master's member only works fully once it can reach its peers. (With three members, quorum is two, so a single member can never elect a leader by itself.)
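
The no-leader state can also be polled from etcd's plain-HTTP metrics listener on 127.0.0.1:2381 (the --listen-metrics-urls flag in the manifest args further below):

curl -s http://127.0.0.1:2381/health
# {"health":"false","reason":"RAFT NO LEADER"} until quorum is restored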

Checking the standby node turned up the cause: the host has multiple NICs and its IP is assigned dynamically.

[root@k8s-master03 manifests]# crictl ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
a62d091cb77d6       a31e1d84401e6       24 seconds ago      Exited              kube-apiserver            92                  f32dfab0c6d61       kube-apiserver-k8s-master03
580675496bfae       fce326961ae2d       21 minutes ago      Exited              etcd                      88                  40fa395fc5f61       etcd-k8s-master03
d68f4e7158815       5d7c5dfd3ba18       7 hours ago         Running             kube-controller-manager   29                  e46f277014232       kube-controller-manager-k8s-master03
7e42849ff064e       dafd8ad70b156       7 hours ago         Running             kube-scheduler            28                  95d3e74185619       kube-scheduler-k8s-master03
69a554f8f249e       5d7c5dfd3ba18       3 days ago          Exited              kube-controller-manager   28                  d435a4f2a2550       kube-controller-manager-k8s-master03
927c7330986d4       dafd8ad70b156       3 days ago          Exited              kube-scheduler            27                  6bd230494824d       kube-scheduler-k8s-master03
2b61030c4f724       5185b96f0becf       13 days ago         Exited              coredns                   1                   417cff0be932c       coredns-567c556887-gqxc2
e62bfc4d92bf2       54637cb36d4a1       13 days ago         Exited              calico-node               2                   65cb679894ec0       calico-node-4qgnf
e419b9d4ed335       54637cb36d4a1       13 days ago         Exited              mount-bpffs               0                   65cb679894ec0       calico-node-4qgnf
bce9d5d9b2faa       628dd70880410       13 days ago         Exited              install-cni               0                   65cb679894ec0       calico-node-4qgnf
1508b3b2344e8       556768f31eb1d       13 days ago         Exited              kube-proxy                2                   4d7e52b0ee4b5       kube-proxy-qn6lx
ee1622dfb7d36       628dd70880410       13 days ago         Exited              upgrade-ipam              2                   65cb679894ec0       calico-node-4qgnf
[root@k8s-master03 manifests]# 
[root@k8s-master03 manifests]# 
[root@k8s-master03 manifests]# crictl logs 580675496bfae
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.217.2:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/eted","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.217.2:2380","--initial-cluster=k8s-master03=https://172.19.217.2:2380,k8s-master02=https://172.19.217.32:2380,k8s-master01=https://192.168.40.240:2380","--initial-cluster-state=existing","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.217.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.217.2:2380","--name=k8s-master03","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/eted","dir-type":"member"}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.217.2:2380"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"k8s-master03","data-dir":"/var/lib/eted","advertise-peer-urls":["https://172.19.217.2:2380"],"advertise-client-urls":["https://172.19.217.2:2379"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"k8s-master03","data-dir":"/var/lib/eted","advertise-peer-urls":["https://172.19.217.2:2380"],"advertise-client-urls":["https://172.19.217.2:2379"]}
{"level":"fatal","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"listen tcp 172.19.217.2:2380: bind: cannot assign requested address","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:32\nruntime.main\n\truntime/proc.go:225"}
[root@k8s-master03 manifests]# 
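
The addresses in those startup flags come straight from the static pod manifest that kubelet reads, which is where the stale IP is baked in:

grep -E 'advertise|listen' /etc/kubernetes/manifests/etcd.yaml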

https://172.19.217.2:2380
The bind failure points at the dynamically assigned IP on this multi-NIC host: eth1 is the dedicated SSH NIC, and the eth0 IP has since changed, so 172.19.217.2 no longer exists on the machine.
Fix: statically assign 172.19.217.2 to eth0.
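
A quick way to confirm the diagnosis and pin the address (a sketch assuming NetworkManager manages eth0; check nmcli con show for the actual connection name):

# 1. the address etcd wants to bind is gone from every NIC
ip -4 addr show | grep -F '172.19.217.2' || echo "172.19.217.2 not assigned"

# 2. statically assign it to eth0
nmcli con mod eth0 ipv4.addresses 172.19.217.2/24 ipv4.method manual
nmcli con up eth0

Once the address is back, etcd on master03 can bind 2380 and the cluster regains quorum.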

Fixed.
