kubeadm kube-apiserver Exited and never comes back up: root-cause investigation notes
[root@k8s-master01 log]# crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
b7af23a98302e fce326961ae2d 16 seconds ago Running etcd 29 16fc6f83a01d2 etcd-k8s-master01
1d0efa5c0c12d a31e1d84401e6 About a minute ago Exited kube-apiserver 1693 dfc4c0a0c3e03 kube-apiserver-k8s-master01
275f08ddab851 fce326961ae2d 6 minutes ago Exited etcd 28 16fc6f83a01d2 etcd-k8s-master01
b11025cbc4661 5d7c5dfd3ba18 6 hours ago Running kube-controller-manager 27 0ff05b544ff48 kube-controller-manager-k8s-master01
4db4688c2687f dafd8ad70b156 30 hours ago Running kube-scheduler 25 5f4d13cedf450 kube-scheduler-k8s-master01
b311bf0e66852 54637cb36d4a1 7 days ago Running calico-node 0 ff2f4ac3783bb calico-node-2zqhn
108695e1af006 a1a5060fe43dc 9 days ago Running kuboard 2 7bee3baf06a62 kuboard-cc79974cd-t9jth
536a8cdfb0a9b 115053965e86b 9 days ago Running metrics-scraper 2 046881f3feea3 metrics-scraper-7f4896c5d7-6w6ld
c91c3382c9c9d 556768f31eb1d 9 days ago Running kube-proxy 6 ce658d774a03b kube-proxy-gsv75
[root@k8s-master01 log]#
Check the logs:
cat /var/log/messages|grep kube-apiserver|grep -i error
Feb 16 12:08:16 k8s-master01 kubelet: E0216 12:08:16.192310 8996 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-k8s-master01_kube-system(d6b90b54ef1ec678fb3557edd6baf627)\"" pod="kube-system/kube-apiserver-k8s-master01" podUID=d6b90b54ef1ec678fb3557edd6baf627
Because kube-apiserver itself is down, the usual troubleshooting commands such as
kubectl describe
kubectl logs
are unusable. Since the containers run under containerd.service, the crictl logs command still works.
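A minimal sketch of that workflow, assuming crictl is already configured for the containerd socket (the container ID is a placeholder to fill in from the first command):

crictl ps -a --name kube-apiserver --state exited   # list the failed kube-apiserver containers
crictl logs <container-id>                          # dump the logs of the chosen container
# if crictl cannot find the runtime socket, point it at containerd explicitly
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a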
[root@k8s-master01 log]# crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
b3c010e0082be a31e1d84401e6 39 seconds ago Exited kube-apiserver 1701 dfc4c0a0c3e03 kube-apiserver-k8s-master01
401eba6ca6507 fce326961ae2d 2 minutes ago Running etcd 36 16fc6f83a01d2 etcd-k8s-master01
b11025cbc4661 5d7c5dfd3ba18 7 hours ago Running kube-controller-manager 27 0ff05b544ff48 kube-controller-manager-k8s-master01
4db4688c2687f dafd8ad70b156 31 hours ago Running kube-scheduler 25 5f4d13cedf450 kube-scheduler-k8s-master01
b311bf0e66852 54637cb36d4a1 7 days ago Running calico-node 0 ff2f4ac3783bb calico-node-2zqhn
108695e1af006 a1a5060fe43dc 9 days ago Running kuboard 2 7bee3baf06a62 kuboard-cc79974cd-t9jth
536a8cdfb0a9b 115053965e86b 9 days ago Running metrics-scraper 2 046881f3feea3 metrics-scraper-7f4896c5d7-6w6ld
c91c3382c9c9d 556768f31eb1d 9 days ago Running kube-proxy 6 ce658d774a03b kube-proxy-gsv75
[root@k8s-master01 log]# crictl logs b3c010e0082be
I0216 04:41:58.479545 1 server.go:555] external host was not specified, using 192.168.40.240
I0216 04:41:58.480162 1 server.go:163] Version: v1.26.0
I0216 04:41:58.480188 1 server.go:165] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0216 04:41:58.662208 1 shared_informer.go:273] Waiting for caches to sync for node_authorizer
I0216 04:41:58.662957 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0216 04:41:58.662977 1 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
E0216 04:42:18.665556 1 run.go:74] "command failed" err="context deadline exceeded"
W0216 04:42:18.665571 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
[root@k8s-master01 log]#
It looks like the connection to the etcd client port 2379 is failing.
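A quick way to check this from the node itself; a hedged sketch, assuming the kubeadm default certificate paths (they may differ in other setups):

ss -tlnp | grep 2379    # is anything listening on the etcd client port?
# probe etcd's /health endpoint with the apiserver's etcd client certificate
curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key /etc/kubernetes/pki/apiserver-etcd-client.key \
  https://127.0.0.1:2379/health

Since etcd is shown as Running here while the apiserver still times out, the next step is the etcd container's own logs.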
crictl logs 860b2d24e75c3   # the etcd container ID
{"level":"info","ts":"2023-02-16T04:48:39.238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 9331fa9e272a9c3f at term 107"}
{"level":"warn","ts":"2023-02-16T04:48:40.151Z","caller":"etcdserver/server.go:2075","msg":"failed to publish local member to cluster through raft","local-member-id":"e7a1aee40b9b8621","local-member-attributes":"{Name:k8s-master01 ClientURLs:[https://192.168.40.240:2379]}","request-path":"/0/members/e7a1aee40b9b8621/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 is starting a new election at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 became pre-candidate at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 received MsgPreVoteResp from e7a1aee40b9b8621 at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 5db16f545fa302b7 at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 9331fa9e272a9c3f at term 107"}
{"level":"warn","ts":"2023-02-16T04:48:40.447Z","caller":"etcdhttp/metrics.go:173","msg":"serving /health false; no leader"}
{"level":"warn","ts":"2023-02-16T04:48:40.448Z","caller":"etcdhttp/metrics.go:86","msg":"/health error","output":"{\"health\":\"false\",\"reason\":\"RAFT NO LEADER\"}","status-code":503}
.......
{"level":"warn","ts":"2023-02-16T06:55:15.139Z","caller":"etcdserver/server.go:2075","msg":"failed to publish local member to cluster through raft","local-member-id":"e7a1aee40b9b8621","local-member-attributes":"{Name:k8s-master01 ClientURLs:[https://192.168.40.240:2379]}","request-path":"/0/members/e7a1aee40b9b8621/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
[root@k8s-master01 log]#
From the logs, etcd itself is unhealthy: the member keeps starting pre-vote elections but never wins a leader ("RAFT NO LEADER"), so every request times out.
Reference:
http://www.caotama.com/1864029.html
The takeaway from that post: in a multi-member etcd cluster the other members must also be up, because a single member cannot reach a raft quorum and elect a leader on its own; kube-apiserver on this node cannot work until the peers rejoin.
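Under that assumption, member and endpoint health can be checked with etcdctl (run from a node that has the binary, or exec'ed inside the running etcd container); a sketch using the kubeadm default certificate paths and the three member addresses seen in the logs:

ETCDCTL_API=3 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
  --endpoints https://192.168.40.240:2379,https://172.19.217.32:2379,https://172.19.217.2:2379 \
  endpoint health
# member list shows which peer URL each member advertises
ETCDCTL_API=3 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
  --endpoints https://192.168.40.240:2379 \
  member list -w table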
Checking the standby master node turned up the cause: the host has multiple NICs and its IP is assigned dynamically.
[root@k8s-master03 manifests]# crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
a62d091cb77d6 a31e1d84401e6 24 seconds ago Exited kube-apiserver 92 f32dfab0c6d61 kube-apiserver-k8s-master03
580675496bfae fce326961ae2d 21 minutes ago Exited etcd 88 40fa395fc5f61 etcd-k8s-master03
d68f4e7158815 5d7c5dfd3ba18 7 hours ago Running kube-controller-manager 29 e46f277014232 kube-controller-manager-k8s-master03
7e42849ff064e dafd8ad70b156 7 hours ago Running kube-scheduler 28 95d3e74185619 kube-scheduler-k8s-master03
69a554f8f249e 5d7c5dfd3ba18 3 days ago Exited kube-controller-manager 28 d435a4f2a2550 kube-controller-manager-k8s-master03
927c7330986d4 dafd8ad70b156 3 days ago Exited kube-scheduler 27 6bd230494824d kube-scheduler-k8s-master03
2b61030c4f724 5185b96f0becf 13 days ago Exited coredns 1 417cff0be932c coredns-567c556887-gqxc2
e62bfc4d92bf2 54637cb36d4a1 13 days ago Exited calico-node 2 65cb679894ec0 calico-node-4qgnf
e419b9d4ed335 54637cb36d4a1 13 days ago Exited mount-bpffs 0 65cb679894ec0 calico-node-4qgnf
bce9d5d9b2faa 628dd70880410 13 days ago Exited install-cni 0 65cb679894ec0 calico-node-4qgnf
1508b3b2344e8 556768f31eb1d 13 days ago Exited kube-proxy 2 4d7e52b0ee4b5 kube-proxy-qn6lx
ee1622dfb7d36 628dd70880410 13 days ago Exited upgrade-ipam 2 65cb679894ec0 calico-node-4qgnf
[root@k8s-master03 manifests]#
[root@k8s-master03 manifests]#
[root@k8s-master03 manifests]# crictl logs 580675496bfae
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.217.2:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/eted","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.217.2:2380","--initial-cluster=k8s-master03=https://172.19.217.2:2380,k8s-master02=https://172.19.217.32:2380,k8s-master01=https://192.168.40.240:2380","--initial-cluster-state=existing","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.217.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.217.2:2380","--name=k8s-master03","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/eted","dir-type":"member"}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.217.2:2380"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"k8s-master03","data-dir":"/var/lib/eted","advertise-peer-urls":["https://172.19.217.2:2380"],"advertise-client-urls":["https://172.19.217.2:2379"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"k8s-master03","data-dir":"/var/lib/eted","advertise-peer-urls":["https://172.19.217.2:2380"],"advertise-client-urls":["https://172.19.217.2:2379"]}
{"level":"fatal","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"listen tcp 172.19.217.2:2380: bind: cannot assign requested address","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:32\nruntime.main\n\truntime/proc.go:225"}
[root@k8s-master03 manifests]#
etcd on k8s-master03 is trying to listen on
https://172.19.217.2:2380
but fails with "bind: cannot assign requested address", which means that address is no longer configured on any local interface. The cause is the multi-NIC host getting its IPs dynamically: eth1 is the NIC dedicated to SSH access, while eth0, which used to hold 172.19.217.2 (the address recorded in the etcd manifest), has since been assigned a different IP.
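To confirm (the interface name eth0 comes from this environment; adjust as needed):

ip addr show eth0 | grep 172.19.217.2   # empty output means the expected address is gone
ip -4 addr show                         # list every IPv4 address currently on the node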
Fix: configure eth0 with the static address 172.19.217.2 so it matches the address in the manifests. Once etcd on the standby node could bind its peer URL again, the cluster regained quorum and kube-apiserver came back up.
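One way to pin the address, assuming a RHEL/CentOS-style node managed by NetworkManager (the connection name, prefix length, and gateway below are placeholders for this environment):

# switch the eth0 connection to a manual address matching the etcd manifest
nmcli con mod eth0 ipv4.method manual ipv4.addresses 172.19.217.2/24 ipv4.gateway <gateway>
nmcli con up eth0
# kubelet will recreate the CrashLooping static pods on its own once the address is back

An alternative on older setups is editing /etc/sysconfig/network-scripts/ifcfg-eth0 with BOOTPROTO=none plus IPADDR/PREFIX and restarting the network.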