After installing a Kubernetes cluster with kubeadm, it is worth taking a look at what is actually in the cluster before putting it to work. A Kubernetes cluster can be deployed in one of three modes: 1) Standalone components: the master and node components each run directly as system daemons; binary deployments fall into this category. 2) Static Pods: the control-plane components run as static Pod objects on the master node, the kubelet and containerd on each host run as system daemons, and kube-proxy is managed by a DaemonSet controller on the cluster. 3) Self-hosted: similar to the second mode, except that the control-plane components run as regular (non-static) Pod objects, which are likewise hosted on the cluster and managed by DaemonSet-type controllers. Since the test cluster here was installed with kubeadm and the selfHosting option was not enabled, it uses the second deployment mode.
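A quick way to confirm the static-Pod mode is to look at the kubelet's static Pod manifest directory on the master. The path below is the kubeadm default (it is set as staticPodPath in /var/lib/kubelet/config.yaml); adjust it if your setup differs:

zhangzk@test:~$ ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml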
First, list all the pods in the cluster (using -o wide so that each pod's IP and node are shown). The output is as follows.
zhangzk@zzk-1:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE      NAME                           READY   STATUS    RESTARTS            AGE   IP               NODE
kube-flannel   kube-flannel-ds-5v98d          1/1     Running   1 (<invalid> ago)   28h   192.168.19.131   zzk-3
kube-flannel   kube-flannel-ds-7p4mt          1/1     Running   1 (<invalid> ago)   28h   192.168.19.133   zzk-5
kube-flannel   kube-flannel-ds-d2qzw          1/1     Running   1 (<invalid> ago)   28h   192.168.19.130   zzk-2
kube-flannel   kube-flannel-ds-j42lk          1/1     Running   1 (<invalid> ago)   28h   192.168.19.132   zzk-4
kube-flannel   kube-flannel-ds-q6fkt          1/1     Running   1 (11h ago)         28h   192.168.19.134   test
kube-flannel   kube-flannel-ds-x464d          1/1     Running   2 (<invalid> ago)   28h   192.168.19.128   zzk-1
kube-system    coredns-7bdc4cb885-2qgtj       1/1     Running   1 (11h ago)         41h   10.244.0.5       test
kube-system    coredns-7bdc4cb885-wj5cr       1/1     Running   1 (11h ago)         41h   10.244.0.4       test
kube-system    etcd-test                      1/1     Running   1 (11h ago)         41h   192.168.19.134   test
kube-system    kube-apiserver-test            1/1     Running   7 (4h59m ago)       41h   192.168.19.134   test
kube-system    kube-controller-manager-test   1/1     Running   8 (139m ago)        41h   192.168.19.134   test
kube-system    kube-proxy-8hkml               1/1     Running   1 (11h ago)         41h   192.168.19.134   test
kube-system    kube-proxy-kzspb               1/1     Running   1 (<invalid> ago)   28h   192.168.19.132   zzk-4
kube-system    kube-proxy-l9tz7               1/1     Running   1 (<invalid> ago)   28h   192.168.19.133   zzk-5
kube-system    kube-proxy-qs8bt               1/1     Running   1 (<invalid> ago)   28h   192.168.19.131   zzk-3
kube-system    kube-proxy-xgvz8               1/1     Running   2 (<invalid> ago)   28h   192.168.19.128   zzk-1
kube-system    kube-proxy-xxbdn               1/1     Running   1 (<invalid> ago)   28h   192.168.19.130   zzk-2
kube-system    kube-scheduler-test            1/1     Running   8 (139m ago)        41h   192.168.19.134   test
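As an aside, the "<invalid>" entries in the RESTARTS column appear when the recorded restart time lies in the future from the point of view of the machine running kubectl, which usually indicates clock skew between the nodes. A quick check, assuming the nodes run systemd, looks like this:

zhangzk@zzk-1:~$ kubectl get nodes -o wide
zhangzk@zzk-1:~$ timedatectl status | grep -i synchronized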
1. The kube-flannel namespace
The pods in the kube-flannel namespace are easy to understand: since the cluster uses flannel as its network plugin, every node needs a flannel process to build the overlay network.
The pods in kube-flannel are managed by a DaemonSet, which can be viewed as follows.
zhangzk@zzk-1:~$ kubectl get ds -n kube-flannel -o wide
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS     IMAGES                              SELECTOR
kube-flannel-ds   6         6         6       6            6           <none>          29h   kube-flannel   docker.io/flannel/flannel:v0.22.0   app=flannel,k8s-app=flannel
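To see the overlay subnet that flannel assigned to a particular node, you can read the file that flanneld writes on each host. The path is flannel's conventional default, and the values below are only illustrative (the network should match the cluster CIDR, while the per-node subnet differs from node to node):

zhangzk@zzk-1:~$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true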
2. The kube-system namespace
The kube-system namespace contains many pods. The kube-proxy-XXX pods are easy to understand: every node needs a kube-proxy, and these pods are likewise managed by a DaemonSet.
zhangzk@zzk-1:~$ kubectl get ds -n kube-system -o wide
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                                                        SELECTOR
kube-proxy   6         6         6       6            6           kubernetes.io/os=linux   42h   kube-proxy   registry.aliyuncs.com/google_containers/kube-proxy:v1.27.0   k8s-app=kube-proxy
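kubeadm keeps kube-proxy's configuration in a ConfigMap in kube-system, so the proxy mode in effect can be checked without logging in to any node. An empty mode means kube-proxy falls back to its platform default (iptables on Linux):

zhangzk@zzk-1:~$ kubectl get configmap kube-proxy -n kube-system -o yaml | grep -w mode
    mode: ""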
The kube-system namespace also contains 4 pods whose names end with the suffix "test" (the hostname of the master node):
etcd-test
kube-apiserver-test
kube-controller-manager-test
kube-scheduler-test
These are the control-plane pods. They are static Pods that exist only on the master node; unlike pods managed by the control plane through DaemonSet, ReplicaSet, and similar controllers, they are managed locally and directly by the kubelet.
See the official documentation (kubeadm implementation details):
Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. Unlike Pods that are managed by the control plane (for example, a Deployment), the kubelet watches each static Pod (and restarts it if it fails).
Static Pods are always bound to one kubelet on a specific node.
The kubelet automatically tries to create a mirror Pod on the Kubernetes API server for each static Pod. This means that the static Pods running on a node are visible on the API server, but cannot be controlled from there. The Pod names will be suffixed with the node hostname with a leading hyphen.
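One consequence worth knowing: deleting one of these mirror Pods through the API server only removes the mirror object, and the kubelet recreates it almost immediately, so the control plane keeps running. The static origin is also recorded in the pod's metadata; the annotation queried below is what current kubelets set, so treat the exact output as illustrative:

zhangzk@test:~$ kubectl get pod kube-scheduler-test -n kube-system -o jsonpath='{.metadata.annotations.kubernetes\.io/config\.source}'
file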
The kube-system namespace also has 2 pods whose names start with "coredns-". These are easy to understand as well: they belong to CoreDNS and are managed by a Deployment/ReplicaSet.
zhangzk@test:~$ kubectl get deployment -n kube-system -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                     SELECTOR
coredns   2/2     2            2           44h   coredns      registry.aliyuncs.com/google_containers/coredns:v1.10.1   k8s-app=kube-dns
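The Deployment in turn owns a ReplicaSet; its name carries the pod-template hash 7bdc4cb885 that also appears in the coredns pod names listed earlier:

zhangzk@test:~$ kubectl get rs -n kube-system
NAME                 DESIRED   CURRENT   READY   AGE
coredns-7bdc4cb885   2         2         2       44h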
3. Processes on the Kubernetes nodes
The kube processes on the master node are listed below: kubelet, kube-proxy, flanneld, etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.
Among them, kubelet and kube-proxy are node components that run on every Kubernetes node, and flanneld is the process of the flannel network plugin. The remaining four processes (etcd, kube-apiserver, kube-controller-manager, and kube-scheduler) are control-plane processes found only on the master, and they correspond one-to-one with the static Pods described above.
zhangzk@test:~$ ps -aef|grep kube
root 761 1 1 10:40 ? 00:03:08 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
root 1326 1122 1 10:40 ? 00:02:52 etcd --advertise-client-urls=https://192.168.19.134:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.19.134:2380 --initial-cluster=test=https://192.168.19.134:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.19.134:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.19.134:2380 --name=test --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root 1639 1449 0 10:40 ? 00:00:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=test
root 1866 1411 0 10:40 ? 00:00:13 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 18419 1094 2 11:39 ? 00:04:30 kube-apiserver --advertise-address=192.168.19.134 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root 57299 1107 0 14:08 ? 00:00:12 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true
root 57379 1116 0 14:08 ? 00:00:04 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
zhangzk 64806 60598 0 14:37 pts/0 00:00:00 grep --color=auto kube
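Although etcd, kube-apiserver, kube-controller-manager, and kube-scheduler show up in ps on the host, they are container processes belonging to the static Pods and are managed by containerd. Assuming crictl is installed and pointed at the containerd socket, the corresponding containers can be listed like this:

zhangzk@test:~$ sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps --name 'etcd|kube-apiserver|kube-controller-manager|kube-scheduler'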
The master node also runs 2 coredns processes, as shown below:
zhangzk@test:~$ ps -aef|grep core
root 2093 2034 0 10:40 ? 00:00:24 /coredns -conf /etc/coredns/Corefile
root 2214 2161 0 10:40 ? 00:00:25 /coredns -conf /etc/coredns/Corefile
The processes on a worker node are shown below; only the 3 processes kubelet, kube-proxy, and flanneld are present.
zhangzk@zzk-1:~$ ps -aef |grep kube
root 755 1 0 11:42 ? 00:01:43 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
root 1176 1079 0 11:42 ? 00:00:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=zzk-1
root 1442 1090 0 11:42 ? 00:00:15 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
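Consistent with this, the static Pod manifest directory on a worker should be empty (assuming the same kubeadm default path as on the master):

zhangzk@zzk-1:~$ ls /etc/kubernetes/manifests
zhangzk@zzk-1:~$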
Note: a dockerd process is also present on all of the nodes above; it is not discussed here.