K8s Image Cache Management with kube-fledged, Explained


This article is shared from the Huawei Cloud community post "K8s 镜像缓存管理 kube-fledged 认知" (getting to know kube-fledged, the K8s image cache manager), by 山河已无恙.

As we know, when Kubernetes schedules a container, the target node first has to pull that container's image. In several special scenarios this becomes a problem:

  • Applications that need to start and/or scale out quickly. For example, an application doing real-time data processing has to scale rapidly when data volume surges.
  • Images are large and exist in many versions while node storage is limited, so unneeded images have to be cleaned up dynamically.
  • Serverless functions typically must react to incoming events and start containers within a fraction of a second.
  • IoT applications running on edge devices, which have to tolerate intermittent network connectivity between the edge device and the image registry.
  • Images have to be pulled from a private registry and not everyone can be granted pull access to it, so the images can instead be made available on the cluster's nodes.
  • A cluster administrator or operator needs to upgrade an application and wants to verify beforehand that the new image can be pulled successfully.

kube-fledged is a Kubernetes operator that creates and manages container image caches directly on the worker nodes of a Kubernetes cluster. It lets users define a list of images and the worker nodes onto which those images should be cached (i.e. pre-pulled). As a result, application Pods start almost instantly, because the images no longer need to be fetched from a registry.

kube-fledged provides a CRUD API to manage the lifecycle of an image cache, along with several configurable parameters, so the behavior can be tailored to your needs.

Kubernetes has a built-in image garbage collection mechanism: the kubelet on each node periodically checks whether disk usage has reached a certain threshold (configurable via flags). Once the threshold is reached, the kubelet automatically removes all unused images on that node.
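For reference, those thresholds live in the kubelet's KubeletConfiguration. A minimal sketch with the default values (adjust for your own cluster):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# image GC starts once image-filesystem usage crosses the high threshold...
imageGCHighThresholdPercent: 85
# ...and reclaims unused images until usage drops below the low threshold
imageGCLowThresholdPercent: 80
# an image must be unused at least this long before it is eligible for GC
imageMinimumGCAge: 2m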

To cope with this, kube-fledged implements an automatic, periodic refresh mechanism: if an image in the cache is removed by the kubelet's GC, the next refresh cycle pulls the deleted image back into the cache. This keeps the image cache up to date.
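The refresh interval is a controller-level setting rather than part of each ImageCache object. As a hedged sketch (the flag names below are taken from the upstream README; verify them with --help against the version you deploy), they can be tuned in the controller Deployment:

kubectl -n kube-fledged edit deployment kubefledged-controller
# under spec.template.spec.containers[0].args, e.g.:
#   --image-cache-refresh-frequency=30m   # how often caches are refreshed (default 15m)
#   --image-pull-deadline-duration=10m    # max time allowed for one image pull (default 5m)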

Design flow

https://github.com/senthilrch/kube-fledged/blob/master/docs/kubefledged-architecture.png

Deploying kube-fledged

Deploying with Helm

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$mkdir  kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$cd kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$export KUBEFLEDGED_NAMESPACE=kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$kubectl create namespace ${KUBEFLEDGED_NAMESPACE}
namespace/kube-fledged created
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm repo add kubefledged-charts https://senthilrch.github.io/kubefledged-charts/
"kubefledged-charts" has been added to your repositories
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubefledged-charts" chart repository
...Successfully got an update from the "kubescape" chart repository
...Successfully got an update from the "rancher-stable" chart repository
...Successfully got an update from the "skm" chart repository
...Successfully got an update from the "openkruise" chart repository
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "botkube" chart repository
Update Complete. ⎈Happy Helming!⎈
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$helm install --verify kube-fledged kubefledged-charts/kube-fledged -n ${KUBEFLEDGED_NAMESPACE} --wait

During the actual deployment the chart could not be downloaded because of network issues, so we switched to plain YAML manifests via make deploy-using-yaml.

Deploying with YAML files

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$git clone https://github.com/senthilrch/kube-fledged.git
正克隆到 'kube-fledged'...
remote: Enumerating objects: 10613, done.
remote: Counting objects: 100% (1501/1501), done.
remote: Compressing objects: 100% (629/629), done.
remote: Total 10613 (delta 845), reused 1357 (delta 766), pack-reused 9112
接收对象中: 100% (10613/10613), 34.58 MiB | 7.33 MiB/s, done.
处理 delta 中: 100% (4431/4431), done.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$ls
kube-fledged
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$cd kube-fledged/
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make deploy-using-yaml
kubectl apply -f deploy/kubefledged-namespace.yaml

On the first deployment attempt, the images could not be pulled:

┌──[root@vms100.liruilongs.github.io]-[~]
└─$kubectl get all -n kube-fledged
NAME                                               READY   STATUS                  RESTARTS         AGE
pod/kube-fledged-controller-df69f6565-drrqg        0/1     CrashLoopBackOff        35 (5h59m ago)   21h
pod/kube-fledged-webhook-server-7bcd589bc4-b7kg2   0/1     Init:CrashLoopBackOff   35 (5h58m ago)   21h
pod/kubefledged-controller-55f848cc67-7f4rl        1/1     Running                 0                21h
pod/kubefledged-webhook-server-597dbf4ff5-l8fbh    0/1     Init:CrashLoopBackOff   34 (6h ago)      21h

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kube-fledged-webhook-server   ClusterIP   10.100.194.199   <none>        3443/TCP   21h
service/kubefledged-webhook-server    ClusterIP   10.101.191.206   <none>        3443/TCP   21h

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-fledged-controller       0/1     1            0           21h
deployment.apps/kube-fledged-webhook-server   0/1     1            0           21h
deployment.apps/kubefledged-controller        0/1     1            0           21h
deployment.apps/kubefledged-webhook-server    0/1     1            0           21h

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-fledged-controller-df69f6565        1         1         0       21h
replicaset.apps/kube-fledged-webhook-server-7bcd589bc4   1         1         0       21h
replicaset.apps/kubefledged-controller-55f848cc67        1         1         0       21h
replicaset.apps/kubefledged-webhook-server-597dbf4ff5    1         1         0       21h
┌──[root@vms100.liruilongs.github.io]-[~]
└─$

Let's find out which images need to be pulled:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat *.yaml | grep image:
      - image: senthilrch/kubefledged-controller:v0.10.0
      - image: senthilrch/kubefledged-webhook-server:v0.10.0
      - image: senthilrch/kubefledged-webhook-server:v0.10.0

Pull them manually; here ansible is used to run the pull on all worker nodes in one batch:

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible k8s_node -m shell -a "docker pull docker.io/senthilrch/kubefledged-cri-client:v0.10.0" -i host.yaml

Pull the other related images the same way, for example:
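The image tags come from the deploy manifests listed above, so the same ansible pattern works for them:

ansible k8s_node -m shell -a "docker pull docker.io/senthilrch/kubefledged-controller:v0.10.0" -i host.yaml
ansible k8s_node -m shell -a "docker pull docker.io/senthilrch/kubefledged-webhook-server:v0.10.0" -i host.yaml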

Once that is done, all the containers come up healthy:

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl -n kube-fledged get all
NAME                                               READY   STATUS    RESTARTS   AGE
pod/kube-fledged-controller-df69f6565-wdb4g        1/1     Running   0          13h
pod/kube-fledged-webhook-server-7bcd589bc4-j8xxp   1/1     Running   0          13h
pod/kubefledged-controller-55f848cc67-klxlm        1/1     Running   0          13h
pod/kubefledged-webhook-server-597dbf4ff5-ktbsh    1/1     Running   0          13h

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kube-fledged-webhook-server   ClusterIP   10.100.194.199   <none>        3443/TCP   36h
service/kubefledged-webhook-server    ClusterIP   10.101.191.206   <none>        3443/TCP   36h

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-fledged-controller       1/1     1            1           36h
deployment.apps/kube-fledged-webhook-server   1/1     1            1           36h
deployment.apps/kubefledged-controller        1/1     1            1           36h
deployment.apps/kubefledged-webhook-server    1/1     1            1           36h

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-fledged-controller-df69f6565        1         1         1       36h
replicaset.apps/kube-fledged-webhook-server-7bcd589bc4   1         1         1       36h
replicaset.apps/kubefledged-controller-55f848cc67        1         1         1       36h
replicaset.apps/kubefledged-webhook-server-597dbf4ff5    1         1         1       36h

Verify the installation

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$kubectl get pods -n kube-fledged -l app=kubefledged
NAME                                          READY   STATUS    RESTARTS   AGE
kubefledged-controller-55f848cc67-klxlm       1/1     Running   0          16h
kubefledged-webhook-server-597dbf4ff5-ktbsh   1/1     Running   0          16h
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$kubectl get imagecaches -n kube-fledged
No resources found in kube-fledged namespace.

Using kube-fledged

Creating an image cache object

Create an image cache object based on the demo file:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$cd deploy/
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  - images:
    - ghcr.io/jitesoft/nginx:1.23.1
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - us.gcr.io/k8s-artifacts-prod/cassandra:v7
    - us.gcr.io/k8s-artifacts-prod/etcd:3.5.4-0
    nodeSelector:
      tier: backend
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  imagePullSecrets:
  - name: myregistrykey

The images referenced in the official demo cannot be pulled from here, so we swap them out:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$docker pull us.gcr.io/k8s-artifacts-prod/cassandra:v7
Error response from daemon: Get "https://us.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

To exercise the node selector, pick a label of one node and cache an image only there:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get nodes  --show-labels
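If none of the existing labels is convenient, a custom label can be attached and used in the nodeSelector instead (a sketch; the tier=backend label mirrors the upstream demo):

# list the nodes carrying a particular label
kubectl get nodes -l kubernetes.io/hostname=vms105.liruilongs.github.io
# or add a custom label and select on that
kubectl label node vms105.liruilongs.github.io tier=backend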

We also pull the images from a public registry, so the imagePullSecrets list is not needed.
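If a private registry were involved, the referenced pull secret would be created first, roughly like this (registry.example.com and the credentials are placeholders):

kubectl -n kube-fledged create secret docker-registry myregistrykey \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>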

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$vim kubefledged-imagecache.yaml

The modified YAML file:

  • a liruilong/my-busybox:latest entry with no node selector, cached on every node in the cluster
  • a liruilong/hikvision-sdk-config-ftp:latest entry with the node selector kubernetes.io/hostname: vms105.liruilongs.github.io
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  - images:
    - liruilong/my-busybox:latest
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  #imagePullSecrets:
  #- name: myregistrykey
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

Creating it directly fails with an error:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl create -f kubefledged-imagecache.yaml
Error from server (InternalError): error when creating "kubefledged-imagecache.yaml": Internal error occurred: failed calling webhook "validate-image-cache.kubefledged.io": failed to call webhook: Post "https://kubefledged-webhook-server.kube-fledged.svc:3443/validate-image-cache?timeout=1s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubefledged.io")
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get imagecaches -n kube-fledged
No resources found in kube-fledged namespace.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

The fix: delete the relevant objects and redeploy.

I found the solution under one of the project's issues: https://github.com/senthilrch/kube-fledged/issues/76

It appears the webhook CA is hardcoded, but when the webhook server starts, its init-server generates a new CA bundle and updates the webhook configuration. When another deployment happens, the original CA bundle is re-applied, and webhook requests start failing until the webhook component is restarted again so the bundle gets re-patched.
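Based on that explanation, just restarting the webhook server so that it regenerates and re-patches the CA bundle might also be enough; an untested, lighter-weight alternative to the full redeploy below:

kubectl -n kube-fledged rollout restart deployment kubefledged-webhook-server
kubectl -n kube-fledged rollout status deployment kubefledged-webhook-server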

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make remove-kubefledged-and-operator
# Remove kubefledged
kubectl delete -f deploy/kubefledged-operator/deploy/crds/charts.helm.kubefledged.io_v1alpha2_kubefledged_cr.yaml
error: resource mapping not found for name: "kube-fledged" namespace: "kube-fledged" from "deploy/kubefledged-operator/deploy/crds/charts.helm.kubefledged.io_v1alpha2_kubefledged_cr.yaml": no matches for kind "KubeFledged" in version "charts.helm.kubefledged.io/v1alpha2"
ensure CRDs are installed first
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged]
└─$make deploy-using-yaml
kubectl apply -f deploy/kubefledged-namespace.yaml
namespace/kube-fledged created
kubectl apply -f deploy/kubefledged-crd.yaml
customresourcedefinition.apiextensions.k8s.io/imagecaches.kubefledged.io unchanged
....................
kubectl rollout status deployment kubefledged-webhook-server -n kube-fledged --watch
Waiting for deployment "kubefledged-webhook-server" rollout to finish: 0 of 1 updated replicas are available...
deployment "kubefledged-webhook-server" successfully rolled out
kubectl get pods -n kube-fledged
NAME                                          READY   STATUS    RESTARTS   AGE
kubefledged-controller-55f848cc67-76c4v       1/1     Running   0          112s
kubefledged-webhook-server-597dbf4ff5-56h6z   1/1     Running   0          66s

Recreate the cache object; this time it succeeds:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl create -f kubefledged-imagecache.yaml
imagecache.kubefledged.io/imagecache1 created
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl get imagecaches -n kube-fledged
NAME          AGE
imagecache1   10s
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$
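The controller now starts pulling the images. Progress can be followed through the object's status fields, for example:

# status moves from Processing to Succeeded once all pulls finish
kubectl -n kube-fledged get imagecaches imagecache1 \
  -o jsonpath='{.status.status}{"  "}{.status.message}{"\n"}'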

Inspect the image cache that is now being managed:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$kubectl get imagecaches imagecache1 -n kube-fledged -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 83,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20169836",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/my-busybox:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:06:47Z",
        "message": "All requested images pulled succesfully to respective nodes",
        "reason": "ImageCacheRefresh",
        "startTime": "2024-03-02T01:05:33Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged]
└─$

Verify with ansible:

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.102 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
192.168.26.101 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
192.168.26.103 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
192.168.26.105 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
192.168.26.100 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
192.168.26.106 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/hikvision-sdk-config-ftp" -i host.yaml
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | CHANGED | rc=0 >>
liruilong/hikvision-sdk-config-ftp                                          latest            a02cd03b4342   4 months ago    830MB
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

Enable automatic refresh

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl annotate imagecaches imagecache1 -n kube-fledged kubefledged.io/refresh-imagecache=
imagecache.kubefledged.io/imagecache1 annotated
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
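Whether a refresh has run can also be read from the status fields; after a refresh cycle the reason switches to ImageCacheRefresh, as in the output shown earlier:

kubectl -n kube-fledged get imagecaches imagecache1 \
  -o jsonpath='{.status.reason}{"  "}{.status.completionTime}{"\n"}'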

Adding images to the cache

Add a new image to the cache by editing the ImageCache spec, as sketched below.
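A minimal way to do it (liruilong/jdk1.8_191:latest is the image added in this walkthrough):

kubectl -n kube-fledged edit imagecaches imagecache1
# under spec.cacheSpec[0].images, append the new image:
#   - liruilong/my-busybox:latest
#   - liruilong/jdk1.8_191:latest

Once the controller finishes pulling, the object's status reflects the update: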

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io  -n kube-fledged  imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 92,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20175233",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/my-busybox:latest",
                    "liruilong/jdk1.8_191:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:43:32Z",
        "message": "All requested images pulled succesfully to respective nodes",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:40:34Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

Confirm with ansible (the first run was executed before the pull had completed, the second afterwards):

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.101 | CHANGED | rc=0 >>
liruilong/jdk1.8_191                                                        latest    17dbd4002a8c   5 years ago     170MB
192.168.26.102 | CHANGED | rc=0 >>
liruilong/jdk1.8_191                                                        latest    17dbd4002a8c   5 years ago     170MB
192.168.26.100 | CHANGED | rc=0 >>
liruilong/jdk1.8_191                                                        latest    17dbd4002a8c   5 years ago     170MB
192.168.26.103 | CHANGED | rc=0 >>
liruilong/jdk1.8_191                                                        latest                                      17dbd4002a8c   5 years ago     170MB
192.168.26.105 | CHANGED | rc=0 >>
liruilong/jdk1.8_191                                                        latest            17dbd4002a8c   5 years ago     170MB
192.168.26.106 | CHANGED | rc=0 >>
liruilong/jdk1.8_191                                                        latest            17dbd4002a8c   5 years ago     170MB
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

Deleting cached images

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
imagecache.kubefledged.io/imagecache1 edited
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io  -n kube-fledged  imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 94,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20175766",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    "liruilong/jdk1.8_191:latest"
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "message": "Image cache is being updated. Please view the status after some time",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:48:03Z",
        "status": "Processing"
    }
}

Confirm with ansible: on master and worker nodes alike the cached image is eventually cleaned up (the first run catches the deletion still in progress, the second shows it gone everywhere):

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.102 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
192.168.26.101 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | CHANGED | rc=0 >>
liruilong/my-busybox                                                        latest    497b83a63aad   11 months ago   1.24MB
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/my-busybox" -i host.yaml
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

Note that to clear all the images of a cache list entry, the images array must be written as a single empty string "".
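In spec terms, the edit that produced the result below looks roughly like this:

spec:
  cacheSpec:
  # an empty string clears every image cached by this (selector-less) entry
  - images:
    - ""
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io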

┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
imagecache.kubefledged.io/imagecache1 edited
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible all -m shell -a "docker images | grep liruilong/jdk1.8_191" -i host.yaml
192.168.26.102 | FAILED | rc=1 >>
non-zero return code
192.168.26.101 | FAILED | rc=1 >>
non-zero return code
192.168.26.100 | FAILED | rc=1 >>
non-zero return code
192.168.26.105 | FAILED | rc=1 >>
non-zero return code
192.168.26.103 | FAILED | rc=1 >>
non-zero return code
192.168.26.106 | FAILED | rc=1 >>
non-zero return code
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get imagecaches.kubefledged.io  -n kube-fledged  imagecache1 -o json
{
    "apiVersion": "kubefledged.io/v1alpha2",
    "kind": "ImageCache",
    "metadata": {
        "creationTimestamp": "2024-03-01T15:08:42Z",
        "generation": 98,
        "labels": {
            "app": "kubefledged",
            "kubefledged": "imagecache"
        },
        "name": "imagecache1",
        "namespace": "kube-fledged",
        "resourceVersion": "20176849",
        "uid": "3a680a57-d8ab-444f-b9c9-4382459c5c72"
    },
    "spec": {
        "cacheSpec": [
            {
                "images": [
                    ""
                ]
            },
            {
                "images": [
                    "liruilong/hikvision-sdk-config-ftp:latest"
                ],
                "nodeSelector": {
                    "kubernetes.io/hostname": "vms105.liruilongs.github.io"
                }
            }
        ]
    },
    "status": {
        "completionTime": "2024-03-02T01:52:16Z",
        "message": "All cached images succesfully deleted from respective nodes",
        "reason": "ImageCacheUpdate",
        "startTime": "2024-03-02T01:51:47Z",
        "status": "Succeeded"
    }
}
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$

If instead you try to delete an entry by simply commenting out the corresponding image list, as below:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$cat kubefledged-imagecache.yaml
---
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  # Name of the image cache. A cluster can have multiple image cache objects
  name: imagecache1
  namespace: kube-fledged
  # The kubernetes namespace to be used for this image cache. You can choose a different namepace as per your preference
  labels:
    app: kubefledged
    kubefledged: imagecache
spec:
  # The "cacheSpec" field allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled).
  cacheSpec:
  # Specifies a list of images (nginx:1.23.1) with no node selector, hence these images will be cached in all the nodes in the cluster
  #- images:
    #- liruilong/my-busybox:latest
  # Specifies a list of images (cassandra:v7 and etcd:3.5.4-0) with a node selector, hence these images will be cached only on the nodes selected by the node selector
  - images:
    - liruilong/hikvision-sdk-config-ftp:latest
    nodeSelector:
      kubernetes.io/hostname: vms105.liruilongs.github.io
  # Specifies a list of image pull secrets to pull images from private repositories into the cache
  #imagePullSecrets:
  #- name: myregistrykey
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$

then the validating webhook rejects the change, because the number of image lists in cacheSpec must stay the same:

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kube-fledged/kube-fledged/deploy]
└─$kubectl edit imagecaches imagecache1 -n kube-fledged
error: imagecaches.kubefledged.io "imagecache1" could not be patched: admission webhook "validate-image-cache.kubefledged.io" denied the request: Mismatch in no. of image lists
You can run `kubectl replace -f /tmp/kubectl-edit-4113815075.yaml` to try this update again.

References

© The referenced content remains the copyright of its original authors. If you find the project useful, don't be stingy with a star :)

https://github.com/senthilrch/kube-fledged
