Fixing k8s flannel image pull failures

This article walks through a workaround for flannel image pull failures when deploying Kubernetes (k8s). I hope it is useful; if anything here is wrong or incomplete, corrections are welcome.

1. Environment

k8s version: 1.26.0
flannel: v0.20.2
flannel-cni-plugin: v1.1.0

2. The problem

flannel is a commonly used network plugin for k8s. The normal deployment steps are:

  1. Open the flannel project: https://github.com/flannel-io/flannel
  2. Follow its instructions and run: kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

In mainland China, however, following those steps produces:

[root@k8s-node1 ~]# kubectl get pod -n kube-flannel
NAMESPACE      NAME                                READY   STATUS                  RESTARTS       AGE
kube-flannel   kube-flannel-ds-2wz5z               0/1     Init:ImagePullBackOff   0              13s
kube-flannel   kube-flannel-ds-n6wcb               0/1     Init:ImagePullBackOff   0              13s
kube-flannel   kube-flannel-ds-wk62c               0/1     Init:ImagePullBackOff   0              13s

Describing any of these pods shows the reason for the failure:

[root@k8s-node1 ~]# kubectl describe pod kube-flannel-ds-2wz5z -n kube-flannel
Name:                 kube-flannel-ds-2wz5z
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 k8s-node3/10.15.0.23
Start Time:           Thu, 22 Jun 2023 13:57:58 +0800
Labels:               app=flannel
                      controller-revision-hash=6d89ffc7b6
                      pod-template-generation=1
                      tier=node
Annotations:          <none>
Status:               Pending
IP:                   10.15.0.23
IPs:
  IP:           10.15.0.23
Controlled By:  DaemonSet/kube-flannel-ds
Init Containers:
  install-cni-plugin:
    Container ID:  
    Image:         docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /flannel
      /opt/cni/bin/flannel
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/cni/bin from cni-plugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nj6jn (ro)
  install-cni:
    Container ID:  
    Image:         docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nj6jn (ro)
Containers:
  kube-flannel:
    Container ID:  
    Image:         docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:           kube-flannel-ds-2wz5z (v1:metadata.name)
      POD_NAMESPACE:      kube-flannel (v1:metadata.namespace)
      EVENT_QUEUE_DEPTH:  5000
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nj6jn (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:  
  cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-nj6jn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       49s                default-scheduler  Successfully assigned kube-flannel/kube-flannel-ds-2wz5z to k8s-node3
  Warning  FailedMount     48s                kubelet            MountVolume.SetUp failed for volume "kube-api-access-nj6jn" : failed to sync configmap cache: timed out waiting for the condition
  Normal   Pulling         47s                kubelet            Pulling image "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0"
  Warning  Failed          22s                kubelet            Failed to pull image "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0": failed to resolve reference "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0": failed to do request: Head "https://registry-1.docker.io/v2/rancher/mirrored-flannelcni-flannel-cni-plugin/manifests/v1.1.0": net/http: TLS handshake timeout
  Warning  Failed          22s                kubelet            Error: ErrImagePull
  Normal   SandboxChanged  15s (x7 over 21s)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff         15s (x7 over 21s)  kubelet            Back-off pulling image "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0"
  Warning  Failed          15s (x7 over 21s)  kubelet            Error: ImagePullBackOff

The cause is plain from the events: k8s timed out pulling docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0. The default registry (docker.io) is hosted overseas and is blocked from mainland China.

3. Solution

As a complete newcomer to both k8s and Docker, I worked out the following roundabout solution.

3.1. Install Docker and configure a domestic registry mirror

There is plenty of material online for this step, so I won't repeat it. Just make sure that docker info lists domestic mirrors under Registry Mirrors, and that docker pull hello-world succeeds:

[root@k8s-node1 ~]# docker info
Client: Docker Engine - Community
 Version:    24.0.2
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.5
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.18.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 3
 Server Version: 24.0.2
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8
 runc version: v1.1.7-0-g860f061
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 3.10.0-1062.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 1.777GiB
 Name: k8s-node1
 ID: 30778c8d-86b5-4c16-84e9-990ea7cf4d22
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://dockerproxy.com/
  https://hub-mirror.c.163.com/
  https://mirror.baidubce.com/
  https://ccr.ccs.tencentyun.com/
 Live Restore Enabled: false
 [root@k8s-node1 containerd]# docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
Digest: sha256:c2e23624975516c7e27b1b25be3682a8c6c4c0cea011b791ce98aa423b5040a0
Status: Image is up to date for hello-world:latest
docker.io/library/hello-world:latest
[root@k8s-node1 containerd]# docker images
REPOSITORY                                                          TAG       IMAGE ID       CREATED         SIZE
hello-world                                                         latest    9c7a54a9a43c   6 weeks ago     13.3kB
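For reference, the mirrors are configured in /etc/docker/daemon.json. A minimal sketch matching the Registry Mirrors shown in the output above (restart Docker with systemctl restart docker after editing; which mirrors are actually reachable changes over time):

```json
{
  "registry-mirrors": [
    "https://dockerproxy.com",
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com",
    "https://ccr.ccs.tencentyun.com"
  ]
}
```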

3.2. Pull the flannel images from a domestic mirror onto the host

The command shown in step 2, kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml, just makes kubectl fetch the manifest from that URL. We can fetch it ourselves instead, with wget or by opening the URL in a browser, and save it locally under the same name, kube-flannel.yml. Running grep image kube-flannel.yml then lists the image references the manifest needs, and each of them can be downloaded with docker pull:

[root@k8s-node1 ~]# grep image kube-flannel.yml 
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
       #image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
       #image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
[root@k8s-node1 ~]# docker pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
v1.1.0: Pulling from rancher/mirrored-flannelcni-flannel-cni-plugin
Digest: sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b
Status: Image is up to date for rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
[root@k8s-node1 ~]# docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
v0.20.2: Pulling from rancher/mirrored-flannelcni-flannel
Digest: sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832
Status: Image is up to date for rancher/mirrored-flannelcni-flannel:v0.20.2
docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
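The image list can also be extracted and pulled in one go. A small sketch, assuming the manifest is saved as kube-flannel.yml (extract_images is a helper name of my own, not part of flannel):

```shell
# List the unique, uncommented image references in a manifest.
extract_images() {
  grep '^[[:space:]]*image:' "$1" | awk '{print $2}' | sort -u
}

# Pull everything kube-flannel.yml needs (uncomment to run):
# extract_images kube-flannel.yml | xargs -n1 docker pull
```

The grep pattern deliberately skips the "#image:" comment lines shown above, and sort -u collapses the duplicate flannel entry.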

3.3. Push the flannel images to a personal image registry

I used a personal Alibaba Cloud (Aliyun) image registry here; there are many tutorials on how to register one, so I won't repeat them.
I chose the Guangzhou region, mashed the keyboard for a namespace name (ahiusydqiu), and created two repositories in it: flannel and flannel-cni-plugin. Then, after logging in with docker login registry.cn-guangzhou.aliyuncs.com, I tagged and pushed the images downloaded in step 3.2:

[root@k8s-node1 containerd]# docker images
REPOSITORY                                                          TAG       IMAGE ID       CREATED         SIZE
hello-world                                                         latest    9c7a54a9a43c   6 weeks ago     13.3kB
rancher/mirrored-flannelcni-flannel                                 v0.20.2   b5c6c9203f83   6 months ago    59.6MB
rancher/mirrored-flannelcni-flannel-cni-plugin                      v1.1.0    fcecffc7ad4a   13 months ago   8.09MB
[root@k8s-node1 containerd]# docker tag rancher/mirrored-flannelcni-flannel:v0.20.2 registry.cn-guangzhou.aliyuncs.com/ahiusydqiu/flannel:v0.20.2
[root@k8s-node1 containerd]# docker push registry.cn-guangzhou.aliyuncs.com/ahiusydqiu/flannel:v0.20.2
[root@k8s-node1 containerd]# docker tag rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 registry.cn-guangzhou.aliyuncs.com/ahiusydqiu/flannel-cni-plugin:v1.1.0
[root@k8s-node1 containerd]# docker push registry.cn-guangzhou.aliyuncs.com/ahiusydqiu/flannel-cni-plugin:v1.1.0

3.4. Update the image references in kube-flannel.yml

Compared with step 3.2, the updated image references in kube-flannel.yml are:

[root@k8s-node1 ~]# grep image kube-flannel.yml 
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        image: registry.cn-guangzhou.aliyuncs.com/ahiusydqiu/flannel-cni-plugin:v1.1.0
       #image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        image: registry.cn-guangzhou.aliyuncs.com/ahiusydqiu/flannel:v0.20.2
       #image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        image: registry.cn-guangzhou.aliyuncs.com/ahiusydqiu/flannel:v0.20.2
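The same edit can be scripted with sed instead of done by hand. A sketch, assuming the registry path from step 3.3 (fix_images and MIRROR are names of my own; substitute your registry/namespace):

```shell
# Rewrite the two upstream image references to the mirror registry.
# MIRROR must match the registry/namespace created in step 3.3.
MIRROR="registry.cn-guangzhou.aliyuncs.com/ahiusydqiu"
fix_images() {
  # The cni-plugin pattern runs first, since the plain flannel
  # pattern is a substring of it; a .bak backup is kept.
  sed -i.bak \
    -e "s#docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin#${MIRROR}/flannel-cni-plugin#g" \
    -e "s#docker.io/rancher/mirrored-flannelcni-flannel#${MIRROR}/flannel#g" \
    "$1"
}

# fix_images kube-flannel.yml
```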

3.5. Finale: deploying flannel

[root@k8s-node1 ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-node1 ~]# kubectl get pod -n kube-flannel
NAMESPACE      NAME                                READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-h7pvv               1/1     Running   0          21m
kube-flannel   kube-flannel-ds-q5d5x               1/1     Running   0          21m
kube-flannel   kube-flannel-ds-tzfns               1/1     Running   0          21m

If the pull still fails and kubectl describe pod kube-flannel-ds-mp2j6 -n kube-flannel reports errors containing words like denied, the Aliyun repositories are probably set to private; marking them public fixes it.
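Alternatively, if you would rather keep the repositories private, create a pull secret in the kube-flannel namespace with kubectl create secret docker-registry (supplying your Aliyun account credentials and --docker-server=registry.cn-guangzhou.aliyuncs.com), then reference it from the DaemonSet pod template in kube-flannel.yml. A sketch, assuming the secret is named aliyun-registry:

```yaml
# Fragment of the kube-flannel-ds DaemonSet in kube-flannel.yml:
# reference the pre-created pull secret so kubelet can authenticate
# against the private registry. "aliyun-registry" is a placeholder name.
spec:
  template:
    spec:
      imagePullSecrets:
        - name: aliyun-registry
```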
