Learning and Installing Liqo: Kubernetes (k8s) Multi-Cluster Interconnection


First, follow the official tutorial and set things up in a virtual machine to learn the basics.

Before starting the tutorial below, make sure the following software is installed on your system:

  • Docker, the container runtime.
  • kubectl, the command-line tool for Kubernetes.
 curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
 sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
 kubectl version --client
  • Helm, the package manager for Kubernetes.
 curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
 helm version
  • curl, to interact with the tutorial applications over HTTP/HTTPS.
  • Kind, Kubernetes in Docker (a Kubernetes cluster running inside Docker containers, mainly used for testing).
#For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
#For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
kind version
  • liqoctl, the command-line tool to interact with Liqo.
 #AMD64:
curl --fail -LS "https://github.com/liqotech/liqo/releases/download/v0.10.2/liqoctl-linux-amd64.tar.gz" | tar -xz
sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl
#ARM64:
curl --fail -LS "https://github.com/liqotech/liqo/releases/download/v0.10.2/liqoctl-linux-arm64.tar.gz" | tar -xz
sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl
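As with the other tools above, it is worth a quick check that the binary works; a minimal sketch, assuming liqoctl, like the other CLIs here, exposes a version subcommand (no cluster exists yet, so only the client version is meaningful at this point):

liqoctl version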

Creating the virtual clusters

Then, open a terminal on your machine and launch the following script, which creates a pair of clusters with Kind. Each cluster is made of two nodes (one for the control plane and one acting as a simple worker):

git clone https://github.com/liqotech/liqo.git
cd liqo
git checkout v0.10.2
cd examples/quick-start
./setup.sh

If this step fails with an error like: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged rome-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0419 00:55:49.069409 44 initconfiguration.go:254] loading configuration from "/kind/kubeadm.conf"

the HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Troubleshooting:
Running kind create cluster --name rome directly succeeds, and it shows Ensuring node image: kindest/node:v1.29.2,
whereas ./setup.sh uses kindest/node:v1.25.0.
Fix:
Edit manifests/cluster.yaml under examples/quick-start and change the kindest/node image to v1.29.2:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  serviceSubnet: "10.90.0.0/12"
  podSubnet: "10.200.0.0/16"
nodes:
  - role: control-plane
    image: kindest/node:v1.25.0  # change this to v1.29.2
  - role: worker
    image: kindest/node:v1.25.0  # change this to v1.29.2
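If you prefer not to edit the file by hand, a one-line patch over the manifest does the same thing (a sketch, run from the examples/quick-start directory; adjust the target version to whatever Kind pulled on your machine):

sed -i 's|kindest/node:v1.25.0|kindest/node:v1.29.2|g' manifests/cluster.yaml
grep 'kindest/node' manifests/cluster.yaml   # both nodes should now reference v1.29.2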


Then run the script again:

root@main:~/liqo/examples/quick-start# ./setup.sh 
SUCCESS	No cluster "rome" is running.
SUCCESS	No cluster "milan" is running.
SUCCESS	Cluster "rome" has been created.
SUCCESS	Cluster "milan" has been created.
root@main:~/liqo/examples/quick-start# 

Testing the clusters

You can check the deployed clusters by typing:

root@main:~/liqo/examples/quick-start# kind get clusters
milan
rome
root@main:~/liqo/examples/quick-start# 

This means that two Kind clusters are deployed and running on your host.
By default, the kubeconfig files of the two clusters are stored in the current directory (./liqo_kubeconf_rome, ./liqo_kubeconf_milan).

root@main:~/liqo/examples/quick-start# pwd
/root/liqo/examples/quick-start
root@main:~/liqo/examples/quick-start# ls
liqo_kubeconf_milan  liqo_kubeconf_rome  manifests  setup.sh
root@main:~/liqo/examples/quick-start# 

You can export the environment variables used in the rest of this tutorial (i.e. KUBECONFIG and KUBECONFIG_MILAN), pointing them at those files, as follows:

export KUBECONFIG="$PWD/liqo_kubeconf_rome"
export KUBECONFIG_MILAN="$PWD/liqo_kubeconf_milan"

To make this permanent, append the exports to ~/.bashrc. Note that $PWD only resolves correctly at the moment it is evaluated, so it is safer to write the absolute paths (here /root/liqo/examples/quick-start):

echo 'export KUBECONFIG="/root/liqo/examples/quick-start/liqo_kubeconf_rome"' >> ~/.bashrc
echo 'export KUBECONFIG_MILAN="/root/liqo/examples/quick-start/liqo_kubeconf_milan"' >> ~/.bashrc
source ~/.bashrc

It is suggested to export the kubeconfig of the first cluster as the default one (i.e. KUBECONFIG), since it will be the entry point of the virtual cluster and you will mainly interact with it.

On the first cluster, you can get the available pods simply by typing:

root@liqo:~/liqo/examples/quick-start# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-76f75df574-m5ppl                     1/1     Running   0          110s
kube-system          coredns-76f75df574-mljgc                     1/1     Running   0          110s
kube-system          etcd-rome-control-plane                      1/1     Running   0          2m6s
kube-system          kindnet-p9cjf                                1/1     Running   0          107s
kube-system          kindnet-pg7bt                                1/1     Running   0          110s
kube-system          kube-apiserver-rome-control-plane            1/1     Running   0          2m4s
kube-system          kube-controller-manager-rome-control-plane   1/1     Running   0          2m4s
kube-system          kube-proxy-gh5kr                             1/1     Running   0          110s
kube-system          kube-proxy-l67g5                             1/1     Running   0          107s
kube-system          kube-scheduler-rome-control-plane            1/1     Running   0          2m4s
local-path-storage   local-path-provisioner-7577fdbbfb-ct9rz      1/1     Running   0          110s

Similarly, on the second cluster, you can observe the pods in execution:

kubectl get pods -A --kubeconfig "$KUBECONFIG_MILAN"

Installing Liqo

You will now install Liqo on both clusters, using the following names:

  • rome: the local cluster, where you will deploy and control the applications.

  • milan: the remote cluster, to which part of the workloads will be offloaded.

You can install Liqo on the rome cluster by launching:

liqoctl install kind --cluster-name rome

Here the installation kept failing with the following error:

 ERRO  Error installing or upgrading Liqo: release liqo failed, and has been uninstalled due to atomic being set: timed out waiting for the condition                                                                                                                                    
 INFO  Likely causes for the installation/upgrade timeout could include:
 INFO  * One or more pods failed to start (e.g., they are in the ImagePullBackOff status)
 INFO  * A service of type LoadBalancer has been configured, but no provider is available
 INFO  You can add the --verbose flag for debug information concerning the failing resources
 INFO  Additionally, if necessary, you can increase the timeout value with the --timeout flag

Following the hint, run liqoctl install kind --cluster-name rome --verbose to show the details:

 Starting delete for "liqo-webhook-certificate-patch" ServiceAccount                                                                                                                                                                                                               
 INFO  beginning wait for 1 resources to be deleted with timeout of 10m0s                                                                                                                                                                                                                
 INFO  creating 1 resource(s)                                                                                                                                                                                                                                                            
 INFO  Starting delete for "liqo-webhook-certificate-patch" Role                                                                                                                                                                                                                         
 INFO  beginning wait for 1 resources to be deleted with timeout of 10m0s                                                                                                                                                                                                                
 INFO  creating 1 resource(s)                                                                                                                                                                                                                                                            
 INFO  Starting delete for "liqo-webhook-certificate-patch" RoleBinding                                                                                                                                                                                                                  
 INFO  beginning wait for 1 resources to be deleted with timeout of 10m0s                                                                                                                                                                                                                
 INFO  creating 1 resource(s)                                                                                                                                                                                                                                                            
 INFO  Starting delete for "liqo-webhook-certificate-patch-pre" Job                                                                                                                                                                                                                      
 INFO  beginning wait for 1 resources to be deleted with timeout of 10m0s                                                                                                                                                                                                                
 INFO  creating 1 resource(s)                                                                                                                                                                                                                                                            
 INFO  Watching for changes to Job liqo-webhook-certificate-patch-pre with timeout of 10m0s                                                                                                                                                                                              
 INFO  Add/Modify event for liqo-webhook-certificate-patch-pre: ADDED                                                                                                                                                                                                                    
 INFO  liqo-webhook-certificate-patch-pre: Jobs active: 0, jobs failed: 0, jobs succeeded: 0                                                                                                                                                                                             
 INFO  Add/Modify event for liqo-webhook-certificate-patch-pre: MODIFIED                                                                                                                                                                                                                 
 INFO  liqo-webhook-certificate-patch-pre: Jobs active: 1, jobs failed: 0, jobs succeeded: 0                                                                                                                                                                                             
 INFO  Install failed and atomic is set, uninstalling release       
 .....
 Ignoring delete failure for "liqo-telemetry" /v1, Kind=ServiceAccount: serviceaccounts "liqo-telemetry" not found                                                                                                                                                                 
 INFO  Ignoring delete failure for "liqo-crd-replicator" /v1, Kind=ServiceAccount: serviceaccounts "liqo-crd-replicator" not found                                                                                                                                                       
 INFO  Ignoring delete failure for "liqo-gateway" /v1, Kind=ServiceAccount: serviceaccounts "liqo-gateway" not found                                                                                                                                                                     
 INFO  Ignoring delete failure for "liqo-auth" /v1, Kind=ServiceAccount: serviceaccounts "liqo-auth" not found                                                                                                                                                                           
 INFO  Ignoring delete failure for "liqo-network-manager" /v1, Kind=ServiceAccount: serviceaccounts "liqo-network-manager" not found                                                                                                                                                     
 INFO  Ignoring delete failure for "liqo-route" /v1, Kind=ServiceAccount: serviceaccounts "liqo-route" not found                                                                                                                                                                         
 INFO  Ignoring delete failure for "liqo-controller-manager" /v1, Kind=ServiceAccount: serviceaccounts "liqo-controller-manager" not found                                                                                                                                               
 INFO  Ignoring delete failure for "liqo-metric-agent" /v1, Kind=ServiceAccount: serviceaccounts "liqo-metric-agent" not found                                                                                                                                                           
 INFO  Starting delete for "liqo-webhook" MutatingWebhookConfiguration                                                                                                                                                                                                                   
 INFO  Ignoring delete failure for "liqo-webhook" admissionregistration.k8s.io/v1, Kind=MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io "liqo-webhook" not found                                                                                
 INFO  Starting delete for "liqo-webhook" ValidatingWebhookConfiguration                                                                                                                                                                                                                 
 INFO  Ignoring delete failure for "liqo-webhook" admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration: validatingwebhookconfigurations.admissionregistration.k8s.io "liqo-webhook" not found                                                                            
 INFO  purge requested for liqo                                                                                                                                                                                                                                                          
 ERRO  Error installing or upgrading Liqo: release liqo failed, and has been uninstalled due to atomic being set: timed out waiting for the condition                                                                                                                                    
 INFO  Likely causes for the installation/upgrade timeout could include:
 INFO  * One or more pods failed to start (e.g., they are in the ImagePullBackOff status)
 INFO  * A service of type LoadBalancer has been configured, but no provider is available
 INFO  You can add the --verbose flag for debug information concerning the failing resources
 INFO  Additionally, if necessary, you can increase the timeout value with the --timeout flag
                                                                           

At this point it was not obvious what to do. The failure seemed related to the liqo-webhook components, and the hint "One or more pods failed to start (e.g., they are in the ImagePullBackOff status)" pointed in the same direction.
Running kubectl get pods -A then showed that the image for liqo-webhook-certificate-patch-pre-dxqj9 could not be pulled:

root@liqo:~/liqo/examples/quick-start# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS             RESTARTS   AGE
kube-system          coredns-76f75df574-prq94                     1/1     Running            0          63m
kube-system          coredns-76f75df574-xljfr                     1/1     Running            0          63m
kube-system          etcd-rome-control-plane                      1/1     Running            0          63m
kube-system          kindnet-5g85h                                1/1     Running            0          63m
kube-system          kindnet-74l4j                                1/1     Running            0          63m
kube-system          kube-apiserver-rome-control-plane            1/1     Running            0          63m
kube-system          kube-controller-manager-rome-control-plane   1/1     Running            0          63m
kube-system          kube-proxy-86v24                             1/1     Running            0          63m
kube-system          kube-proxy-9c9df                             1/1     Running            0          63m
kube-system          kube-scheduler-rome-control-plane            1/1     Running            0          63m
liqo                 liqo-webhook-certificate-patch-pre-dxqj9     0/1     ImagePullBackOff   0          25m
local-path-storage   local-path-provisioner-7577fdbbfb-7vxcq      1/1     Running            0          63m

root@liqo:~/liqo/examples/quick-start# kubectl logs -f -n liqo liqo-webhook-certificate-patch-pre-dxqj9 
Error from server (BadRequest): container "create" in pod "liqo-webhook-certificate-patch-pre-dxqj9" is waiting to start: image can't be pulled
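The pull failure is often easier to read from the pod's events than from its logs; describing the pod (name taken from the listing above) shows exactly which image and registry the kubelet fails against:

kubectl describe pod -n liqo liqo-webhook-certificate-patch-pre-dxqj9
# check the Events section at the bottom for the Failed / BackOff pull messages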

I then searched for webhook-certificate in the official repository. Describing the corresponding Job shows which image it uses:

kubectl describe job liqo-webhook-certificate-patch --namespace=liqo

root@liqo:~/liqo/examples/quick-start# kubectl describe job liqo-webhook-certificate-patch --namespace=liqo
Name:                        liqo-webhook-certificate-patch-pre
Namespace:                   liqo
Selector:                    batch.kubernetes.io/controller-uid=54b6ee31-c866-487f-a874-4c71ec3a872c
Labels:                      app.kubernetes.io/component=webhook-certificate-patch
                             app.kubernetes.io/instance=liqo-webhook-certificate-patch-pre
                             app.kubernetes.io/managed-by=Helm
                             app.kubernetes.io/name=webhook-certificate-patch-pre
                             app.kubernetes.io/part-of=liqo
                             app.kubernetes.io/version=v0.10.2
                             helm.sh/chart=liqo-v0.10.2
Annotations:                 helm.sh/hook: pre-install,pre-upgrade
                             helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
Parallelism:                 1
Completions:                 1
Completion Mode:             NonIndexed
Suspend:                     false
Backoff Limit:               6
TTL Seconds After Finished:  150
Start Time:                  Fri, 19 Apr 2024 07:04:18 +0000
Pods Statuses:               1 Active (0 Ready) / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/component=webhook-certificate-patch
                    app.kubernetes.io/instance=liqo-webhook-certificate-patch-pre
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=webhook-certificate-patch-pre
                    app.kubernetes.io/part-of=liqo
                    app.kubernetes.io/version=v0.10.2
                    batch.kubernetes.io/controller-uid=54b6ee31-c866-487f-a874-4c71ec3a872c
                    batch.kubernetes.io/job-name=liqo-webhook-certificate-patch-pre
                    controller-uid=54b6ee31-c866-487f-a874-4c71ec3a872c
                    helm.sh/chart=liqo-v0.10.2
                    job-name=liqo-webhook-certificate-patch-pre
  Service Account:  liqo-webhook-certificate-patch
  Containers:
   create:
    Image:      k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
    Port:       <none>
    Host Port:  <none>
    Args:
      create
      --host=liqo-controller-manager,liqo-controller-manager.liqo,liqo-controller-manager.liqo.svc,liqo-controller-manager.liqo.svc.cluster.local
      --namespace=liqo
      --secret-name=liqo-webhook-certs
      --cert-name=tls.crt
      --key-name=tls.key
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  29m   job-controller  Created pod: liqo-webhook-certificate-patch-pre-dxqj9

The image is k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1.
Anyone who has installed Kubernetes from a network where gcr.io is blocked knows this problem: k8s.gcr.io cannot be reached.
I was stuck here for quite a while. kubeadm lets you pass --image-repository, but liqoctl does not support that flag:

liqoctl install kind --cluster-name rome --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers  --verbose 
Error: unknown flag: --image-repository
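Since liqoctl install wraps the Liqo Helm chart (the verbose log above is full of Helm release messages), the image should in principle be overridable through a chart value rather than a global flag. A hedged way to look for the right key (assuming the chart is published at https://helm.liqo.io/ as the Liqo docs describe; the exact value name can differ between chart versions):

helm repo add liqo https://helm.liqo.io/
helm repo update
helm show values liqo/liqo | grep -n -B2 -A4 webhook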

I then pulled kube-webhook-certgen from the aliyuncs mirror, tagged it with the k8s.gcr.io name, and deleted the pod so it would be recreated, but that did not work either:

root@liqo:~/liqo/examples/quick-start# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
v1.1.1: Pulling from google_containers/kube-webhook-certgen
ec52731e9273: Pull complete 
b90aa28117d4: Pull complete 
Digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
root@liqo:~/liqo/examples/quick-start#  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
root@liqo:~/liqo/examples/quick-start# docker images
REPOSITORY                                                                 TAG       IMAGE ID       CREATED         SIZE
kindest/node                                                               v1.29.2   09c50567d34e   2 months ago    956MB
kindest/node                                                               v1.25.0   d3da246e125a   19 months ago   870MB
k8s.gcr.io/ingress-nginx/kube-webhook-certgen                              v1.1.1    c41e9fcadf5a   2 years ago     47.7MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen   v1.1.1    c41e9fcadf5a   2 years ago     47.7MB
root@liqo:~/liqo/examples/quick-start# kubectl delete pod liqo-webhook-certificate-patch-pre-fj72j -n liqo
pod "liqo-webhook-certificate-patch-pre-fj72j" deleted
root@liqo:~/liqo/examples/quick-start# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS              RESTARTS   AGE
kube-system          coredns-76f75df574-prq94                     1/1     Running             0          159m
kube-system          coredns-76f75df574-xljfr                     1/1     Running             0          159m
kube-system          etcd-rome-control-plane                      1/1     Running             0          159m
kube-system          kindnet-5g85h                                1/1     Running             0          159m
kube-system          kindnet-74l4j                                1/1     Running             0          159m
kube-system          kube-apiserver-rome-control-plane            1/1     Running             0          159m
kube-system          kube-controller-manager-rome-control-plane   1/1     Running             0          159m
kube-system          kube-proxy-86v24                             1/1     Running             0          159m
kube-system          kube-proxy-9c9df                             1/1     Running             0          159m
kube-system          kube-scheduler-rome-control-plane            1/1     Running             0          159m
liqo                 liqo-webhook-certificate-patch-pre-jk979     0/1     ContainerCreating   0          3s
local-path-storage   local-path-provisioner-7577fdbbfb-7vxcq      1/1     Running             0          159m

In the end the pod still failed with ImagePullBackOff. At the time I did not understand why: even after retagging, the cluster did not use the image I had pulled and still tried to pull it from the registry. The reason is that the Kind nodes are containers running their own containerd, so images pulled and tagged in the host's Docker daemon are not visible inside the cluster unless they are explicitly loaded into the nodes. (When I install Kubernetes directly I simply pass --image-repository to kubeadm, so I had never needed the pull-and-retag trick; many of the tutorials recommending it are dated, and current clusters use containerd rather than Docker as the runtime. See the separate article on installing Kubernetes on Ubuntu for that setup.)
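For reference, the usual way to make a locally pulled image visible to a Kind cluster is to load it into the node containers explicitly; a sketch of what that would look like here (not the route I took in the end; the cluster name rome comes from setup.sh):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
# copy the retagged image from the host Docker daemon into the Kind node containers
kind load docker-image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 --name rome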

Next I searched for how to change a pod's image and tried all sorts of things without success; the pod belongs to a Job, a concept I was not yet clear on (Kubernetes has a lot of moving parts).
Stuck again, I went back to the official documentation and found that the webhook-certificate image can be specified at install time. First uninstall Liqo:

liqoctl uninstall --purge --kubeconfig="$KUBECONFIG"
liqoctl uninstall --purge --kubeconfig="$KUBECONFIG_MILAN"
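Before re-installing, it is worth confirming that the previous half-installed release is really gone; a quick check (assuming liqoctl removes the liqo namespace as part of the uninstall):

kubectl get namespace liqo
kubectl get namespace liqo --kubeconfig "$KUBECONFIG_MILAN"
# both commands should eventually report NotFound once the purge has finished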

Once the uninstall has completed, run the install again:

liqoctl install kind --cluster-name rome --verbose --set webhook.patch.image=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1

This time it succeeds:

 INFO  Installer initialized                                                                                                                                                                                                                                                             
 INFO  Cluster configuration correctly retrieved                                                                                                                                                                                                                                         
 INFO  Installation parameters correctly generated                                                                                                                                                                                                                                       
 INFO  All Set! You can now proceed establishing a peering (liqoctl peer --help for more information)                                                                                                                                                                                    
 INFO  Make sure to use the same version of Liqo on all remote clusters

Install Liqo on the Milan cluster as well:

root@liqo:~/liqo/examples/quick-start# liqoctl install kind --cluster-name milan --kubeconfig "$KUBECONFIG_MILAN" --set webhook.patch.image=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1 --verbose

The Liqo pods are now up and running:

root@liqo:~/liqo/examples/quick-start# kubectl get pods -n liqo
NAME                                      READY   STATUS    RESTARTS   AGE
liqo-auth-6bf849d75-ll25k                 1/1     Running   0          53m
liqo-controller-manager-7c586975f-d2k6c   1/1     Running   0          53m
liqo-crd-replicator-679cfc85cd-g85nb      1/1     Running   0          53m
liqo-gateway-759f8b7d4d-lsblm             1/1     Running   0          53m
liqo-metric-agent-77d765d9df-v4tp2        1/1     Running   0          53m
liqo-network-manager-dd886d8bc-9st79      1/1     Running   0          53m
liqo-proxy-6d49f7789b-klcp4               1/1     Running   0          53m
liqo-route-6nglb                          1/1     Running   0          53m
liqo-route-7z4vc                          1/1     Running   0          53m
root@liqo:~/liqo/examples/quick-start# 

In addition, you can check the installation status and the main Liqo configuration parameters with:

root@liqo:~/liqo/examples/quick-start# liqoctl status
┌─ Namespace existence check ──────────────────────────────────────────────────────┐
|  INFO  ✔ liqo control plane namespace liqo exists                                |
└──────────────────────────────────────────────────────────────────────────────────┘

┌─ Control plane check ────────────────────────────────────────────────────────────┐
|  Deployment                                                                      |
|      liqo-controller-manager: Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-crd-replicator:     Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-metric-agent:       Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-auth:               Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-proxy:              Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-network-manager:    Desired: 1, Ready: 1/1, Available: 1/1             |
|      liqo-gateway:            Desired: 1, Ready: 1/1, Available: 1/1             |
|  DaemonSet                                                                       |
|      liqo-route:              Desired: 2, Ready: 2/2, Available: 2/2             |
└──────────────────────────────────────────────────────────────────────────────────┘

┌─ Local cluster information ──────────────────────────────────────────────────────┐
|  Cluster identity                                                                |
|      Cluster ID:   06b2ab0f-5dd0-42cb-aaca-73f92741b740                          |
|      Cluster name: rome                                                          |
|      Cluster labels                                                              |
|          liqo.io/provider: kind                                                  |
|  Configuration                                                                   |
|      Version: v0.10.2                                                            |
|  Network                                                                         |
|      Pod CIDR:      10.200.0.0/16                                                |
|      Service CIDR:  10.80.0.0/12                                                 |
|      External CIDR: 10.201.0.0/16                                                |
|  Endpoints                                                                       |
|      Network gateway:       udp://172.18.0.2:30620                               |
|      Authentication:        https://172.18.0.3:32395                             |
|      Kubernetes API server: https://172.18.0.3:6443                              |
└──────────────────────────────────────────────────────────────────────────────────┘

Peering the two clusters

This step interconnects the two clusters. In this example, since the two API servers are reachable from each other (i.e. the two Kubernetes clusters can access one another), you will use the out-of-band peering approach.

First, get the peering command from the Milan cluster:

root@liqo:~/liqo/examples/quick-start# liqoctl generate peer-command --kubeconfig "$KUBECONFIG_MILAN"
 INFO  Peering information correctly retrieved                                                                                                                                                                                                                                           

Execute this command on a *different* cluster to enable an outgoing peering with the current cluster:

liqoctl peer out-of-band milan --auth-url https://172.18.0.4:31720 --cluster-id 0422b752-25e5-42d0-acbf-1d584b09d1a6 --auth-token dfd35fcb10d65c142738261c17d94724bd3bf2dd54e14ac344e22a0cee27b58a084f452ead3df1857e2bb9dd35d0d6ba5b03b7e507f220eff4c71785a42e7cae
root@liqo:~/liqo/examples/quick-start# 

rome: the local cluster, where you deploy and control the applications.
milan: the remote cluster, to which part of the workloads will be offloaded.
Does that mean pod-related commands executed against rome end up on milan? Not automatically: Liqo adds a virtual node representing milan to the rome cluster, and only the pods scheduled onto that node (for example through an offloaded namespace) actually run on milan.

Second, copy and paste the command into the Rome cluster:

root@liqo:~/liqo/examples/quick-start# liqoctl peer out-of-band milan --auth-url https://172.18.0.4:31720 --cluster-id 0422b752-25e5-42d0-acbf-1d584b09d1a6 --auth-token dfd35fcb10d65c142738261c17d94724bd3bf2dd54e14ac344e22a0cee27b58a084f452ead3df1857e2bb9dd35d0d6ba5b03b7e507f220eff4c71785a42e7cae
 INFO  Peering enabled                                                                                                                                                                                                                                                                   
 INFO  Authenticated to cluster "milan"                                                                                                                                                                                                                                                  
 INFO  Outgoing peering activated to the remote cluster "milan"                                                                                                                                                                                                                          
 INFO  Network established to the remote cluster "milan"                                                                                                                                                                                                                                 
 INFO  Node created for remote cluster "milan"                                                                                                                                                                                                                                           
 INFO  Peering successfully established
root@liqo:~/liqo/examples/quick-start# 

Now the Liqo control plane of the Rome cluster will contact the provided authentication endpoint, presenting the token to the Milan cluster to obtain its Kubernetes identity.

You can check the peering status by running:

root@liqo:~/liqo/examples/quick-start# kubectl get foreignclusters
NAME    TYPE        OUTGOING PEERING   INCOMING PEERING   NETWORKING    AUTHENTICATION   AGE
milan   OutOfBand   Established        None               Established   Established      85s

The output indicates that the cross-cluster network tunnel has been established and that the outgoing peering is currently active (i.e. the Rome cluster can offload workloads to the Milan cluster, but not vice versa).
At the same time, alongside the physical nodes you should also see a virtual node (liqo-milan):

root@liqo:~/liqo/examples/quick-start# kubectl get nodes
NAME                 STATUS   ROLES           AGE     VERSION
liqo-milan           Ready    agent           2m46s   v1.29.2
rome-control-plane   Ready    control-plane   71m     v1.29.2
rome-worker          Ready    <none>          71m     v1.29.2

In addition, you can check the peering status and retrieve more advanced information with:

liqoctl status peer milan

Deploying a service

You can now deploy standard Kubernetes applications in the multi-cluster environment exactly as in a single-cluster scenario (i.e. no modifications are required); a short sketch of what that looks like follows below.
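As a preview, a minimal sketch based on the official quick-start: create a namespace, ask Liqo to offload it, and pods scheduled there can land on the liqo-milan virtual node (the hello-world.yaml path refers to the manifest shipped in examples/quick-start; adjust it if your checkout differs):

kubectl create namespace liqo-demo
liqoctl offload namespace liqo-demo
kubectl apply -f ./manifests/hello-world.yaml -n liqo-demo
kubectl get pods -n liqo-demo -o wide   # with the quick-start manifest, at least one pod should be scheduled on liqo-milan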
The full deployment walk-through will follow in the next article.
