Backing Up and Restoring etcd Data with Velero and MinIO

1. Introduction to Velero

Velero is an open-source, cloud-native disaster recovery and migration tool from VMware, written in Go, that can safely back up, restore, and migrate Kubernetes cluster resources and data; the official site is https://velero.io/. "Velero" is Spanish for sailboat, in keeping with the Kubernetes community's naming style; Heptio, the company that developed Velero, was acquired by VMware. Velero works with any standard Kubernetes cluster, on private or public cloud, and besides disaster recovery it can also migrate resources, moving containerized applications from one cluster to another. Velero operates by backing up Kubernetes data to object storage for availability and persistence (the default backup retention is 720 hours, i.e. 30 days), and downloading and restoring it when needed.

2. Differences Between Velero and etcd Snapshot Backups

  • An etcd snapshot is a global, full backup (analogous to a full MySQL dump): even if you only need to restore a single resource object (like restoring one MySQL database), the whole cluster must be rolled back to the snapshot state (like a full MySQL restore), which affects pods and services running in other namespaces (just as a full restore affects MySQL's other databases).
  • Velero can back up selectively, e.g. a single namespace or individual resource objects, and on restore it can bring back just that namespace or object without disturbing pods running in other namespaces.
  • Velero supports object storage backends such as Ceph and OSS; an etcd snapshot is a single local file.
  • Velero supports schedules for periodic backups (see the sketch after this list); etcd snapshots can achieve the same with a CronJob.
  • Velero supports creating and restoring AWS EBS snapshots: https://www.qloudx.com/velero-for-kubernetes-backup-restore-stateful-workloads-with-aws-ebs-snapshots/
    https://github.com/vmware-tanzu/velero-plugin-for-aws
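
To make the scheduling point concrete, here is a minimal sketch of both approaches; the schedule name, cron expression, and etcd endpoint/certificate paths are illustrative assumptions, not values from this environment:

# Velero: back up one namespace every day at 03:00
velero schedule create myserver-daily --schedule="0 3 * * *" --include-namespaces myserver

# etcd: the one-shot snapshot command you would wrap in a CronJob
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem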

3. Velero Architecture

(Figure: Velero overall architecture)

4. Velero Backup Workflow

(Figure: Velero backup workflow)

The backup flow is:

1. The Velero client calls the Kubernetes API server to create a Backup object.
2. The Backup controller, watching through the API server, picks up the new backup task.
3. The Backup controller performs the backup, querying the API server for the data to be backed up.
4. The Backup controller uploads the collected data to the configured object storage server.
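
Because backups are ordinary custom resources, their progress can be followed with kubectl; a sketch (assuming the velero-system namespace used later in this article):

kubectl -n velero-system get backups.velero.io
kubectl -n velero-system describe backup <backup-name>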

5. Deploying MinIO Object Storage

5.1 Create the data directory

root@harbor:~# mkdir -p /data/minio

5.2 Create the MinIO container

Pull the image:

root@harbor:~# docker pull minio/minio:RELEASE.2023-08-31T15-31-16Z
RELEASE.2023-08-31T15-31-16Z: Pulling from minio/minio
0c10cd59e10e: Pull complete 
b55c0ddd1333: Pull complete 
4aade59ba7c6: Pull complete 
7c45df1e40d6: Pull complete 
adedf83b12e0: Pull complete 
bc9f33183b0c: Pull complete 
Digest: sha256:76868af456548aab229762d726271b0bf8604a500416b3e9bdcb576940742cda
Status: Downloaded newer image for minio/minio:RELEASE.2023-08-31T15-31-16Z
docker.io/minio/minio:RELEASE.2023-08-31T15-31-16Z
root@harbor:~# 

Run the MinIO container:

root@harbor:~# docker run --name minio \
> -p 9000:9000 \
> -p 9999:9999 \
> -d --restart=always \
> -e "MINIO_ROOT_USER=admin" \
> -e "MINIO_ROOT_PASSWORD=12345678" \
> -v /data/minio/data:/data \
> minio/minio:RELEASE.2023-08-31T15-31-16Z server /data \
> --console-address '0.0.0.0:9999'
ba5e511da5f30a17614d719979e28066788ca7520d87c67077a38389e70423f1
root@harbor:~# 

If not specified, the default username and password are minioadmin/minioadmin; they can be customized through environment variables (MINIO_ROOT_USER sets the username, MINIO_ROOT_PASSWORD sets that user's password).

5.3 Log in to the MinIO web console

(Figures: MinIO web console login page and dashboard)

5.4 Create a bucket in MinIO

(Figure: creating the velerodata bucket in the MinIO console)

5.5 Verify the bucket

(Figure: the velerodata bucket listed in the MinIO console)
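
The bucket steps can also be done from the command line with the MinIO client (mc); a minimal sketch, assuming mc is installed on a host that can reach MinIO at 192.168.0.42:9000 (the s3Url used during the Velero install later):

mc alias set velero-minio http://192.168.0.42:9000 admin 12345678
mc mb velero-minio/velerodata     # create the bucket
mc ls velero-minio                # verify it exists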

6. Deploying Velero on the Master Node

6.1 Download the Velero client

Download the Velero client archive:

root@k8s-master01:/usr/local/src# wget https://github.com/vmware-tanzu/velero/releases/download/v1.11.1/velero-v1.11.1-linux-amd64.tar.gz

Extract the archive:

root@k8s-master01:/usr/local/src# ll
total 99344
drwxr-xr-x  3 root root     4096 Sep  2 12:38 ./
drwxr-xr-x 10 root root     4096 Feb 17  2023 ../
drwxr-xr-x  2 root root     4096 Oct 21  2015 bin/
-rw-r--r--  1 root root 64845365 May 31 13:21 buildkit-v0.11.6.linux-amd64.tar.gz
-rw-r--r--  1 root root 36864459 Sep  2 12:31 velero-v1.11.1-linux-amd64.tar.gz
root@k8s-master01:/usr/local/src# tar xf velero-v1.11.1-linux-amd64.tar.gz 
root@k8s-master01:/usr/local/src# ll
total 99348
drwxr-xr-x  4 root root     4096 Sep  2 12:39 ./
drwxr-xr-x 10 root root     4096 Feb 17  2023 ../
drwxr-xr-x  2 root root     4096 Oct 21  2015 bin/
-rw-r--r--  1 root root 64845365 May 31 13:21 buildkit-v0.11.6.linux-amd64.tar.gz
drwxr-xr-x  3 root root     4096 Sep  2 12:39 velero-v1.11.1-linux-amd64/
-rw-r--r--  1 root root 36864459 Sep  2 12:31 velero-v1.11.1-linux-amd64.tar.gz
root@k8s-master01:/usr/local/src# 

Copy the velero binary to /usr/local/bin:

root@k8s-master01:/usr/local/src# ll velero-v1.11.1-linux-amd64
total 83780
drwxr-xr-x 3 root root     4096 Sep  2 12:39 ./
drwxr-xr-x 4 root root     4096 Sep  2 12:39 ../
-rw-r--r-- 1 root root    10255 Dec 13  2022 LICENSE
drwxr-xr-x 4 root root     4096 Sep  2 12:39 examples/
-rwxr-xr-x 1 root root 85765416 Jul 25 08:43 velero*
root@k8s-master01:/usr/local/src# cp velero-v1.11.1-linux-amd64/velero /usr/local/bin/
root@k8s-master01:/usr/local/src# 

Verify that the velero command is executable:

root@k8s-master01:/usr/local/src# velero --help
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.

If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.

Usage:
  velero [command]

Available Commands:
  backup            Work with backups
  backup-location   Work with backup storage locations
  bug               Report a Velero bug
  client            Velero client related commands
  completion        Generate completion script
  create            Create velero resources
  debug             Generate debug bundle
  delete            Delete velero resources
  describe          Describe velero resources
  get               Get velero resources
  help              Help about any command
  install           Install Velero
  plugin            Work with plugins
  repo              Work with repositories
  restore           Work with restores
  schedule          Work with schedules
  snapshot-location Work with snapshot locations
  uninstall         Uninstall Velero
  version           Print the velero version and associated image

Flags:
      --add_dir_header                   If true, adds the file directory to the header of the log messages
      --alsologtostderr                  log to standard error as well as files (no effect when -logtostderr=true)
      --colorized optionalBool           Show colored output in TTY. Overrides 'colorized' value from $HOME/.config/velero/config.json if present. Enabled by default
      --features stringArray             Comma-separated list of features to enable for this Velero process. Combines with values from $HOME/.config/velero/config.json if present
  -h, --help                             help for velero
      --kubeconfig string                Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
      --kubecontext string               The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context)
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory (no effect when -logtostderr=true)
      --log_file string                  If non-empty, use this log file (no effect when -logtostderr=true)
      --log_file_max_size uint           Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --logtostderr                      log to standard error instead of files (default true)
  -n, --namespace string                 The namespace in which Velero should operate (default "velero")
      --one_output                       If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
      --skip_headers                     If true, avoid header prefixes in the log messages
      --skip_log_headers                 If true, avoid headers when opening log files (no effect when -logtostderr=true)
      --stderrthreshold severity         logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
  -v, --v Level                          number for the log level verbosity
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

Use "velero [command] --help" for more information about a command.
root@k8s-master01:/usr/local/src# 

If the velero command executes normally, the Velero client tool is ready.
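
You can also check the client version explicitly; --client-only avoids querying the server, which is not installed yet. A sketch:

velero version --client-only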

6.2 Configure the Velero authentication environment

6.2.1 Create the Velero working directory

root@k8s-master01:/usr/local/src# mkdir  /data/velero -p
root@k8s-master01:/usr/local/src# cd /data/velero/
root@k8s-master01:/data/velero# ll
total 8
drwxr-xr-x 2 root root 4096 Sep  2 12:42 ./
drwxr-xr-x 3 root root 4096 Sep  2 12:42 ../
root@k8s-master01:/data/velero# 

6.2.2 Create the MinIO credentials file

root@k8s-master01:/data/velero# ll
total 12
drwxr-xr-x 2 root root 4096 Sep  2 12:43 ./
drwxr-xr-x 3 root root 4096 Sep  2 12:42 ../
-rw-r--r-- 1 root root   69 Sep  2 12:43 velero-auth.txt
root@k8s-master01:/data/velero# cat velero-auth.txt 
[default]
aws_access_key_id = admin
aws_secret_access_key = 12345678
root@k8s-master01:/data/velero# 

The velero-auth.txt file records the username and password for accessing the MinIO object storage: the aws_access_key_id variable specifies the username and aws_secret_access_key specifies its password. These two variable names are fixed and must not be changed.

6.2.3 Prepare the user CSR file

root@k8s-master01:/data/velero# ll 
total 16
drwxr-xr-x 2 root root 4096 Sep  2 12:48 ./
drwxr-xr-x 3 root root 4096 Sep  2 12:42 ../
-rw-r--r-- 1 root root  222 Sep  2 12:48 awsuser-csr.json
-rw-r--r-- 1 root root   69 Sep  2 12:43 velero-auth.txt
root@k8s-master01:/data/velero# cat awsuser-csr.json 
{
  "CN": "awsuser",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "SiChuan",
      "L": "GuangYuan",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
root@k8s-master01:/data/velero# 

This file supplies the information needed to issue the certificate.

6.2.4 Prepare the certificate-signing environment

Install the certificate-signing tool via apt (or download the standalone binaries below):

root@k8s-master01:/data/velero# apt install golang-cfssl

Download cfssl:

root@k8s-master01:/data/velero# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64

Download cfssljson:

root@k8s-master01:/data/velero# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 

Download cfssl-certinfo:

root@k8s-master01:/data/velero# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

Rename the binaries:

root@k8s-master01:/data/velero# ll
total 40248
drwxr-xr-x 2 root root     4096 Sep  2 12:57 ./
drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
-rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
-rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo_1.6.1_linux_amd64
-rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl_1.6.1_linux_amd64
-rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson_1.6.1_linux_amd64
-rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
root@k8s-master01:/data/velero# mv cfssl-certinfo_1.6.1_linux_amd64 cfssl-certinfo
root@k8s-master01:/data/velero# mv cfssl_1.6.1_linux_amd64 cfssl
root@k8s-master01:/data/velero# mv cfssljson_1.6.1_linux_amd64 cfssljson
root@k8s-master01:/data/velero# ll
total 40248
drwxr-xr-x 2 root root     4096 Sep  2 12:58 ./
drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
-rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
-rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl
-rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo
-rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson
-rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
root@k8s-master01:/data/velero# 

Copy the binaries to /usr/local/bin/:

root@k8s-master01:/data/velero# cp cfssl-certinfo cfssl cfssljson /usr/local/bin/

Make them executable:

root@k8s-master01:/data/velero# chmod  a+x /usr/local/bin/cfssl*
root@k8s-master01:/data/velero# ll /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 16659824 Sep  2 12:59 /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 13502544 Sep  2 12:59 /usr/local/bin/cfssl-certinfo*
-rwxr-xr-x 1 root root 11029744 Sep  2 12:59 /usr/local/bin/cfssljson*
root@k8s-master01:/data/velero# 
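
As a quick smoke test that the tools run, you can print the cfssl version; a sketch:

cfssl version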

6.2.5 Issue the certificate

Copy the ca-config.json used when deploying the Kubernetes cluster to /data/velero:

root@k8s-deploy:~# scp /etc/kubeasz/clusters/k8s-cluster01/ssl/ca-config.json 192.168.0.31:/data/velero
ca-config.json                                                                                                       100%  459   203.8KB/s   00:00    
root@k8s-deploy:~# 

Verify that ca-config.json was copied correctly:

root@k8s-master01:/data/velero# ll 
total 40252
drwxr-xr-x 2 root root     4096 Sep  2 13:03 ./
drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
-rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
-rw-r--r-- 1 root root      459 Sep  2 13:03 ca-config.json
-rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl
-rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo
-rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson
-rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
root@k8s-master01:/data/velero# 

Sign the certificate:

root@k8s-master01:/data/velero# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=./ca-config.json -profile=kubernetes ./awsuser-csr.json | cfssljson -bare awsuser
2023/09/02 13:05:37 [INFO] generate received request
2023/09/02 13:05:37 [INFO] received CSR
2023/09/02 13:05:37 [INFO] generating key: rsa-2048
2023/09/02 13:05:38 [INFO] encoded CSR
2023/09/02 13:05:38 [INFO] signed certificate with serial number 309924608852958492895277791638870960844710474947
2023/09/02 13:05:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

6.2.6 Verify the certificate files

root@k8s-master01:/data/velero# ll
total 40264
drwxr-xr-x 2 root root     4096 Sep  2 13:05 ./
drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
-rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
-rw------- 1 root root     1679 Sep  2 13:05 awsuser-key.pem
-rw-r--r-- 1 root root     1001 Sep  2 13:05 awsuser.csr
-rw-r--r-- 1 root root     1391 Sep  2 13:05 awsuser.pem
-rw-r--r-- 1 root root      459 Sep  2 13:03 ca-config.json
-rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl
-rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo
-rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson
-rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
root@k8s-master01:/data/velero# 
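
Beyond checking that the files exist, the issued certificate's subject, issuer, and validity can be inspected with cfssl-certinfo; a sketch:

cfssl-certinfo -cert awsuser.pem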

6.2.7 Copy the certificate into the API server certificate directory

root@k8s-master01:/data/velero# cp awsuser-key.pem /etc/kubernetes/ssl/
root@k8s-master01:/data/velero# cp awsuser.pem /etc/kubernetes/ssl/
root@k8s-master01:/data/velero# ll /etc/kubernetes/ssl/
total 48
drwxr-xr-x 2 root root 4096 Sep  2 13:07 ./
drwxr-xr-x 3 root root 4096 Apr 22 14:56 ../
-rw-r--r-- 1 root root 1679 Apr 22 14:54 aggregator-proxy-key.pem
-rw-r--r-- 1 root root 1387 Apr 22 14:54 aggregator-proxy.pem
-rw------- 1 root root 1679 Sep  2 13:07 awsuser-key.pem
-rw-r--r-- 1 root root 1391 Sep  2 13:07 awsuser.pem
-rw-r--r-- 1 root root 1679 Apr 22 14:10 ca-key.pem
-rw-r--r-- 1 root root 1310 Apr 22 14:10 ca.pem
-rw-r--r-- 1 root root 1679 Apr 22 14:56 kubelet-key.pem
-rw-r--r-- 1 root root 1460 Apr 22 14:56 kubelet.pem
-rw-r--r-- 1 root root 1679 Apr 22 14:54 kubernetes-key.pem
-rw-r--r-- 1 root root 1655 Apr 22 14:54 kubernetes.pem
root@k8s-master01:/data/velero#

6.3 Generate the cluster kubeconfig file

root@k8s-master01:/data/velero# export KUBE_APISERVER="https://192.168.0.111:6443"
root@k8s-master01:/data/velero# kubectl config set-cluster kubernetes \
> --certificate-authority=/etc/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=./awsuser.kubeconfig
Cluster "kubernetes" set.
root@k8s-master01:/data/velero# ll
total 40268
drwxr-xr-x 2 root root     4096 Sep  2 13:12 ./
drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
-rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
-rw------- 1 root root     1679 Sep  2 13:05 awsuser-key.pem
-rw-r--r-- 1 root root     1001 Sep  2 13:05 awsuser.csr
-rw------- 1 root root     1951 Sep  2 13:12 awsuser.kubeconfig
-rw-r--r-- 1 root root     1391 Sep  2 13:05 awsuser.pem
-rw-r--r-- 1 root root      459 Sep  2 13:03 ca-config.json
-rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl
-rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo
-rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson
-rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
root@k8s-master01:/data/velero# cat awsuser.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVTW01blNKSUtCdGNmeXY3MVlZZy91QlBsT3JZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJeU1UTXpNekF3V2hnUE1qRXlNekF6TWpreE16TXpNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTRmdWtncjl2ditQWVVtQmZnWjUKTVJIOTZRekErMVgvZG5hUlpzN1lPZjZMaEZ5ZWJxUTFlM3k2bmN3Tk90WUkyemJ3SVJKL0c3YTNsTSt0Qk5sTQpwdE5Db1lxalF4WVY2YkpOcGNIRFJldTY0Z1BYcHhHY1FNZGE2Q1VhVTBrNENMZ0I2ZGx1OE8rUTdaL1dNeWhTClZQMWp5dEpnK1I4UGZRUWVzdnlTanBzaUM4cmdUQjc2VWU0ZXJqaEFwb2JSbzRILzN2cGhVUXRLNTBQSWVVNlgKTnpuTVNONmdLMXRqSjZPSStlVkE1dWdTTnFOc3FVSXFHWmhmZXZSeFBhNzVBbDhrbmRxc3cyTm5WSFFOZmpGUApZR3lNOFlncllUWm9sa2RGYk9Wb2g0U3pncTFnclc0dzBpMnpySVlJTzAzNTBEODh4RFRGRTBka3FPSlRVb0JyCmtRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVU5SjZoekJaOTNZMklac1ZYYUYwZk1uZ0crS1V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZLNwpjZ3l3UnI4aWt4NmpWMUYwVUNJRGxEN0FPQ3dTcE1Odithd1Zyd2k4Mk5xL3hpL2RjaGU1TjhJUkFEUkRQTHJUClRRS2M4M2FURXM1dnpKczd5Nnl6WHhEbUZocGxrY3NoenVhQkdFSkhpbGpuSHJ0Z09tL1ZQck5QK3hhWXdUNHYKZFNOdEIrczgxNGh6OWhaSitmTHRMb1RBS2tMUjVMRjkyQjF2c0JsVnlkaUhLSnF6MCtORkdJMzdiY1pvc0cxdwpwbVpROHgyWUFxWHE2VFlUQnoxLzR6UGlSM3FMQmxtRkNMZVJCa1RJb2VhUkFxU2ZkeDRiVlhGeTlpQ1lnTHU4CjVrcmQzMEdmZU5pRUpZVWJtZzNxcHNVSUlQTmUvUDdHNU0raS9GSlpDcFBOQ3Y4aS9MQ0Z2cVhPbThvYmdYYm8KeDNsZWpWVlZ6eG9yNEtOd3pUZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.0.111:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
root@k8s-master01:/data/velero# 

6.3.1 Set the client certificate credentials

root@k8s-master01:/data/velero# kubectl config set-credentials awsuser \
> --client-certificate=/etc/kubernetes/ssl/awsuser.pem \
> --client-key=/etc/kubernetes/ssl/awsuser-key.pem \
> --embed-certs=true \
> --kubeconfig=./awsuser.kubeconfig
User "awsuser" set.
root@k8s-master01:/data/velero# cat awsuser.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVTW01blNKSUtCdGNmeXY3MVlZZy91QlBsT3JZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJeU1UTXpNekF3V2hnUE1qRXlNekF6TWpreE16TXpNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTRmdWtncjl2ditQWVVtQmZnWjUKTVJIOTZRekErMVgvZG5hUlpzN1lPZjZMaEZ5ZWJxUTFlM3k2bmN3Tk90WUkyemJ3SVJKL0c3YTNsTSt0Qk5sTQpwdE5Db1lxalF4WVY2YkpOcGNIRFJldTY0Z1BYcHhHY1FNZGE2Q1VhVTBrNENMZ0I2ZGx1OE8rUTdaL1dNeWhTClZQMWp5dEpnK1I4UGZRUWVzdnlTanBzaUM4cmdUQjc2VWU0ZXJqaEFwb2JSbzRILzN2cGhVUXRLNTBQSWVVNlgKTnpuTVNONmdLMXRqSjZPSStlVkE1dWdTTnFOc3FVSXFHWmhmZXZSeFBhNzVBbDhrbmRxc3cyTm5WSFFOZmpGUApZR3lNOFlncllUWm9sa2RGYk9Wb2g0U3pncTFnclc0dzBpMnpySVlJTzAzNTBEODh4RFRGRTBka3FPSlRVb0JyCmtRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVU5SjZoekJaOTNZMklac1ZYYUYwZk1uZ0crS1V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZLNwpjZ3l3UnI4aWt4NmpWMUYwVUNJRGxEN0FPQ3dTcE1Odithd1Zyd2k4Mk5xL3hpL2RjaGU1TjhJUkFEUkRQTHJUClRRS2M4M2FURXM1dnpKczd5Nnl6WHhEbUZocGxrY3NoenVhQkdFSkhpbGpuSHJ0Z09tL1ZQck5QK3hhWXdUNHYKZFNOdEIrczgxNGh6OWhaSitmTHRMb1RBS2tMUjVMRjkyQjF2c0JsVnlkaUhLSnF6MCtORkdJMzdiY1pvc0cxdwpwbVpROHgyWUFxWHE2VFlUQnoxLzR6UGlSM3FMQmxtRkNMZVJCa1RJb2VhUkFxU2ZkeDRiVlhGeTlpQ1lnTHU4CjVrcmQzMEdmZU5pRUpZVWJtZzNxcHNVSUlQTmUvUDdHNU0raS9GSlpDcFBOQ3Y4aS9MQ0Z2cVhPbThvYmdYYm8KeDNsZWpWVlZ6eG9yNEtOd3pUZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.0.111:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: awsuser
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVTmttQUJ6ZjVhdCtoZC9vYmtONXBVV3JWOU1Nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13T1RBeU1UTXdNVEF3V2hnUE1qQTNNekE0TWpBeE16QXhNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJBd0RnWURWUVFJRXdkVGFVTm9kV0Z1TVJJd0VBWURWUVFIRXdsSGRXRnVaMWwxWVc0eApEREFLQmdOVkJBb1RBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJBd0RnWURWUVFERXdkaGQzTjFjMlZ5Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12ekkKSHNqRmFZNzNmTm5aWVhqU0lsVEJKeDNqY1dYVGh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOAorVVFYTkFhTVYxOFhVaGdvSmJZaHRCWStpSGhjK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBECjNIZG1TUGJ0am0xRkVWaTFkVHpSeVhDSWxrTkJFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejMKaTlBS3ArOUhENUV6bE5QaVUwY1FlZkxERGEwRXp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNQpuVG1NNlNucEh5aFltNW5sd2FRdUZvekY2bWt4UzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6CkxRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGREtyYkphanpDTTNzM2ZHUzBtUwpLV0lHbm5XM01COEdBMVVkSXdRWU1CYUFGUFNlb2N3V2ZkMk5pR2JGVjJoZEh6SjRCdmlsTUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQXd4b043eUNQZzFRQmJRcTNWT1JYUFVvcXRDVjhSUHFjd3V4bTJWZkVmVmdPMFZYanQKTHR5aEl2RDlubEsyazNmTFpWTVc2MWFzbVhtWkttUTh3YkZtL1RieE83ZkdJSWdpSzJKOGpWWHZYRnhNeExZNQpRVjcvd3QxUUluWjJsTjBsM0c3TGhkYjJ4UjFORmd1eWNXdWtWV3JKSWtpcU1Ma0lOLzdPSFhtSFZXazV1a1ZlCmNoYmVIdnJSSXRRNHBPYjlFZVgzTUxiZXBkRjJ4TWs5NmZrVXJGWmhKYWREVnB5NXEwbHFpUVJkMVpIWk4xSkMKWVBrZGRXdVQxbHNXaWJzN3BWTHRXMXlnV0JlS2hKQ0FVeTlUZEZ1WEt1QUZvdVJKUUJWQUs4dTFHRU1aL2JEYgp2eXRxN2N6ZndWOFNreVNpKzZHcldZRVBXUXllUEVEWjBPU1oKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12eklIc2pGYVk3M2ZOblpZWGpTSWxUQkp4M2pjV1hUCmh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOCtVUVhOQWFNVjE4WFVoZ29KYllodEJZK2lIaGMKK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBEM0hkbVNQYnRqbTFGRVZpMWRUelJ5WENJbGtOQgpFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejNpOUFLcCs5SEQ1RXpsTlBpVTBjUWVmTEREYTBFCnp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNW5UbU02U25wSHloWW01bmx3YVF1Rm96RjZta3gKUzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6TFFJREFRQUJBb0lCQVFDTWVnb2RSNndUcHliKwp5WklJcXBkbnpBamVtWktnd0ZET2tRWnFTS1NsREZsY0Y0ZWRueHE3WGJXV2RQZzRuREtxNHNqSnJGVlNzUW12CkNFWnVlNWxVUkt6QWRGVlkzeFdpQnhOMUJYek5jN3hkZFVqRUNOOUNEYUNiOS9nUWEzS2RiK2VVTE1yT3lZYVgKYW5xY2J3YXVBWEFTT0tZYmZxcjA2T2R2a1dwLzIxWnF2bnAvdmZrN3dIYzduUktLWmt1ZVg0bVExdFFqdVBoZQpDQXN5WWZOeWM0VjVyUDF1K3AzTGU4Ly9sTXZQZ0wydFBib3NaaGYvM0dCUGpPZHFabVdnL0R5blhzN21qcnhqCng2OHJOcHIxU2ZhQUNOMjNQdE9HbXcreXh4NjdENTNUSVJUaXZIY2Izd0FIMnNRdkVzbG9HN0lMU0d2THJ1S3IKS0c2RkQwb0JBb0dCQVBsWFdydWxQa3B6bzl5ZUJnd3hudmlyN2x2THZrVkV1Q3ZmRVg2MHViYm5HOVVsZm1BQgpEaVduOFcvUkVHVjE0cFBtcjQ2eE5QLzkrb1p3cDNRNUMzbFNocENVWEVxZjVHTzBUSXdSb1NVdndKcUo2UHc0Cm4yb0xEbXBNS3k5bkZEcFFCQTFoYUZSQnZJd3ZxOXdHc0NmK3Fyc3pTNHM2bHp1Qm1KVDZXYUdCQW9HQkFNNnYKSWJrSXJnVW54NlpuNngyRjVmSEQ0YVFydWRLbWFScnN3NzV4dFY0ZVlEdU1ZVVIrYWxwY2llVDZ4Y1Z3NUFvbQp6Q2o1VUNsejZJZ3pJc2MyTGRMR3JOeDVrcUFTMzA1K0UxaVdEeGh6UG44bUhETkI2NGY5WTVYdjJ6bm9maWVsCmNKd2pBaE5OZlR1ck45ODR5RXpQL0tHa1NsbGNxdHFsOVF6VVZrK3RBb0dCQU81c0RGTy85NmRqbW4yY0VYWloKZ0lTU2l2TDJDUFBkZVNwaVBDMW5qT29MWmI3VUFscTB4NTFVVVBhMTk3SzlIYktGZEx2Q1VVYXp5bm9CZ080TwptaDBodjVEQ2ZOblN1S1pxUW9QeFc2RGVYNUttYXNXN014eEloRGs2cWxUQ2dVSWRQeks0UVBYSWdnMmVpL3h4CjNNSHhyN29mbTQzL3NacnlHai9pZ0JDQkFvR0FWb3BzRTE3NEJuNldrUzIzKzUraUhXNElYOFpUUTBtY2ZzS2UKWDNLYkgzS1dscmg3emNNazR2c1dYZ05HcGhwVDBaQlhNZHphWE5FRWoycmg2QW5lZS8vbVIxYThOenhQdGowQgorcml5VDJtSnhKRi9nMUxadlJJekRZZm1Ba1EvOW5mR1JBcEFoemFOOWxzRnhQaXduY0VFcGVYMW41ODJodUN3ClQ1UGxJKzBDZ1lFQTN1WmptcXl1U0ZsUGR0Q3NsN1ZDUWc4K0N6L1hBTUNQZGt0SmF1bng5VWxtWVZXVFQzM2oKby9uVVRPVHY1TWZPTm9wejVYOXM4SCsyeXFOdWpna2NHZmFTeHFTNlBkbWNhcTJGMTYxTDdTR0JDb2w1MVQ5ZwpXQkRObnlqOFprSkQxd2pRNkNDWG4zNDZIMS9YREZjbmhnc2c2UHRjTGh3RC8yS0l3eFVmdzFBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
root@k8s-master01:/data/velero# 

6.3.2 Set the context parameters

root@k8s-master01:/data/velero# kubectl config set-context kubernetes \
> --cluster=kubernetes \
> --user=awsuser \
> --namespace=velero-system \
> --kubeconfig=./awsuser.kubeconfig
Context "kubernetes" created.
root@k8s-master01:/data/velero# cat awsuser.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVTW01blNKSUtCdGNmeXY3MVlZZy91QlBsT3JZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJeU1UTXpNekF3V2hnUE1qRXlNekF6TWpreE16TXpNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTRmdWtncjl2ditQWVVtQmZnWjUKTVJIOTZRekErMVgvZG5hUlpzN1lPZjZMaEZ5ZWJxUTFlM3k2bmN3Tk90WUkyemJ3SVJKL0c3YTNsTSt0Qk5sTQpwdE5Db1lxalF4WVY2YkpOcGNIRFJldTY0Z1BYcHhHY1FNZGE2Q1VhVTBrNENMZ0I2ZGx1OE8rUTdaL1dNeWhTClZQMWp5dEpnK1I4UGZRUWVzdnlTanBzaUM4cmdUQjc2VWU0ZXJqaEFwb2JSbzRILzN2cGhVUXRLNTBQSWVVNlgKTnpuTVNONmdLMXRqSjZPSStlVkE1dWdTTnFOc3FVSXFHWmhmZXZSeFBhNzVBbDhrbmRxc3cyTm5WSFFOZmpGUApZR3lNOFlncllUWm9sa2RGYk9Wb2g0U3pncTFnclc0dzBpMnpySVlJTzAzNTBEODh4RFRGRTBka3FPSlRVb0JyCmtRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVU5SjZoekJaOTNZMklac1ZYYUYwZk1uZ0crS1V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZLNwpjZ3l3UnI4aWt4NmpWMUYwVUNJRGxEN0FPQ3dTcE1Odithd1Zyd2k4Mk5xL3hpL2RjaGU1TjhJUkFEUkRQTHJUClRRS2M4M2FURXM1dnpKczd5Nnl6WHhEbUZocGxrY3NoenVhQkdFSkhpbGpuSHJ0Z09tL1ZQck5QK3hhWXdUNHYKZFNOdEIrczgxNGh6OWhaSitmTHRMb1RBS2tMUjVMRjkyQjF2c0JsVnlkaUhLSnF6MCtORkdJMzdiY1pvc0cxdwpwbVpROHgyWUFxWHE2VFlUQnoxLzR6UGlSM3FMQmxtRkNMZVJCa1RJb2VhUkFxU2ZkeDRiVlhGeTlpQ1lnTHU4CjVrcmQzMEdmZU5pRUpZVWJtZzNxcHNVSUlQTmUvUDdHNU0raS9GSlpDcFBOQ3Y4aS9MQ0Z2cVhPbThvYmdYYm8KeDNsZWpWVlZ6eG9yNEtOd3pUZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.0.111:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: velero-system
    user: awsuser
  name: kubernetes
current-context: ""
kind: Config
preferences: {}
users:
- name: awsuser
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVTmttQUJ6ZjVhdCtoZC9vYmtONXBVV3JWOU1Nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13T1RBeU1UTXdNVEF3V2hnUE1qQTNNekE0TWpBeE16QXhNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJBd0RnWURWUVFJRXdkVGFVTm9kV0Z1TVJJd0VBWURWUVFIRXdsSGRXRnVaMWwxWVc0eApEREFLQmdOVkJBb1RBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJBd0RnWURWUVFERXdkaGQzTjFjMlZ5Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12ekkKSHNqRmFZNzNmTm5aWVhqU0lsVEJKeDNqY1dYVGh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOAorVVFYTkFhTVYxOFhVaGdvSmJZaHRCWStpSGhjK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBECjNIZG1TUGJ0am0xRkVWaTFkVHpSeVhDSWxrTkJFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejMKaTlBS3ArOUhENUV6bE5QaVUwY1FlZkxERGEwRXp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNQpuVG1NNlNucEh5aFltNW5sd2FRdUZvekY2bWt4UzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6CkxRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGREtyYkphanpDTTNzM2ZHUzBtUwpLV0lHbm5XM01COEdBMVVkSXdRWU1CYUFGUFNlb2N3V2ZkMk5pR2JGVjJoZEh6SjRCdmlsTUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQXd4b043eUNQZzFRQmJRcTNWT1JYUFVvcXRDVjhSUHFjd3V4bTJWZkVmVmdPMFZYanQKTHR5aEl2RDlubEsyazNmTFpWTVc2MWFzbVhtWkttUTh3YkZtL1RieE83ZkdJSWdpSzJKOGpWWHZYRnhNeExZNQpRVjcvd3QxUUluWjJsTjBsM0c3TGhkYjJ4UjFORmd1eWNXdWtWV3JKSWtpcU1Ma0lOLzdPSFhtSFZXazV1a1ZlCmNoYmVIdnJSSXRRNHBPYjlFZVgzTUxiZXBkRjJ4TWs5NmZrVXJGWmhKYWREVnB5NXEwbHFpUVJkMVpIWk4xSkMKWVBrZGRXdVQxbHNXaWJzN3BWTHRXMXlnV0JlS2hKQ0FVeTlUZEZ1WEt1QUZvdVJKUUJWQUs4dTFHRU1aL2JEYgp2eXRxN2N6ZndWOFNreVNpKzZHcldZRVBXUXllUEVEWjBPU1oKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12eklIc2pGYVk3M2ZOblpZWGpTSWxUQkp4M2pjV1hUCmh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOCtVUVhOQWFNVjE4WFVoZ29KYllodEJZK2lIaGMKK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBEM0hkbVNQYnRqbTFGRVZpMWRUelJ5WENJbGtOQgpFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejNpOUFLcCs5SEQ1RXpsTlBpVTBjUWVmTEREYTBFCnp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNW5UbU02U25wSHloWW01bmx3YVF1Rm96RjZta3gKUzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6TFFJREFRQUJBb0lCQVFDTWVnb2RSNndUcHliKwp5WklJcXBkbnpBamVtWktnd0ZET2tRWnFTS1NsREZsY0Y0ZWRueHE3WGJXV2RQZzRuREtxNHNqSnJGVlNzUW12CkNFWnVlNWxVUkt6QWRGVlkzeFdpQnhOMUJYek5jN3hkZFVqRUNOOUNEYUNiOS9nUWEzS2RiK2VVTE1yT3lZYVgKYW5xY2J3YXVBWEFTT0tZYmZxcjA2T2R2a1dwLzIxWnF2bnAvdmZrN3dIYzduUktLWmt1ZVg0bVExdFFqdVBoZQpDQXN5WWZOeWM0VjVyUDF1K3AzTGU4Ly9sTXZQZ0wydFBib3NaaGYvM0dCUGpPZHFabVdnL0R5blhzN21qcnhqCng2OHJOcHIxU2ZhQUNOMjNQdE9HbXcreXh4NjdENTNUSVJUaXZIY2Izd0FIMnNRdkVzbG9HN0lMU0d2THJ1S3IKS0c2RkQwb0JBb0dCQVBsWFdydWxQa3B6bzl5ZUJnd3hudmlyN2x2THZrVkV1Q3ZmRVg2MHViYm5HOVVsZm1BQgpEaVduOFcvUkVHVjE0cFBtcjQ2eE5QLzkrb1p3cDNRNUMzbFNocENVWEVxZjVHTzBUSXdSb1NVdndKcUo2UHc0Cm4yb0xEbXBNS3k5bkZEcFFCQTFoYUZSQnZJd3ZxOXdHc0NmK3Fyc3pTNHM2bHp1Qm1KVDZXYUdCQW9HQkFNNnYKSWJrSXJnVW54NlpuNngyRjVmSEQ0YVFydWRLbWFScnN3NzV4dFY0ZVlEdU1ZVVIrYWxwY2llVDZ4Y1Z3NUFvbQp6Q2o1VUNsejZJZ3pJc2MyTGRMR3JOeDVrcUFTMzA1K0UxaVdEeGh6UG44bUhETkI2NGY5WTVYdjJ6bm9maWVsCmNKd2pBaE5OZlR1ck45ODR5RXpQL0tHa1NsbGNxdHFsOVF6VVZrK3RBb0dCQU81c0RGTy85NmRqbW4yY0VYWloKZ0lTU2l2TDJDUFBkZVNwaVBDMW5qT29MWmI3VUFscTB4NTFVVVBhMTk3SzlIYktGZEx2Q1VVYXp5bm9CZ080TwptaDBodjVEQ2ZOblN1S1pxUW9QeFc2RGVYNUttYXNXN014eEloRGs2cWxUQ2dVSWRQeks0UVBYSWdnMmVpL3h4CjNNSHhyN29mbTQzL3NacnlHai9pZ0JDQkFvR0FWb3BzRTE3NEJuNldrUzIzKzUraUhXNElYOFpUUTBtY2ZzS2UKWDNLYkgzS1dscmg3emNNazR2c1dYZ05HcGhwVDBaQlhNZHphWE5FRWoycmg2QW5lZS8vbVIxYThOenhQdGowQgorcml5VDJtSnhKRi9nMUxadlJJekRZZm1Ba1EvOW5mR1JBcEFoemFOOWxzRnhQaXduY0VFcGVYMW41ODJodUN3ClQ1UGxJKzBDZ1lFQTN1WmptcXl1U0ZsUGR0Q3NsN1ZDUWc4K0N6L1hBTUNQZGt0SmF1bng5VWxtWVZXVFQzM2oKby9uVVRPVHY1TWZPTm9wejVYOXM4SCsyeXFOdWpna2NHZmFTeHFTNlBkbWNhcTJGMTYxTDdTR0JDb2w1MVQ5ZwpXQkRObnlqOFprSkQxd2pRNkNDWG4zNDZIMS9YREZjbmhnc2c2UHRjTGh3RC8yS0l3eFVmdzFBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
root@k8s-master01:/data/velero# 

6.3.3 Set the default context

root@k8s-master01:/data/velero# kubectl config use-context kubernetes --kubeconfig=awsuser.kubeconfig
Switched to context "kubernetes".
root@k8s-master01:/data/velero# cat awsuser.kubeconfig             
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVTW01blNKSUtCdGNmeXY3MVlZZy91QlBsT3JZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJeU1UTXpNekF3V2hnUE1qRXlNekF6TWpreE16TXpNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTRmdWtncjl2ditQWVVtQmZnWjUKTVJIOTZRekErMVgvZG5hUlpzN1lPZjZMaEZ5ZWJxUTFlM3k2bmN3Tk90WUkyemJ3SVJKL0c3YTNsTSt0Qk5sTQpwdE5Db1lxalF4WVY2YkpOcGNIRFJldTY0Z1BYcHhHY1FNZGE2Q1VhVTBrNENMZ0I2ZGx1OE8rUTdaL1dNeWhTClZQMWp5dEpnK1I4UGZRUWVzdnlTanBzaUM4cmdUQjc2VWU0ZXJqaEFwb2JSbzRILzN2cGhVUXRLNTBQSWVVNlgKTnpuTVNONmdLMXRqSjZPSStlVkE1dWdTTnFOc3FVSXFHWmhmZXZSeFBhNzVBbDhrbmRxc3cyTm5WSFFOZmpGUApZR3lNOFlncllUWm9sa2RGYk9Wb2g0U3pncTFnclc0dzBpMnpySVlJTzAzNTBEODh4RFRGRTBka3FPSlRVb0JyCmtRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVU5SjZoekJaOTNZMklac1ZYYUYwZk1uZ0crS1V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZLNwpjZ3l3UnI4aWt4NmpWMUYwVUNJRGxEN0FPQ3dTcE1Odithd1Zyd2k4Mk5xL3hpL2RjaGU1TjhJUkFEUkRQTHJUClRRS2M4M2FURXM1dnpKczd5Nnl6WHhEbUZocGxrY3NoenVhQkdFSkhpbGpuSHJ0Z09tL1ZQck5QK3hhWXdUNHYKZFNOdEIrczgxNGh6OWhaSitmTHRMb1RBS2tMUjVMRjkyQjF2c0JsVnlkaUhLSnF6MCtORkdJMzdiY1pvc0cxdwpwbVpROHgyWUFxWHE2VFlUQnoxLzR6UGlSM3FMQmxtRkNMZVJCa1RJb2VhUkFxU2ZkeDRiVlhGeTlpQ1lnTHU4CjVrcmQzMEdmZU5pRUpZVWJtZzNxcHNVSUlQTmUvUDdHNU0raS9GSlpDcFBOQ3Y4aS9MQ0Z2cVhPbThvYmdYYm8KeDNsZWpWVlZ6eG9yNEtOd3pUZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.0.111:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: velero-system
    user: awsuser
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: awsuser
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVTmttQUJ6ZjVhdCtoZC9vYmtONXBVV3JWOU1Nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13T1RBeU1UTXdNVEF3V2hnUE1qQTNNekE0TWpBeE16QXhNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJBd0RnWURWUVFJRXdkVGFVTm9kV0Z1TVJJd0VBWURWUVFIRXdsSGRXRnVaMWwxWVc0eApEREFLQmdOVkJBb1RBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJBd0RnWURWUVFERXdkaGQzTjFjMlZ5Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12ekkKSHNqRmFZNzNmTm5aWVhqU0lsVEJKeDNqY1dYVGh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOAorVVFYTkFhTVYxOFhVaGdvSmJZaHRCWStpSGhjK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBECjNIZG1TUGJ0am0xRkVWaTFkVHpSeVhDSWxrTkJFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejMKaTlBS3ArOUhENUV6bE5QaVUwY1FlZkxERGEwRXp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNQpuVG1NNlNucEh5aFltNW5sd2FRdUZvekY2bWt4UzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6CkxRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGREtyYkphanpDTTNzM2ZHUzBtUwpLV0lHbm5XM01COEdBMVVkSXdRWU1CYUFGUFNlb2N3V2ZkMk5pR2JGVjJoZEh6SjRCdmlsTUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQXd4b043eUNQZzFRQmJRcTNWT1JYUFVvcXRDVjhSUHFjd3V4bTJWZkVmVmdPMFZYanQKTHR5aEl2RDlubEsyazNmTFpWTVc2MWFzbVhtWkttUTh3YkZtL1RieE83ZkdJSWdpSzJKOGpWWHZYRnhNeExZNQpRVjcvd3QxUUluWjJsTjBsM0c3TGhkYjJ4UjFORmd1eWNXdWtWV3JKSWtpcU1Ma0lOLzdPSFhtSFZXazV1a1ZlCmNoYmVIdnJSSXRRNHBPYjlFZVgzTUxiZXBkRjJ4TWs5NmZrVXJGWmhKYWREVnB5NXEwbHFpUVJkMVpIWk4xSkMKWVBrZGRXdVQxbHNXaWJzN3BWTHRXMXlnV0JlS2hKQ0FVeTlUZEZ1WEt1QUZvdVJKUUJWQUs4dTFHRU1aL2JEYgp2eXRxN2N6ZndWOFNreVNpKzZHcldZRVBXUXllUEVEWjBPU1oKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12eklIc2pGYVk3M2ZOblpZWGpTSWxUQkp4M2pjV1hUCmh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOCtVUVhOQWFNVjE4WFVoZ29KYllodEJZK2lIaGMKK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBEM0hkbVNQYnRqbTFGRVZpMWRUelJ5WENJbGtOQgpFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejNpOUFLcCs5SEQ1RXpsTlBpVTBjUWVmTEREYTBFCnp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNW5UbU02U25wSHloWW01bmx3YVF1Rm96RjZta3gKUzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6TFFJREFRQUJBb0lCQVFDTWVnb2RSNndUcHliKwp5WklJcXBkbnpBamVtWktnd0ZET2tRWnFTS1NsREZsY0Y0ZWRueHE3WGJXV2RQZzRuREtxNHNqSnJGVlNzUW12CkNFWnVlNWxVUkt6QWRGVlkzeFdpQnhOMUJYek5jN3hkZFVqRUNOOUNEYUNiOS9nUWEzS2RiK2VVTE1yT3lZYVgKYW5xY2J3YXVBWEFTT0tZYmZxcjA2T2R2a1dwLzIxWnF2bnAvdmZrN3dIYzduUktLWmt1ZVg0bVExdFFqdVBoZQpDQXN5WWZOeWM0VjVyUDF1K3AzTGU4Ly9sTXZQZ0wydFBib3NaaGYvM0dCUGpPZHFabVdnL0R5blhzN21qcnhqCng2OHJOcHIxU2ZhQUNOMjNQdE9HbXcreXh4NjdENTNUSVJUaXZIY2Izd0FIMnNRdkVzbG9HN0lMU0d2THJ1S3IKS0c2RkQwb0JBb0dCQVBsWFdydWxQa3B6bzl5ZUJnd3hudmlyN2x2THZrVkV1Q3ZmRVg2MHViYm5HOVVsZm1BQgpEaVduOFcvUkVHVjE0cFBtcjQ2eE5QLzkrb1p3cDNRNUMzbFNocENVWEVxZjVHTzBUSXdSb1NVdndKcUo2UHc0Cm4yb0xEbXBNS3k5bkZEcFFCQTFoYUZSQnZJd3ZxOXdHc0NmK3Fyc3pTNHM2bHp1Qm1KVDZXYUdCQW9HQkFNNnYKSWJrSXJnVW54NlpuNngyRjVmSEQ0YVFydWRLbWFScnN3NzV4dFY0ZVlEdU1ZVVIrYWxwY2llVDZ4Y1Z3NUFvbQp6Q2o1VUNsejZJZ3pJc2MyTGRMR3JOeDVrcUFTMzA1K0UxaVdEeGh6UG44bUhETkI2NGY5WTVYdjJ6bm9maWVsCmNKd2pBaE5OZlR1ck45ODR5RXpQL0tHa1NsbGNxdHFsOVF6VVZrK3RBb0dCQU81c0RGTy85NmRqbW4yY0VYWloKZ0lTU2l2TDJDUFBkZVNwaVBDMW5qT29MWmI3VUFscTB4NTFVVVBhMTk3SzlIYktGZEx2Q1VVYXp5bm9CZ080TwptaDBodjVEQ2ZOblN1S1pxUW9QeFc2RGVYNUttYXNXN014eEloRGs2cWxUQ2dVSWRQeks0UVBYSWdnMmVpL3h4CjNNSHhyN29mbTQzL3NacnlHai9pZ0JDQkFvR0FWb3BzRTE3NEJuNldrUzIzKzUraUhXNElYOFpUUTBtY2ZzS2UKWDNLYkgzS1dscmg3emNNazR2c1dYZ05HcGhwVDBaQlhNZHphWE5FRWoycmg2QW5lZS8vbVIxYThOenhQdGowQgorcml5VDJtSnhKRi9nMUxadlJJekRZZm1Ba1EvOW5mR1JBcEFoemFOOWxzRnhQaXduY0VFcGVYMW41ODJodUN3ClQ1UGxJKzBDZ1lFQTN1WmptcXl1U0ZsUGR0Q3NsN1ZDUWc4K0N6L1hBTUNQZGt0SmF1bng5VWxtWVZXVFQzM2oKby9uVVRPVHY1TWZPTm9wejVYOXM4SCsyeXFOdWpna2NHZmFTeHFTNlBkbWNhcTJGMTYxTDdTR0JDb2w1MVQ5ZwpXQkRObnlqOFprSkQxd2pRNkNDWG4zNDZIMS9YREZjbmhnc2c2UHRjTGh3RC8yS0l3eFVmdzFBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
root@k8s-master01:/data/velero# 

6.3.4 Bind the awsuser account to cluster-admin in the cluster

root@k8s-master01:/data/velero# kubectl create clusterrolebinding awsuser --clusterrole=cluster-admin --user=awsuser
clusterrolebinding.rbac.authorization.k8s.io/awsuser created
root@k8s-master01:/data/velero# kubectl get clusterrolebinding -A|grep awsuser
awsuser                                                ClusterRole/cluster-admin                                          47s
root@k8s-master01:/data/velero# 

6.3.5 Verify the certificate works

root@k8s-master01:/data/velero# kubectl --kubeconfig ./awsuser.kubeconfig get nodes
NAME           STATUS                     ROLES    AGE    VERSION
192.168.0.31   Ready,SchedulingDisabled   master   132d   v1.26.4
192.168.0.32   Ready,SchedulingDisabled   master   132d   v1.26.4
192.168.0.33   Ready,SchedulingDisabled   master   132d   v1.26.4
192.168.0.34   Ready                      node     132d   v1.26.4
192.168.0.35   Ready                      node     132d   v1.26.4
192.168.0.36   Ready                      node     132d   v1.26.4
root@k8s-master01:/data/velero# kubectl --kubeconfig ./awsuser.kubeconfig get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-5456dd947c-pwl2n   1/1     Running   31 (79m ago)   132d
calico-node-4zmb4                          1/1     Running   26 (79m ago)   132d
calico-node-7lc66                          1/1     Running   28 (79m ago)   132d
calico-node-bkhkd                          1/1     Running   28 (13d ago)   132d
calico-node-mw49k                          1/1     Running   28 (79m ago)   132d
calico-node-v726r                          1/1     Running   26 (79m ago)   132d
calico-node-x9r7h                          1/1     Running   28 (79m ago)   132d
coredns-77879dc67d-k9ztn                   1/1     Running   4 (79m ago)    27d
coredns-77879dc67d-qwb48                   1/1     Running   4 (79m ago)    27d
snapshot-controller-0                      1/1     Running   28 (79m ago)   132d
root@k8s-master01:/data/velero# 

The --kubeconfig option points kubectl at the new credentials file; if cluster nodes and pods can be listed normally, the kubeconfig is valid.
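
Since awsuser was bound to cluster-admin in 6.3.4, its permissions can also be confirmed directly; a sketch:

kubectl auth can-i '*' '*' --kubeconfig ./awsuser.kubeconfig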

6.3.6 Create the Velero namespace in the cluster

root@k8s-master01:/data/velero# kubectl create ns velero-system
namespace/velero-system created
root@k8s-master01:/data/velero# kubectl get ns
NAME              STATUS   AGE
argocd            Active   129d
default           Active   132d
kube-node-lease   Active   132d
kube-public       Active   132d
kube-system       Active   132d
magedu            Active   90d
myserver          Active   98d
velero-system     Active   5s
root@k8s-master01:/data/velero#

6.4 Install the Velero server

root@k8s-master01:/data/velero# velero --kubeconfig  ./awsuser.kubeconfig \
>     install \
>     --provider aws \
>     --plugins velero/velero-plugin-for-aws:v1.5.5 \
>     --bucket velerodata  \
>     --secret-file ./velero-auth.txt \
>     --use-volume-snapshots=false \
>     --namespace velero-system \
> --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://192.168.0.42:9000
CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource
CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client
CustomResourceDefinition/backuprepositories.velero.io: created
CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: attempting to create resource client
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: attempting to create resource client
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: attempting to create resource client
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero-system: attempting to create resource
Namespace/velero-system: attempting to create resource client
Namespace/velero-system: already exists, proceeding
Namespace/velero-system: created
ClusterRoleBinding/velero-velero-system: attempting to create resource
ClusterRoleBinding/velero-velero-system: attempting to create resource client
ClusterRoleBinding/velero-velero-system: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: attempting to create resource client
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: attempting to create resource client
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: attempting to create resource client
BackupStorageLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: attempting to create resource client
Deployment/velero: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero-system' to view the status.
root@k8s-master01:/data/velero# 

6.5 Verify the Velero server installation

root@k8s-master01:/data/velero# kubectl get pod -n velero-system
NAME                      READY   STATUS    RESTARTS   AGE
velero-5d675548c4-2dx8d   1/1     Running   0          105s
root@k8s-master01:/data/velero#

Seeing the velero pod Running in the velero-system namespace means the Velero server side is deployed.
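
It is also worth confirming that the backup storage location is reachable; if the MinIO credentials or s3Url were wrong, its phase would show Unavailable. A sketch:

velero backup-location get --kubeconfig ./awsuser.kubeconfig --namespace velero-system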

7. Backing Up Data with Velero

7.1 Back up the default namespace

root@k8s-master01:/data/velero# DATE=`date +%Y%m%d%H%M%S`
root@k8s-master01:/data/velero#  velero backup create default-backup-${DATE} \
> --include-cluster-resources=true \
> --include-namespaces default \
> --kubeconfig=./awsuser.kubeconfig \
> --namespace velero-system
Backup request "default-backup-20230902133242" submitted successfully.
Run `velero backup describe default-backup-20230902133242` or `velero backup logs default-backup-20230902133242` for more details.
root@k8s-master01:/data/velero# 

Verify the backup:

root@k8s-master01:/data/velero# velero backup describe default-backup-20230902133242 --kubeconfig=./awsuser.kubeconfig --namespace velero-system
Name:         default-backup-20230902133242
Namespace:    velero-system
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.26.4
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=26

Phase:  Completed


Namespaces:
  Included:  default
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  included

Label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  1h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2023-09-02 13:33:01 +0000 UTC
Completed:  2023-09-02 13:33:09 +0000 UTC

Expiration:  2023-10-02 13:33:01 +0000 UTC

Total items to be backed up:  288
Items backed up:              288

Velero-Native Snapshots: <none included>
root@k8s-master01:/data/velero# 
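
Backups can also be listed from the CLI instead of the MinIO console; a sketch:

velero backup get --kubeconfig ./awsuser.kubeconfig --namespace velero-system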

Verify the backup data in MinIO:
(Figure: backup objects in the velerodata bucket in the MinIO console)
Delete the pod and verify data recovery:

  • Delete the pod
root@k8s-master01:/data/velero# kubectl get pods 
NAME   READY   STATUS    RESTARTS      AGE
bash   1/1     Running   5 (93m ago)   27d
root@k8s-master01:/data/velero# kubectl delete pod bash -n default 
pod "bash" deleted
root@k8s-master01:/data/velero# kubectl get pods                   
No resources found in default namespace.
root@k8s-master01:/data/velero# 
  • Restore the pod
root@k8s-master01:/data/velero# velero restore create --from-backup default-backup-20230902133242 --wait --kubeconfig=./awsuser.kubeconfig --namespace velero-system
Restore request "default-backup-20230902133242-20230902134421" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
..............................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe default-backup-20230902133242-20230902134421` and `velero restore logs default-backup-20230902133242-20230902134421`.
root@k8s-master01:/data/velero# 
  • Verify the pod
root@k8s-master01:/data/velero# kubectl get pods 
NAME   READY   STATUS    RESTARTS   AGE
bash   1/1     Running   0          77s
root@k8s-master01:/data/velero# kubectl exec -it bash bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@bash ~]# ping www.baidu.com
PING www.a.shifen.com (14.119.104.189) 56(84) bytes of data.
64 bytes from 14.119.104.189 (14.119.104.189): icmp_seq=1 ttl=53 time=42.5 ms
64 bytes from 14.119.104.189 (14.119.104.189): icmp_seq=2 ttl=53 time=42.2 ms
^C
--- www.a.shifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 42.234/42.400/42.567/0.264 ms
[root@bash ~]# 

Back up the myserver namespace:

root@k8s-master01:/data/velero# DATE=`date +%Y%m%d%H%M%S`
root@k8s-master01:/data/velero# velero backup create myserver-ns-backup-${DATE} \
> --include-cluster-resources=true \
> --include-namespaces myserver \
> --kubeconfig=/root/.kube/config \
> --namespace velero-system
Backup request "myserver-ns-backup-20230902134938" submitted successfully.
Run `velero backup describe myserver-ns-backup-20230902134938` or `velero backup logs myserver-ns-backup-20230902134938` for more details.
root@k8s-master01:/data/velero# 

Verify the backup in MinIO:
(Figure: the myserver backup objects in the MinIO console)
Delete the deployment and verify recovery:

root@k8s-master01:/data/velero# kubectl get pods -n myserver 
NAME                                                  READY   STATUS    RESTARTS        AGE
myserver-myapp-deployment-name-6965765b9c-h4kj6       1/1     Running   13 (104m ago)   98d
myserver-myapp-frontend-deployment-6bd57599f4-8zw5s   1/1     Running   13 (104m ago)   98d
myserver-myapp-frontend-deployment-6bd57599f4-j276c   1/1     Running   13 (104m ago)   98d
myserver-myapp-frontend-deployment-6bd57599f4-p76bw   1/1     Running   13 (13d ago)    98d
root@k8s-master01:/data/velero# kubectl get deployments.apps -n myserver 
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
myserver-myapp-deployment-name       1/1     1            1           98d
myserver-myapp-frontend-deployment   3/3     3            3           98d
root@k8s-master01:/data/velero# kubectl delete deployments.apps myserver-myapp-deployment-name -n myserver 
deployment.apps "myserver-myapp-deployment-name" deleted
root@k8s-master01:/data/velero# kubectl get pods -n myserver 
NAME                                                  READY   STATUS    RESTARTS        AGE
myserver-myapp-frontend-deployment-6bd57599f4-8zw5s   1/1     Running   13 (106m ago)   98d
myserver-myapp-frontend-deployment-6bd57599f4-j276c   1/1     Running   13 (106m ago)   98d
myserver-myapp-frontend-deployment-6bd57599f4-p76bw   1/1     Running   13 (13d ago)    98d
root@k8s-master01:/data/velero# velero restore create --from-backup myserver-ns-backup-20230902134938 --wait \
> --kubeconfig=./awsuser.kubeconfig \
> --namespace velero-system
Restore request "myserver-ns-backup-20230902134938-20230902135401" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
...............................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe myserver-ns-backup-20230902134938-20230902135401` and `velero restore logs myserver-ns-backup-20230902134938-20230902135401`.
root@k8s-master01:/data/velero# kubectl get deployments.apps -n myserver 
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
myserver-myapp-deployment-name       0/1     1            0           37s
myserver-myapp-frontend-deployment   3/3     3            3           98d
root@k8s-master01:/data/velero# kubectl get pods -n myserver                 
NAME                                                  READY   STATUS            RESTARTS        AGE
myserver-myapp-deployment-name-6965765b9c-h4kj6       0/1     PodInitializing   0               69s
myserver-myapp-frontend-deployment-6bd57599f4-8zw5s   1/1     Running           13 (108m ago)   98d
myserver-myapp-frontend-deployment-6bd57599f4-j276c   1/1     Running           13 (108m ago)   98d
myserver-myapp-frontend-deployment-6bd57599f4-p76bw   1/1     Running           13 (13d ago)    98d
root@k8s-master01:/data/velero#

7.2 Back up specific resource objects

Back up specific pods or other resources from the given namespaces:

root@k8s-master01:/data/velero# DATE=`date +%Y%m%d%H%M%S`
root@k8s-master01:/data/velero#  velero backup create pod-backup-${DATE} --include-cluster-resources=true \
>  --ordered-resources 'pods=default/bash,magedu/ubuntu1804,magedu/mysql-0;deployments.apps=myserver/myserver-myapp-frontend-deployment,magedu/wordpress-app-deployment;services=myserver/myserver-myapp-service-name,magedu/mysql,magedu/zookeeper' \
>  --namespace velero-system --include-namespaces=myserver,magedu,default
Backup request "pod-backup-20230902141842" submitted successfully.
Run `velero backup describe pod-backup-20230902141842` or `velero backup logs pod-backup-20230902141842` for more details.
root@k8s-master01:/data/velero# 
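
If only part of a backup is needed later, the restore can also be filtered by resource type; a minimal sketch using the --include-resources flag:

velero restore create --from-backup pod-backup-20230902141842 \
  --include-resources deployments.apps,services \
  --kubeconfig ./awsuser.kubeconfig \
  --namespace velero-system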

Delete resources and verify recovery

  • Delete resources
root@k8s-master01:/data/velero# kubectl get deployments.apps -n magedu 
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
magedu-consumer-deployment     3/3     3            3           22d
magedu-dubboadmin-deployment   1/1     1            1           22d
magedu-provider-deployment     3/3     3            3           22d
wordpress-app-deployment       1/1     1            1           13d
zookeeper1                     1/1     1            1           90d
zookeeper2                     1/1     1            1           90d
zookeeper3                     1/1     1            1           90d
root@k8s-master01:/data/velero# kubectl delete deployments.apps wordpress-app-deployment -n magedu 
deployment.apps "wordpress-app-deployment" deleted
root@k8s-master01:/data/velero# kubectl get deployments.apps -n magedu 
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
magedu-consumer-deployment     3/3     3            3           22d
magedu-dubboadmin-deployment   1/1     1            1           22d
magedu-provider-deployment     3/3     3            3           22d
zookeeper1                     1/1     1            1           90d
zookeeper2                     1/1     1            1           90d
zookeeper3                     1/1     1            1           90d
root@k8s-master01:/data/velero# kubectl get svc -n magedu 
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
magedu-consumer-server      NodePort    10.100.208.121   <none>        80:49630/TCP                                   22d
magedu-dubboadmin-service   NodePort    10.100.244.92    <none>        80:31080/TCP                                   22d
magedu-provider-spec        NodePort    10.100.187.168   <none>        80:44873/TCP                                   22d
mysql                       ClusterIP   None             <none>        3306/TCP                                       79d
mysql-0                     ClusterIP   None             <none>        3306/TCP                                       13d
mysql-read                  ClusterIP   10.100.15.127    <none>        3306/TCP                                       79d
redis                       ClusterIP   None             <none>        6379/TCP                                       88d
redis-access                NodePort    10.100.117.185   <none>        6379:36379/TCP                                 88d
wordpress-app-spec          NodePort    10.100.189.214   <none>        80:30031/TCP,443:30033/TCP                     13d
zookeeper                   ClusterIP   10.100.237.95    <none>        2181/TCP                                       90d
zookeeper1                  NodePort    10.100.63.118    <none>        2181:32181/TCP,2888:30541/TCP,3888:31200/TCP   90d
zookeeper2                  NodePort    10.100.199.43    <none>        2181:32182/TCP,2888:32670/TCP,3888:32264/TCP   90d
zookeeper3                  NodePort    10.100.41.9      <none>        2181:32183/TCP,2888:31329/TCP,3888:32546/TCP   90d
root@k8s-master01:/data/velero# kubectl delete svc mysql -n magedu 
service "mysql" deleted
root@k8s-master01:/data/velero# kubectl delete svc zookeeper -n magedu      
service "zookeeper" deleted
root@k8s-master01:/data/velero# kubectl get svc -n magedu 
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
magedu-consumer-server      NodePort    10.100.208.121   <none>        80:49630/TCP                                   22d
magedu-dubboadmin-service   NodePort    10.100.244.92    <none>        80:31080/TCP                                   22d
magedu-provider-spec        NodePort    10.100.187.168   <none>        80:44873/TCP                                   22d
mysql-0                     ClusterIP   None             <none>        3306/TCP                                       13d
mysql-read                  ClusterIP   10.100.15.127    <none>        3306/TCP                                       79d
redis                       ClusterIP   None             <none>        6379/TCP                                       88d
redis-access                NodePort    10.100.117.185   <none>        6379:36379/TCP                                 88d
wordpress-app-spec          NodePort    10.100.189.214   <none>        80:30031/TCP,443:30033/TCP                     13d
zookeeper1                  NodePort    10.100.63.118    <none>        2181:32181/TCP,2888:30541/TCP,3888:31200/TCP   90d
zookeeper2                  NodePort    10.100.199.43    <none>        2181:32182/TCP,2888:32670/TCP,3888:32264/TCP   90d
zookeeper3                  NodePort    10.100.41.9      <none>        2181:32183/TCP,2888:31329/TCP,3888:32546/TCP   90d
root@k8s-master01:/data/velero#
  • Restore the resources (the Deployment and Services deleted above simulate accidental loss):
root@k8s-master01:/data/velero# velero restore create --from-backup pod-backup-20230902141842 --wait \
> --kubeconfig=./awsuser.kubeconfig \
> --namespace velero-system
Restore request "pod-backup-20230902141842-20230902142341" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
............................................
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe pod-backup-20230902141842-20230902142341` and `velero restore logs pod-backup-20230902141842-20230902142341`.
root@k8s-master01:/data/velero# 
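Before checking individual workloads, the restore object itself can be listed; it is stored as a custom resource in the velero-system namespace, so either the velero CLI or kubectl works:

# List all restores and their status via the velero CLI
velero restore get --kubeconfig=./awsuser.kubeconfig --namespace velero-system
# Or query the underlying custom resource directly
kubectl get restores.velero.io -n velero-system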
  • Verify that the corresponding resources have been restored:
root@k8s-master01:/data/velero# kubectl get deployments.apps -n magedu 
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
magedu-consumer-deployment     3/3     3            3           22d
magedu-dubboadmin-deployment   1/1     1            1           22d
magedu-provider-deployment     3/3     3            3           22d
wordpress-app-deployment       1/1     1            1           2m57s
zookeeper1                     1/1     1            1           90d
zookeeper2                     1/1     1            1           90d
zookeeper3                     1/1     1            1           90d
root@k8s-master01:/data/velero# kubectl get pods -n magedu 
NAME                                            READY   STATUS      RESTARTS        AGE
magedu-consumer-deployment-798c7d785b-fp4b9     1/1     Running     3 (140m ago)    22d
magedu-consumer-deployment-798c7d785b-wmv9p     1/1     Running     3 (140m ago)    22d
magedu-consumer-deployment-798c7d785b-zqm74     1/1     Running     3 (13d ago)     22d
magedu-dubboadmin-deployment-798c4dfdd8-kvfvh   1/1     Running     3 (140m ago)    22d
magedu-provider-deployment-6fccc6d9f5-k6z7m     1/1     Running     3 (140m ago)    22d
magedu-provider-deployment-6fccc6d9f5-nl4zd     1/1     Running     3 (140m ago)    22d
magedu-provider-deployment-6fccc6d9f5-p94rb     1/1     Running     3 (140m ago)    22d
mysql-0                                         2/2     Running     12 (140m ago)   79d
mysql-1                                         2/2     Running     12 (140m ago)   79d
mysql-2                                         2/2     Running     12 (140m ago)   79d
redis-0                                         1/1     Running     8 (13d ago)     88d
redis-1                                         1/1     Running     8 (140m ago)    88d
redis-2                                         1/1     Running     8 (140m ago)    88d
redis-3                                         1/1     Running     8 (13d ago)     87d
redis-4                                         1/1     Running     8 (140m ago)    88d
redis-5                                         1/1     Running     8 (140m ago)    88d
ubuntu1804                                      0/1     Completed   0               88d
wordpress-app-deployment-64c956bf9c-6qp8q       2/2     Running     0               3m31s
zookeeper1-675c5477cb-vmwwq                     1/1     Running     10 (13d ago)    90d
zookeeper2-759fb6c6f-7jktr                      1/1     Running     10 (140m ago)   90d
zookeeper3-5c78bb5974-vxpbh                     1/1     Running     10 (140m ago)   90d
root@k8s-master01:/data/velero# kubectl get svc -n magedu 
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
magedu-consumer-server      NodePort    10.100.208.121   <none>        80:49630/TCP                                   22d
magedu-dubboadmin-service   NodePort    10.100.244.92    <none>        80:31080/TCP                                   22d
magedu-provider-spec        NodePort    10.100.187.168   <none>        80:44873/TCP                                   22d
mysql                       ClusterIP   None             <none>        3306/TCP                                       4m6s
mysql-0                     ClusterIP   None             <none>        3306/TCP                                       13d
mysql-read                  ClusterIP   10.100.15.127    <none>        3306/TCP                                       79d
redis                       ClusterIP   None             <none>        6379/TCP                                       88d
redis-access                NodePort    10.100.117.185   <none>        6379:36379/TCP                                 88d
wordpress-app-spec          NodePort    10.100.189.214   <none>        80:30031/TCP,443:30033/TCP                     13d
zookeeper                   ClusterIP   10.100.177.73    <none>        2181/TCP                                       4m6s
zookeeper1                  NodePort    10.100.63.118    <none>        2181:32181/TCP,2888:30541/TCP,3888:31200/TCP   90d
zookeeper2                  NodePort    10.100.199.43    <none>        2181:32182/TCP,2888:32670/TCP,3888:32264/TCP   90d
zookeeper3                  NodePort    10.100.41.9      <none>        2181:32183/TCP,2888:31329/TCP,3888:32546/TCP   90d
root@k8s-master01:/data/velero# 
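By default Velero skips objects that already exist in the cluster, which is why only the previously deleted Deployment and Services reappear (note their fresh AGE values) while everything else is untouched. A restore can also be narrowed to specific resource types; a minimal sketch reusing the backup from above:

# Restore only Deployments and Services from the earlier backup
velero restore create --from-backup pod-backup-20230902141842 \
--include-resources deployments.apps,services \
--kubeconfig=./awsuser.kubeconfig \
--namespace velero-system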

7.3、Back up all namespaces in batch

root@k8s-master01:/data/velero# cat all-ns-backup.sh
#!/bin/bash
# Collect all namespace names, skipping the header line of `kubectl get ns`
NS_NAME=$(kubectl get ns | awk 'NR>1{print $1}')
DATE=$(date +%Y%m%d%H%M%S)
cd /data/velero/
for i in ${NS_NAME};do
  # One backup per namespace; cluster-scoped resources are included as well
  velero backup create ${i}-ns-backup-${DATE} \
  --include-cluster-resources=true \
  --include-namespaces ${i} \
  --kubeconfig=/root/.kube/config \
  --namespace velero-system
done
root@k8s-master01:/data/velero# 
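Backing up each namespace separately, rather than taking one cluster-wide backup, means a single namespace can later be restored on its own without affecting the others. Note that with --include-cluster-resources=true, every per-namespace backup also carries a copy of the cluster-scoped objects.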

Run the script to perform the backups:

root@k8s-master01:/data/velero# bash all-ns-backup.sh
Backup request "default-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe default-ns-backup-20230902143131` or `velero backup logs default-ns-backup-20230902143131` for more details.
Backup request "kube-node-lease-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe kube-node-lease-ns-backup-20230902143131` or `velero backup logs kube-node-lease-ns-backup-20230902143131` for more details.
Backup request "kube-public-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe kube-public-ns-backup-20230902143131` or `velero backup logs kube-public-ns-backup-20230902143131` for more details.
Backup request "kube-system-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe kube-system-ns-backup-20230902143131` or `velero backup logs kube-system-ns-backup-20230902143131` for more details.
Backup request "magedu-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe magedu-ns-backup-20230902143131` or `velero backup logs magedu-ns-backup-20230902143131` for more details.
Backup request "myserver-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe myserver-ns-backup-20230902143131` or `velero backup logs myserver-ns-backup-20230902143131` for more details.
Backup request "velero-system-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe velero-system-ns-backup-20230902143131` or `velero backup logs velero-system-ns-backup-20230902143131` for more details.
root@k8s-master01:/data/velero#

Verify the backups in MinIO:
(MinIO console screenshot: the bucket lists the per-namespace backups created above)
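Besides one-shot backups like the script above, periodic backups can be handled by Velero's built-in schedules rather than an external cronjob; a minimal sketch (the schedule name, cron expression, and 72-hour retention are illustrative assumptions):

# Back up the magedu namespace daily at 02:00, keeping each backup for 72 hours
velero schedule create magedu-daily-backup \
--schedule="0 2 * * *" \
--include-namespaces magedu \
--ttl 72h \
--kubeconfig=/root/.kube/config \
--namespace velero-system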

That wraps up this walkthrough of backing up and restoring etcd data in Kubernetes with Velero and MinIO.
