Elasticsearch data migration with elasticdump


Series table of contents

Chapter 1: Building an ES cluster
Chapter 2: Basic ES cluster operation commands
Chapter 3: Encrypted authentication in ES with the search-guard plugin
Chapter 4: Common ES plugins



Preface

In real production environments, migrating an ES cluster and backing up and restoring its data are unavoidable tasks for ensuring data availability and integrity. This chapter focuses on the elasticdump tool and walks through backup and restore operations; other backup and restore methods are not covered here. The material is drawn from production experience.


Migration method    Use cases
logstash            Migrating full or incremental data where real-time freshness is not critical; cases that need simple filtering of the migrated data via an ES query; cases that need complex filtering or processing of the migrated data; migrations spanning major versions, e.g. from 5.x to 6.x or 7.x
elasticdump         Cases with relatively small data volumes

I. What is elasticdump?

Elasticdump is a free, open-source command-line tool for importing and exporting Elasticsearch data. It provides a convenient way to transfer data between Elasticsearch instances, or to back up and restore data.

With elasticdump, you can export the data in an Elasticsearch index to a JSON file, or import data from a JSON file into an Elasticsearch index. It supports a variety of options and filters for specifying the source and destination, including index patterns, document types, query filters, and more.
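As a minimal sketch of that round trip (the hosts, index name, and file path below are placeholders, not values from this article):

	# Export an index's documents from the source cluster to a JSON file
	elasticdump \
	  --input=http://source-host:9200/my_index \
	  --output=/tmp/my_index_data.json \
	  --type=data

	# Import that JSON file into the target cluster
	elasticdump \
	  --input=/tmp/my_index_data.json \
	  --output=http://target-host:9200/my_index \
	  --type=data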

Main features include:
	Transfers and backs up data between Elasticsearch instances or clusters; data can be copied from one cluster to another.
	Supports multiple data formats, including JSON, NDJSON, CSV, and backup files.
	Usable from the command line or programmatically; the command line provides a convenient interface.
	Supports incremental synchronization, copying only documents that do not yet exist in the target cluster.
	Supports various authentication methods for connecting to Elasticsearch, such as basic auth and Amazon IAM.
	Supports multi-threaded operation to speed up data migration.
	Open source and free; the code is hosted on GitHub.

Shortcomings:
	elasticdump is suited to backing up a single index, or an entire ES cluster whose indices are not too large. If the cluster holds a large volume of data, use the logstash approach for migration and recovery instead.

II. Installing elasticdump

1. Offline installation

Background:
elasticdump depends on a Node.js environment.
	1) Installing Node.js offline (server on an isolated intranet)
	 Download the Node.js package:
		https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.xz
	Upload it to the server:
		rz node-v16.14.0-linux-x64.tar.xz
	Extract and install:
		tar xf node-v16.14.0-linux-x64.tar.xz -C /usr/local/
		mv /usr/local/node-v16.14.0-linux-x64  /usr/local/node
    Configure environment variables and reload:
    	vim /etc/profile
		export NODE_HOME=/usr/local/node
		export PATH=$NODE_HOME/bin:$PATH
	source /etc/profile
    Verify the installation:
    	node -v  # on success this prints the Node.js version
    	npm -v
    2) Installing elasticdump offline
    Download the tool package locally:
    	https://github.com/elasticsearch-dump/elasticsearch-dump/archive/v6.19.0.tar.gz
    Upload it to the server:
    	rz v6.19.0.tar.gz
    Extract and install:
    	tar xf v6.19.0.tar.gz -C /export/server/
    Check the installation:
    	elasticdump --version
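Note: the backup and restore commands in Parts IV and V below invoke the tool by its full path. If you would rather call a bare elasticdump command, one option is to symlink the bundled launcher onto the PATH (a sketch; the extracted directory name below is an assumption that depends on your tarball, so adjust it to your actual path):

	# Hypothetical extracted path; verify with `ls /export/server/`
	ln -s /export/server/elasticsearch-dump-6.19.0/bin/elasticdump /usr/local/bin/elasticdump
	elasticdump --version   # should now resolve from any directory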

2. Online installation

Background:
elasticdump depends on a Node.js environment.
	1) Installing Node.js online (server has internet access)
	 # Download the Node.js package
		wget https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.xz
	 # Extract and install
		tar xf node-v16.14.0-linux-x64.tar.xz -C /usr/local/
		mv /usr/local/node-v16.14.0-linux-x64  /usr/local/node
     # Configure environment variables and reload
    	vim /etc/profile
		export NODE_HOME=/usr/local/node
		export PATH=$NODE_HOME/bin:$PATH
	source /etc/profile
     # Verify the installation
    	node -v  # on success this prints the Node.js version
    	npm -v
     # Or install the Node.js environment directly with yum
     	yum -y install nodejs npm
    2) Installing elasticdump online
    # Set the npm registry, otherwise installation will be slow
	npm config set registry https://registry.npm.taobao.org/
	# Install globally
    npm install elasticdump -g
    # Check the installation
    elasticdump --version
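To keep the online install on the same release as the offline tarball above, npm can also pin a specific version (the version tag here assumes you want to match v6.19.0):

	npm install elasticdump@6.19.0 -g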

III. elasticdump parameters

	# Run elasticdump --help to see all parameters
	--input: source location, an Elasticsearch instance or a JSON file;
	--output: destination location, an Elasticsearch instance or a JSON file;
	--type: the kind of data to operate on, including index, alias, template, data, analyzer, etc.;
	--searchBody: when the input is an Elasticsearch instance, a JSON object to use as the query;
	--limit: how many documents to move per batch. The default is 100 documents per request, which is slow for large datasets; adjust this value to your needs.
	--input-index: index name on the source Elasticsearch instance;
	--ignore-errors: skip documents that error out and continue exporting the rest;
	--scrollTime: how long the scroll context is held open (default: 10m);
	--timeout: request timeout in milliseconds (default: 30 seconds);
	--support-big-int: support big integer numbers;
	--big-int-fields: comma-separated list of fields to check for big-int support (default: '');
	--bulk-limit: document count limit for bulk operations (default: 1000).
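For example, a filtered export sketch combining --searchBody with a larger --limit (host, index, field name, and output path are placeholder assumptions):

	elasticdump \
	  --input=http://source-host:9200/my_index \
	  --output=/export/my_index_filtered.json \
	  --type=data \
	  --limit=1000 \
	  --searchBody='{"query": {"term": {"status": "active"}}}'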
Parameters as documented upstream:
elasticdump: Import and export tools for elasticsearch
version: %%version%%

Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]

Core options
--------------------
--input
                    Source location (required)

--input-index
                    Source index and type
                    (default: all, example: index/type)

--output
                    Destination location (required)

--output-index
                    Destination index and type
                    (default: all, example: index/type)


Options
--------------------
--big-int-fields
                    Specifies a comma-separated list of fields that should be checked for big-int support
                    (default '')

--bulkAction
                    Sets the operation type to be used when preparing the request body to be sent to elastic search.
                    For more info - https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
                    (default: index, options: [index, update, delete, create])

--ca, --input-ca, --output-ca
                    CA certificate. Use --ca if source and destination are identical.
                    Otherwise, use the one prefixed with --input or --output as needed.

--cert, --input-cert, --output-cert
                    Client certificate file. Use --cert if source and destination are identical.
                    Otherwise, use the one prefixed with --input or --output as needed.

--csvConfigs
                    Set all fast-csv configurations
                    An escaped JSON string or file can be supplied. File location must be prefixed with the @ symbol
                    (default: null)

--csvCustomHeaders  A comma-separated list of values that will be used as headers for your data. This param must
                    be used in conjunction with `csvRenameHeaders`
                    (default : null)

--csvDelimiter
                    The delimiter that will separate columns.
                    (default : ',')

--csvFirstRowAsHeaders
                    If set to true the first row will be treated as the headers.
                    (default : true)

--csvHandleNestedData
                    Set to true to handle nested JSON/CSV data.
                    NB: This is a very opinionated implementation!
                    (default : false)

--csvIdColumn
                    Name of the column to extract the record identifier (id) from
                    When exporting to CSV this column can be used to override the default id (@id) column name
                    (default : null)

--csvIgnoreAutoColumns
                    Set to true to prevent the following columns @id, @index, @type from being written to the output file
                    (default : false)

--csvIgnoreEmpty
                    Set to true to ignore empty rows.
                    (default : false)

--csvIncludeEndRowDelimiter
                    Set to true to include a row delimiter at the end of the csv
                    (default : false)

--csvIndexColumn
                    Name of the column to extract the record index from
                    When exporting to CSV this column can be used to override the default index (@index) column name
                    (default : null)

--csvLTrim
                    Set to true to left trim all columns.
                    (default : false)

--csvMaxRows
                    If number is > 0 then only the specified number of rows will be parsed.(e.g. 100 would return the first 100 rows of data)
                    (default : 0)

--csvRTrim
                    Set to true to right trim all columns.
                    (default : false)

--csvRenameHeaders
                    If you want the first line of the file to be removed and replaced by the one provided in the `csvCustomHeaders` option
                    (default : true)

--csvSkipLines
                    If number is > 0 the specified number of lines will be skipped.
                    (default : 0)

--csvSkipRows
                    If number is > 0 then the specified number of parsed rows will be skipped
                    NB:  (If the first row is treated as headers, they aren't a part of the count)
                    (default : 0)

--csvTrim
                    Set to true to trim all white space from columns.
                    (default : false)

--csvTypeColumn
                    Name of the column to extract the record type from
                    When exporting to CSV this column can be used to override the default type (@type) column name
                    (default : null)

--csvWriteHeaders   Determines if headers should be written to the csv file.
                    (default : true)

--customBackoff
                    Activate custom customBackoff function. (s3)

--debug
                    Display the elasticsearch commands being used
                    (default: false)

--delete
                    Delete documents one-by-one from the input as they are
                    moved.  Will not delete the source index
                    (default: false)

--delete-with-routing
                    Passes the routing query-param to the delete function
                    used to route operations to a specific shard.
                    (default: false)

--esCompress
                    if true, add an Accept-Encoding header to request compressed content encodings from the server (if not already present)
                    and decode supported content encodings in the response.
                    Note: Automatic decoding of the response content is performed on the body data returned through request
                    (both through the request stream and passed to the callback function) but is not performed on the response stream
                    (available from the response event) which is the unmodified http.IncomingMessage object which may contain compressed data.
                    See example below.

--fileSize
                    supports file splitting.  This value must be a string supported by the **bytes** module.
                    The following abbreviations must be used to signify size in terms of units
                    b for bytes
                    kb for kilobytes
                    mb for megabytes
                    gb for gigabytes
                    tb for terabytes
                    e.g. 10mb / 1gb / 1tb
                    Partitioning helps to alleviate overflow/out of memory exceptions by efficiently segmenting files
                    into smaller chunks that then can be merged if needs be.

--filterSystemTemplates
                    Whether to remove metrics-*-* and logs-*-* system templates
                    (default: true)

--force-os-version
                    Forces the OpenSearch version used by elasticsearch-dump.
                    (default: 7.10.2)

--fsCompress
                    gzip data before sending output to file.
                    On import the command is used to inflate a gzipped file

--handleVersion
                    Tells the elasticsearch transport to handle the `_version` field if present in the dataset
                    (default : false)

--headers
                    Add custom headers to Elasticsearch requests (helpful when
                    your Elasticsearch instance sits behind a proxy)
                    (default: '{"User-Agent": "elasticdump"}')
                    Type/direction-based headers are supported, i.e. input-headers/output-headers
                    (these will only be added based on the current flow type input/output)

--help
                    This page

--ignore-errors
                    Will continue the read/write loop on write error
                    (default: false)

--ignore-es-write-errors
                    Will continue the read/write loop on a write error from elasticsearch
                    (default: true)

--inputSocksPort, --outputSocksPort
                    Socks5 host port

--inputSocksProxy, --outputSocksProxy
                    Socks5 host address

--inputTransport
                    Provide a custom js file to use as the input transport

--key, --input-key, --output-key
                    Private key file. Use --key if source and destination are identical.
                    Otherwise, use the one prefixed with --input or --output as needed.

--limit
                    How many objects to move in batch per operation
                    limit is approximate for file streams
                    (default: 100)

--maxRows
                    supports file splitting.  Files are split by the number of rows specified

--maxSockets
                    How many simultaneous HTTP requests can the process make?
                    (default:
                      5 [node <= v0.10.x] /
                      Infinity [node >= v0.11.x] )

--noRefresh
                    Disable input index refresh.
                    Positive:
                      1. Much increased index speed
                      2. Much less hardware requirements
                    Negative:
                      1. Recently added data may not be indexed
                    Recommended using with big data indexing,
                    where speed and system health is a higher priority
                    than recently added data.

--offset
                    Integer containing the number of rows you wish to skip
                    ahead from the input transport.  When importing a large
                    index, things can go wrong, be it connectivity, crashes,
                    someone forgets to `screen`, etc.  This allows you
                    to start the dump again from the last known line written
                    (as logged by the `offset` in the output).  Please be
                    advised that since no sorting is specified when the
                    dump is initially created, there's no real way to
                    guarantee that the skipped rows have already been
                    written/parsed.  This is more of an option for when
                    you want to get as much data as possible in the index
                    without concern for losing some rows in the process,
                    similar to the `timeout` option.
                    (default: 0)

--outputTransport
                    Provide a custom js file to use as the output transport

--overwrite
                    Overwrite output file if it exists
                    (default: false)

--params
                    Add custom parameters to the Elasticsearch request URI. Helpful when you, for example,
                    want to use elasticsearch preference
                    --input-params is a specific params extension that can be used when fetching data with the scroll api
                    --output-params is a specific params extension that can be used when indexing data with the bulk index api
                    NB : These were added to avoid param pollution problems which occur when an input param is used in an output source
                    (default: null)

--parseExtraFields
                    Comma-separated list of meta-fields to be parsed

--pass, --input-pass, --output-pass
                    Pass phrase for the private key. Use --pass if source and destination are identical.
                    Otherwise, use the one prefixed with --input or --output as needed.

--quiet
                    Suppress all messages except for errors
                    (default: false)

--retryAttempts
                    Integer indicating the number of times a request should be automatically re-attempted before failing
                    when a connection fails with one of the following errors `ECONNRESET`, `ENOTFOUND`, `ESOCKETTIMEDOUT`,
                    `ETIMEDOUT`, `ECONNREFUSED`, `EHOSTUNREACH`, `EPIPE`, `EAI_AGAIN`
                    (default: 0)

--retryDelay
                    Integer indicating the back-off/break period between retry attempts (milliseconds)
                    (default : 5000)

--retryDelayBase
                    The base number of milliseconds to use in the exponential backoff for operation retries. (s3)

--scroll-with-post
                    Use a HTTP POST method to perform scrolling instead of the default GET
                    (default: false)

--scrollId
                    The last scroll Id returned from elasticsearch.
                    This will allow dumps to be resumed using the last scroll Id if
                    `scrollTime` has not expired.

--scrollTime
                    Time the nodes will hold the requested search in order.
                    (default: 10m)

--searchBody
                    Perform a partial extract based on search results
                    when ES is the input, default values are
                      if ES > 5
                        `'{"query": { "match_all": {} }, "stored_fields": ["*"], "_source": true }'`
                      else
                        `'{"query": { "match_all": {} }, "fields": ["*"], "_source": true }'`
                    [As of 6.68.0] If the searchBody is preceded by a @ symbol, elasticdump will perform a file lookup
                    in the location specified. NB: File must contain valid JSON

--searchBodyTemplate
                    A method/function which can be called to the searchBody
                        doc.searchBody = { query: { match_all: {} }, stored_fields: [], _source: true };
                    May be used multiple times.
                    Additionally, searchBodyTemplate may be performed by a module. See [searchBody Template](#search-template) below.

--searchWithTemplate
                    Enable to use Search Template when using --searchBody
                    If using Search Template then searchBody has to consist of "id" field and "params" objects
                    If "size" field is defined within Search Template, it will be overridden by --size parameter
                    See https://www.elastic.co/guide/en/elasticsearch/reference/current/search-template.html for
                    further information
                    (default: false)

--size
                    How many objects to retrieve
                    (default: -1 -> no limit)

--skip-existing
                    Skips resource_already_exists_exception when enabled and exit with success
                    (default: false)

--sourceOnly
                    Output only the json contained within the document _source
                    Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
                    sourceOnly: {SOURCE}
                    (default: false)

--support-big-int
                    Support big integer numbers

--templateRegex
                    Regex used to filter templates before passing to the output transport
                    (default: ((metrics|logs|\..+)(-.+)?))

--timeout
                    Integer containing the number of milliseconds to wait for
                    a request to respond before aborting the request. Passed
                    directly to the request library. Mostly used when you don't
                    care too much if you lose some data when importing
                    but would rather have speed.

--tlsAuth
                    Enable TLS X509 client authentication

--toLog
                    When using a custom outputTransport, should log lines
                    be appended to the output stream?
                    (default: true, except for `$`)

--transform
                    A method/function which can be called to modify documents
                    before writing to a destination. A global variable 'doc'
                    is available.
                    Example script for computing a new field 'f2' as doubled
                    value of field 'f1':
                        doc._source["f2"] = doc._source.f1 * 2;
                    May be used multiple times.
                    Additionally, transform may be performed by a module. See [Module Transform](#module-transform) below.

--type
                    What are we exporting?
                    (default: data, options: [index, settings, analyzer, data, mapping, policy, alias, template, component_template, index_template])

--versionType
                    Elasticsearch versioning types. Should be `internal`, `external`, `external_gte`, `force`.
                    NB : Type validation is handled by the bulk endpoint and not by elasticsearch-dump


AWS specific options
--------------------
--awsAccessKeyId and --awsSecretAccessKey
                    When using Amazon Elasticsearch Service protected by
                    AWS Identity and Access Management (IAM), provide
                    your Access Key ID and Secret Access Key.
                    --sessionToken can also be optionally provided if using temporary credentials

--awsChain
                    Use [standard](https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/)
                    location and ordering for resolving credentials including environment variables,
                    config files, EC2 and ECS metadata locations _Recommended option for use with AWS_

--awsIniFileName
                    Override the default aws ini file name when using --awsIniFileProfile
                    Filename is relative to ~/.aws/
                    (default: config)

--awsIniFileProfile
                    Alternative to --awsAccessKeyId and --awsSecretAccessKey,
                    loads credentials from a specified profile in aws ini file.
                    For greater flexibility, consider using --awsChain
                    and setting AWS_PROFILE and AWS_CONFIG_FILE
                    environment variables to override defaults if needed

--awsRegion
                    Sets the AWS region that the signature will be generated for
                    (default: calculated from hostname or host)

--awsService
                    Sets the AWS service that the signature will be generated for
                    (default: calculated from hostname or host)

--awsUrlRegex
                    Overrides the default regular expression that is used to validate AWS urls that should be signed
                    (default: ^https?:\/\/.*\.amazonaws\.com.*$)

--s3ACL
                    S3 ACL: private | public-read | public-read-write | authenticated-read | aws-exec-read |
                    bucket-owner-read | bucket-owner-full-control [default private]

--s3AccessKeyId
                    AWS access key ID

--s3Compress
                    gzip data before sending to s3

--s3Configs
                    Set all s3 constructor configurations
                    An escaped JSON string or file can be supplied. File location must be prefixed with the @ symbol
                    (default: null)

--s3Endpoint
                    AWS endpoint that can be used for AWS compatible backends such as
                    OpenStack Swift and OpenStack Ceph

--s3ForcePathStyle
                    Force path style URLs for S3 objects [default false]

--s3Options
                    Set all s3 parameters shown here https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createMultipartUpload-property
                    An escaped JSON string or file can be supplied. File location must be prefixed with the @ symbol
                    (default: null)

--s3Region
                    AWS region

--s3SSEKMSKeyId
                    KMS Id to be used with aws:kms uploads

--s3SSLEnabled
                    Use SSL to connect to AWS [default true]

--s3SecretAccessKey
                    AWS secret access key

--s3ServerSideEncryption
                    Enables encrypted uploads

--s3StorageClass
                    Set the Storage Class used for s3
                    (default: STANDARD)
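Putting two of the options above to work: for a large index, --fileSize and --fsCompress can be combined so the export is split into gzipped chunks instead of one huge JSON file. A sketch (host, index, and output path are placeholders):

	elasticdump \
	  --input=http://source-host:9200/big_index \
	  --output=/export/backup/big_index_data.json \
	  --type=data \
	  --limit=10000 \
	  --fileSize=1gb \
	  --fsCompress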

IV. Backing up data with elasticdump

	1) Backing up a single index
		If the ES cluster is configured with custom analyzers, export the analyzers first.
		# Take care when exporting analyzers: they can only be exported per index, not all at once; exporting everything at once fails with an index-does-not-exist error.
		/export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input http://username:password@IP:9200/index_name --output /export/analyzer_file.json --type=analyzer --limit=10000    # --type=analyzer exports the analyzers; --limit caps the batch size
		
		Export the mapping:
		/export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input http://username:password@IP:9200/index_name --output /export/mapping_file.json --type=mapping --limit=10000    # export the mapping
		
		Export the data:
	 	/export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input http://username:password@IP:9200/index_name --output /export/data_file.json --type=data --limit=10000    # export the documents
	2) Backing up multiple indices (provided the total index data does not exceed 1 GB)
		Check the size of all indices, as shown below:
			curl -X GET http://ip:9200/_cat/indices?v -uusername:password
health  status index  uuid  pri  rep docs.count docs.deleted store.size pri.store.size
green    open  test1  xxx   1     1   100104      0            508mb      508mb
	 index: index name
	 docs.count: total number of documents in the index
	 store.size: disk space used by the index
	 pri.store.size: disk space used by the primary shards

		Use awk to extract the index names and save them to unidom.txt:
		curl -X GET http://ip:9200/_cat/indices?v -uusername:password | grep -v "status" | awk '{print $3}' > unidom.txt

		Write a script to back up multiple indices; the full script is as follows:
		#!/bin/bash
		for item in `cat /root/unidom.txt`
		do
		  /export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input http://icos:2y5CJH9S660ao70r3@10.20.0.8:9200/$item --output /export/unidom/analyzer/${item}_analyzer.json --type=analyzer
		  /export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input http://icos:2y5CJH9S660ao70r3@10.20.0.8:9200/$item --output /export/unidom/mapping/${item}_mapping.json --type=mapping
		  /export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input http://icos:2y5CJH9S660ao70r3@10.20.0.8:9200/$item --output /export/unidom/data/${item}_data.json --type=data
		done

		Run the script above and write its output to a log:
		nohup ./script.sh >>backup.log 2>&1 &   # run script.sh with stderr (2) and stdout (1) both appended to backup.log
		When the script finishes, check the end of the log for "success" messages to confirm the backup completed; a more defensive variant of the loop is sketched below.
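A hardened sketch of the same loop (same paths as above, with the ES URL reduced to a placeholder; the per-command exit-status check is an addition, not part of the original script):

	#!/bin/bash
	# Back up analyzer, mapping, and data for every index listed in /root/unidom.txt,
	# logging and aborting if any export fails.
	ES_URL="http://username:password@IP:9200"   # placeholder credentials/host
	DUMP=/export/server/elasticdump/elasticsearch-dump/bin/elasticdump
	while read -r item; do
	  for type in analyzer mapping data; do
	    "$DUMP" --input "${ES_URL}/${item}" \
	            --output "/export/unidom/${type}/${item}_${type}.json" \
	            --type="${type}" || { echo "FAILED: ${item} (${type})"; exit 1; }
	  done
	done < /root/unidom.txt
	echo "success"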

V. Restoring data with elasticdump

	1) Check whether the index to import already exists in the new ES cluster; if not, create it first:
		curl -X PUT http://IP:9200/index_name -uusername:password
	2) Import the analyzers first:
	#!/bin/bash
	# `ls /export/backup_es/analyzer | awk -F'_' '{print $1}'` extracts the index name from each .json backup file name
	for item in `ls /export/backup_es/analyzer |awk -F'_' '{print $1}'`
	do
	/export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input /export/backup_es/analyzer/${item}_analyzer.json --output http://username:password@IP:9200/$item --type=analyzer
	done
	Run the script above and write its output to a log:
		nohup ./analyzer_in.sh >>/analyzer.log 2>&1 &   # run analyzer_in.sh with stderr (2) and stdout (1) both appended to /analyzer.log
	3) Import the mapping:
	#!/bin/bash
	for item in `ls /export/backup_es/mapping |awk -F'_' '{print $1}'`
	do
	  /export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input /export/backup_es/mapping/${item}_mapping.json --output http://username:password@IP:9200/$item --type=mapping
	done
	Run the script above and write its output to a log:
		nohup ./mapping_in.sh >>mapping.log 2>&1 &   # run mapping_in.sh with stderr (2) and stdout (1) both appended to mapping.log
	4) Import the data:
	#!/bin/bash
	for item in `ls /export/backup_es/data |awk -F'_' '{print $1}'`
	do
	   /export/server/elasticdump/elasticsearch-dump/bin/elasticdump --input /export/backup_es/data/${item}_data.json --output http://username:password@IP:9200/$item --type=data
	done
	Run the script above and write its output to a log:
		nohup ./data_in.sh >>data.log 2>&1 &   # run data_in.sh with stderr (2) and stdout (1) both appended to data.log
	5) After the import completes, verify the sizes:
		curl -X GET http://ip:9200/_cat/indices?v -uusername:password
		Check that the index names, document counts, index disk usage, and primary-shard disk usage match the values queried from the source ES cluster.
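For a per-index spot check, comparing document counts directly is often quicker than eyeballing the _cat output (hosts, index name, and credentials below are placeholders):

	# Document count on the source cluster
	curl -s -uusername:password "http://SOURCE_IP:9200/index_name/_count?pretty"
	# Document count on the target cluster; the "count" values should match
	curl -s -uusername:password "http://TARGET_IP:9200/index_name/_count?pretty"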
