References:
https://www.cnblogs.com/balloon72/p/13177872.html (ElasticSearch and Kibana installation)
https://www.cnblogs.com/fuguang/p/13745336.html (syncing data with monstache)
1. ElasticSearch and Kibana installation
Prepare the configuration files:
mkdir -p /mydata/elasticsearch/config
mkdir -p /mydata/elasticsearch/data
mkdir -p /mydata/kibana/config
mkdir -p /mydata/monstache-conf
echo "http.host: 0.0.0.0" >> /mydata/elasticsearch/config/elasticsearch.yml
echo "http.cors.enabled: true" >> /mydata/elasticsearch/config/elasticsearch.yml
echo "http.cors.allow-origin: \"*\"" >> /mydata/elasticsearch/config/elasticsearch.yml
# Added 2022-5-7: the following two lines enable security authentication
echo "xpack.security.enabled: true" >> /mydata/elasticsearch/config/elasticsearch.yml
echo "xpack.security.transport.ssl.enabled: true" >> /mydata/elasticsearch/config/elasticsearch.yml
chmod 777 /mydata/elasticsearch/config
chmod 777 /mydata/kibana/config
chmod 777 /mydata/elasticsearch/data
chmod 777 /mydata/monstache-conf
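As a quick sanity check before starting any containers, the generated elasticsearch.yml should now contain exactly the five lines written above:
cat /mydata/elasticsearch/config/elasticsearch.yml
# expected output:
# http.host: 0.0.0.0
# http.cors.enabled: true
# http.cors.allow-origin: "*"
# xpack.security.enabled: true
# xpack.security.transport.ssl.enabled: true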
Edit /mydata/kibana/config/kibana.yml:
elasticsearch.hosts: http://elasticsearch:9200
server.host: "0.0.0.0"
server.name: kibana
xpack.monitoring.ui.container.elasticsearch.enabled: true
# Added 2022-5-7: the following two lines enable security authentication
elasticsearch.username: "elastic" # ES account
elasticsearch.password: "******" # ES password
i18n.locale: zh-CN
Edit /mydata/monstache-conf/monstache.config.toml with the following content:
# connection settings
# connect to MongoDB using the following URL
# Modified 2022-5-12: password in the following line
mongo-url = "mongodb://root:******@192.168.3.208:27017,192.168.3.208:27018/nfy-csia?slaveOk=true&write=1&readPreference=secondaryPreferred&connectTimeoutMS=300000&authSource=admin&authMechanism=SCRAM-SHA-1"
# connect to the Elasticsearch REST API at the following node URLs
elasticsearch-urls = ["http://192.168.3.208:9200"]
direct-read-namespaces = ["nfy-csia.capMessage","nfy-csia.vehicleMessage"]
change-stream-namespaces = ["nfy-csia.capMessage","nfy-csia.vehicleMessage"]
# use the following user name for Elasticsearch basic auth
elasticsearch-user = "elastic"
# use the following password for Elasticsearch basic auth
# Modified 2022-5-7: password in the following line
elasticsearch-password = "******"
# use 4 go routines concurrently pushing documents to Elasticsearch
elasticsearch-max-conns = 4
# propagate dropped collections in MongoDB as index deletes in Elasticsearch
dropped-collections = true
# propagate dropped databases in MongoDB as index deletes in Elasticsearch
dropped-databases = true
# do not start processing at the beginning of the MongoDB oplog
# if you set replay to true you may see version conflict messages
# in the log if you had synced previously. This just means that you are replaying old docs which are already
# in Elasticsearch with a newer version. Elasticsearch is preventing the old docs from overwriting new ones.
replay = false
# resume processing from a timestamp saved in a previous run
resume = true
index-as-update = true
# use a custom resume strategy (tokens) instead of the default strategy (timestamps)
# tokens work with MongoDB API 3.6+ while timestamps work only with MongoDB API 4.0+
resume-strategy = 0
# print detailed information including request traces
verbose = true
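With resume = true, monstache records how far it has synced so that a restart continues from the last saved point; by default (not configured above) this state is stored back in MongoDB in a small monstache database. Once monstache has been running for a while, the saved resume point can be inspected from the mongo shell as a rough health check:
// run in the mongo shell; assumes monstache's default config database name
db.getSiblingDB("monstache").monstache.find().pretty()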
Prepare the container configuration
Additions to the docker-compose file (in practice these were appended to the existing docker-compose on the 192.168.3.249 server):
  elasticsearch:
    image: elasticsearch:7.14.2
    restart: always
    container_name: elasticsearch
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 6G
        reservations:
          memory: 2G
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1024m -Xmx4096m"
    volumes:
      - /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /mydata/elasticsearch/data:/usr/share/elasticsearch/data
      - /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - csia
  kibana:
    image: kibana:7.14.2
    restart: always
    container_name: es-kibana
    deploy:
      resources:
        limits:
          cpus: "1"
          memory: 500M
        reservations:
          memory: 100M
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    volumes:
      - /mydata/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    networks:
      - csia
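If these two services are appended to an existing compose file as described, they can be brought up and watched on their own before touching the MongoDB side (service names as in the snippet above):
docker-compose up -d elasticsearch kibana
# follow the ES log until startup completes, then continue with the password setup below
docker logs -f elasticsearch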
# Password setup steps, added 2022-5-7:
After Elasticsearch starts, enter the container to set the passwords:
# enter the container
docker exec -it elasticsearch /bin/bash
# set passwords interactively; several built-in users need one, as shown in the figure below
elasticsearch-setup-passwords interactive
# test access once finished
curl 127.0.0.1:9200 -u elastic:******
As shown in the figure, passwords are set for several built-in users. The one that matters is the first user, elastic, set to ******; the remaining users can simply reuse the same password.
Verify: http://IP:5601/app/kibana
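Besides the root-endpoint test above, the cluster health API gives a quick overview with the new credentials; on a single-node setup a yellow status is normal because replica shards have nowhere to be allocated:
curl -s -u elastic:****** 'http://127.0.0.1:9200/_cluster/health?pretty'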
2. MongoDB sync configuration and monstache installation
Add the following line to the existing MongoDB container definition and restart it:
command: mongod --replSet repset
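For orientation, that line sits at the top level of the existing mongo service definition, roughly as sketched below (the service name, image tag and other fields here are assumptions about the existing setup; only the command line is the actual change):
  mongo:
    image: mongo:4.1.13
    restart: always
    command: mongod --replSet repset   # the newly added line
    ports:
      - 27017:27017
    networks:
      - csia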
Add a second MongoDB container, acting as a replica-set member, to docker-compose:
  mongo-replSet:
    image: mongo:4.1.13
    restart: always
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 4G
        reservations:
          memory: 200M
    logging:
      driver: "json-file"
      options:
        max-size: "500m"
    privileged: true
    ports:
      - 27018:27017
    networks:
      - csia
    command: mongod --replSet repset
Once both mongo containers are running, enter either one and run the command to initialize the replica set:
docker exec -it <mongo-container-name> bash
mongo
rs.initiate({_id:"repset",members:[{_id:0,host:"192.168.3.249:27017"},{_id:1,host:"192.168.3.249:27018"}]})
The command returns ok when it succeeds.
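To confirm the replica set actually formed and elected a primary, rs.status() in the same mongo shell should list both members, one PRIMARY and one SECONDARY:
rs.status()   // the "members" array should show stateStr PRIMARY on one node and SECONDARY on the other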
Add the monstache sync tool container to docker-compose:
  monstache:
    image: rwynn/monstache:rel6
    restart: always
    container_name: monstache
    volumes:
      - /mydata/monstache-conf/monstache.config.toml:/app/monstache.config.toml
    deploy:
      resources:
        limits:
          cpus: "1"
          memory: 500M
        reservations:
          memory: 100M
    command: -f /app/monstache.config.toml
    depends_on:
      - mongo
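Once monstache is up, its log is the quickest way to see whether it connected to both MongoDB and Elasticsearch and began the direct reads of the configured namespaces:
docker logs -f monstache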
After everything is in place, start docker-compose.
Check the indices and document counts in ES: curl -s -XGET --user elastic:<password> 'http://127.0.0.1:9200/_cat/indices/?v'
Under normal conditions, as shown in the figure, the MongoDB-derived indices are visible and their document counts keep growing.
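To double-check that the sync has caught up, the document count on the Elasticsearch side can be compared with the source collection in MongoDB (index and collection names follow this setup; replace the password placeholder with the one set earlier):
# Elasticsearch side
curl -s --user elastic:<password> 'http://127.0.0.1:9200/nfy-csia.capmessage/_count?pretty'
# MongoDB side, from the mongo shell:
db.getSiblingDB("nfy-csia").capMessage.countDocuments()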
Update 2022-5-7: the approach below proved too slow in practice and has been abandoned. Appendix: because the tactical-analysis feature relies on aggregation queries, after initializing ES the cameraId and archivesInfo.archivesId fields needed the following mappings (Kibana -> Dev Tools -> run the two requests):
1. PUT nfy-csia.capmessage/_mapping?pretty
{ "properties": { "cameraId": { "type": "text", "fielddata": true } } }
2. PUT nfy-csia.capmessage/_mapping?pretty
{ "properties": { "archivesInfo.archivesId": { "type": "text", "fielddata": true } } }