Server IP | Deployed roles
---|---
192.168.11.100 | kafka, elasticsearch, jaeger-collector, jaeger-ingester, jaeger-agent, jaeger-query, hotrod, spark-dependencies
Jaeger components:
jaeger-client: the Jaeger client library, which implements the OpenTracing API;
jaeger-agent: a local proxy for jaeger-client; the client sends the trace data it collects to the agent, which forwards it to the collector;
jaeger-collector: receives trace data reported by jaeger-client or jaeger-agent, validates it (for example, whether the time range is sane), processes it, and finally writes it to the backend storage;
jaeger-query: a dedicated trace-query service with its own UI;
jaeger-ingester: reads trace data from Kafka and writes it to Jaeger's backend storage, such as Cassandra or Elasticsearch;
spark-dependencies: aggregates spans and generates the service dependency graph.
Data flow: jaeger-client -> jaeger-agent -> jaeger-collector -> kafka -> jaeger-ingester -> elasticsearch <- jaeger-query <- jaeger-ui
1. Install Docker
...
2. Install Kafka
path=/data/kafka
mkdir -p ${path}/{data,etc,log}
chown -R 1001 ${path}
cat > ${path}/etc/kafka_jaas.conf << 'EOF'
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="passwordXXXXX";
};
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="inuser"
password="BitNami"
user_inuser="BitNami"
user_kafka="passwordXXXXX";
org.apache.kafka.common.security.scram.ScramLoginModule required;
};
EOF
cat > ${path}/etc/sasl_config.properties << 'EOF'
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="passwordXXXXX";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
EOF
# Set KAFKA_BROKER_ID, KAFKA_CFG_ADVERTISED_LISTENERS and KAFKA_CFG_CONTROLLER_QUORUM_VOTERS according to your environment
cat > ${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f kafka
docker run -d \
--name kafka \
--restart=always \
--network host \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_HEAP_OPTS="-Xmx512m -Xms512m" \
-e KAFKA_ENABLE_KRAFT=yes \
-e KAFKA_CFG_PROCESS_ROLES=broker,controller \
-e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT \
-e KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093 \
-e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.11.100:9092 \
-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@192.168.11.100:9093 \
-e KAFKA_KRAFT_CLUSTER_ID="Aqvf7RVETX-DInZbNUS2Wg" \
-e KAFKA_CFG_SASL_ENABLED_MECHANISMS="PLAIN" \
-e KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL="PLAIN" \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \
-e KAFKA_LOG_RETENTION_HOURS=24 \
-v `pwd`/etc/kafka_jaas.conf:/opt/bitnami/kafka/config/kafka_jaas.conf \
-v `pwd`/etc/sasl_config.properties:/opt/bitnami/kafka/config/sasl_config.properties \
-v `pwd`/data:/bitnami/kafka/ \
-v /etc/localtime:/etc/localtime \
bitnami/kafka:3.4.0
#-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@192.168.11.100:9093,2@192.168.11.101:9093,3@192.168.11.102:9093 \ # multi-node Kafka cluster configuration
EOF
bash ${path}/start.sh
# Open firewalld ports
firewall-cmd --permanent --add-port=9092/tcp
firewall-cmd --permanent --add-port=9093/tcp
firewall-cmd --reload
docker exec -it kafka bash
# Create a topic
kafka-topics.sh --create --bootstrap-server 192.168.11.100:9092 --topic test --command-config /opt/bitnami/kafka/config/sasl_config.properties
# Produce
kafka-console-producer.sh --bootstrap-server 192.168.11.100:9092 --topic test --command-config /opt/bitnami/kafka/config/sasl_config.properties
# Consume
kafka-console-consumer.sh --bootstrap-server 192.168.11.100:9092 --topic test --command-config /opt/bitnami/kafka/config/sasl_config.properties
# Increase the number of partitions (--replication-factor cannot be combined with --alter)
kafka-topics.sh --bootstrap-server 192.168.11.100:9092 --alter --topic test --partitions 3 --command-config /opt/bitnami/kafka/config/sasl_config.properties
# Describe the topic and its partitions
kafka-topics.sh --describe --bootstrap-server 192.168.11.100:9092 --topic test --command-config /opt/bitnami/kafka/config/sasl_config.properties
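As a quick sanity check of the SASL listener, you can also list the topics the broker knows about from inside the kafka container, reusing the same client properties file:
# Optional: list topics over the SASL listener
kafka-topics.sh --list --bootstrap-server 192.168.11.100:9092 --command-config /opt/bitnami/kafka/config/sasl_config.properties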
3. Install Kowl
path=/data/kowl
mkdir -p ${path}
cat > ${path}/start.sh << 'EOF'
#!/bin/bash
docker rm -f kowl
cd $(dirname $0)
docker run -itd \
--restart=always \
--name kowl \
-v /etc/localtime:/etc/localtime \
-p 19002:8080 \
-e KAFKA_BROKERS="192.168.11.100:9092,192.168.11.101:9092,192.168.11.102:9092" \
-e KAFKA_TLS_ENABLED=false \
-e KAFKA_SASL_ENABLED=true \
-e KAFKA_SASL_USERNAME=kafka \
-e KAFKA_SASL_PASSWORD="passwordXXXXX" \
rsmnarts/kowl:latest
EOF
bash ${path}/start.sh
# Open firewalld ports
firewall-cmd --permanent --add-port=19002/tcp
firewall-cmd --reload
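If the container started correctly, the Kowl UI should respond on the mapped port; a minimal reachability check from the host:
# Optional: confirm Kowl answers on the mapped port
curl -I http://192.168.11.100:19002/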
4. Elasticsearch
4.1 Generate the cluster certificate elastic-certificates.p12 (this step is performed manually inside a temporary container)
mkdir -p /data/elasticsearch/{config,logs,data}/
mkdir -p /data/elasticsearch/config/certs/
chown 1000:root /data/elasticsearch/{config,logs,data}
docker run -it --rm \
-v /data/elasticsearch/config/:/usr/share/elasticsearch/config/ \
elasticsearch:7.17.6 bash
# The following commands are executed manually inside the container
bin/elasticsearch-certutil ca -s --pass '' --days 10000 --out elastic-stack-ca.p12
bin/elasticsearch-certutil cert -s --ca-pass '' --pass '' --days 5000 --ca elastic-stack-ca.p12 --out elastic-certificates.p12
mv elastic-* config/certs
chown -R 1000:root config
exit
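Back on the host, it is worth confirming that the keystores were generated with the expected ownership before continuing:
# Optional: check the generated keystores and their ownership
ls -l /data/elasticsearch/config/certs/
# Expect elastic-stack-ca.p12 and elastic-certificates.p12, owned by uid 1000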
4.2 Prepare elasticsearch.yml
mkdir -p /data/elasticsearch/{config,data}
cat > /data/elasticsearch/config/elasticsearch.yml << 'EOF'
cluster.name: smartgate-cluster
discovery.seed_hosts: 192.168.11.100
cluster.initial_master_nodes: 192.168.11.100
network.host: 192.168.11.100
# Increase the size of the write queue
thread_pool.write.queue_size: 1000
# Lock memory to prevent swapping
bootstrap.memory_lock: true
xpack.license.self_generated.type: basic
xpack.ml.enabled: false
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: "certificate"
xpack.security.transport.ssl.keystore.path: "certs/elastic-certificates.p12"
xpack.security.transport.ssl.truststore.path: "certs/elastic-certificates.p12"
xpack.security.enabled: true
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
#xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
#xpack.security.http.ssl.client_authentication: optional
#xpack.security.authc.realms.pki.pki1.order: 1
node.roles: ['master','data','ingest','remote_cluster_client']
node.name: 192.168.11.100
http.port: 9200
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
http.cors.enabled: true
http.host: "192.168.11.100,127.0.0.1"
transport.host: "192.168.11.100,127.0.0.1"
ingest.geoip.downloader.enabled: false
EOF
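Because network.host is set to a non-loopback address, Elasticsearch runs its production bootstrap checks; on many hosts this means vm.max_map_count has to be raised before the container will start. This is a standard Elasticsearch requirement, shown here only as a reminder:
# Raise vm.max_map_count if the bootstrap check complains, and persist it across reboots
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf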
cat >/data/elasticsearch/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
# Make sure the Docker daemon is running (no-op if it already is)
dockerd --iptables=false >/dev/null 2>&1 &
sleep 1
# If an old elasticsearch container exists, remove it so it can be recreated below
docker start elasticsearch >/dev/null 2>&1
if [ "$?" == "0" ]
then
docker rm elasticsearch -f
fi
sleep 1
docker start elasticsearch >/dev/null 2>&1
if [ "$?" != "0" ]
then
echo "run elasticsearch"
docker run -d \
--restart=always \
--name elasticsearch \
--network host \
--privileged \
--ulimit memlock=-1:-1 \
--ulimit nofile=65536:65536 \
-e ELASTIC_PASSWORD=xxxxxxxx \
-e KIBANA_PASSWORD=xxxxxxxx \
-e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
-v /etc/localtime:/etc/localtime \
-v `pwd`/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v `pwd`/config/certs/:/usr/share/elasticsearch/config/certs \
-v `pwd`/data/:/usr/share/elasticsearch/data/ \
-v `pwd`/logs/:/usr/share/elasticsearch/logs/ \
elasticsearch:7.17.6
fi
EOF
bash /data/elasticsearch/start.sh
4.3 Verify Elasticsearch
curl -u elastic:xxxxxxxx http://192.168.11.100:9200/
{
"name" : "192.168.11.101",
"cluster_name" : "smartgate-cluster",
"cluster_uuid" : "arM00fRrTy-FsqohMaftAA",
"version" : {
"number" : "7.17.6",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "f65e9d338dc1d07b642e14a27f338990148ee5b6",
"build_date" : "2022-08-23T11:08:48.893373482Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
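Beyond the banner above, a quick cluster-health call confirms the node status before wiring Jaeger to it (same credentials as above):
# Optional: check cluster health and list the nodes
curl -u elastic:xxxxxxxx 'http://192.168.11.100:9200/_cluster/health?pretty'
curl -u elastic:xxxxxxxx 'http://192.168.11.100:9200/_cat/nodes?v'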
# Open firewalld ports
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=9300/tcp
firewall-cmd --reload
5. jaeger-collector
path=/data/jaeger-collector
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f jaeger-collector
docker run -d \
--restart=always \
--name jaeger-collector \
-p 9411:9411 \
-p 14250:14250 \
-p 14268:14268 \
-p 14269:14269 \
-e SPAN_STORAGE_TYPE=kafka \
-e KAFKA_PRODUCER_BROKERS="192.168.11.100:9092" \
-e KAFKA_PRODUCER_PLAINTEXT_USERNAME=kafka \
-e KAFKA_PRODUCER_PLAINTEXT_PASSWORD="passwordXXXXX" \
-e KAFKA_PRODUCER_AUTHENTICATION="plaintext" \
-e KAFKA_TOPIC="jaeger-spans" \
-v /etc/localtime:/etc/localtime \
jaegertracing/jaeger-collector:1.42
#-e SPAN_STORAGE_TYPE=elasticsearch \
#-e ES_SERVER_URLS=http://192.168.11.100:9200 \
#-e ES_USERNAME=elastic \
#-e ES_PASSWORD=xxxxxxxx \
EOF
bash ${path}/start.sh
# Open firewalld ports
firewall-cmd --permanent --add-port=9411/tcp
firewall-cmd --permanent --add-port=14250/tcp
firewall-cmd --permanent --add-port=14268/tcp
firewall-cmd --permanent --add-port=14269/tcp
firewall-cmd --reload
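The collector's admin port (14269, published above) serves a health status and Prometheus metrics; a minimal check that the collector is up:
# Optional: collector health and metrics via the admin port
curl http://192.168.11.100:14269/
curl -s http://192.168.11.100:14269/metrics | head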
6. jaeger-ingester
path=/data/jaeger-ingester
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f jaeger-ingester
docker run -d \
--restart=always \
--name jaeger-ingester \
--hostname=ingester \
-e SPAN_STORAGE_TYPE=elasticsearch \
-e ES_ARCHIVE_SERVER_URLS="http://192.168.11.100:9200" \
-e ES_SERVER_URLS="http://192.168.11.100:9200" \
-e ES_USERNAME=elastic \
-e ES_PASSWORD=xxxxxxxx \
-e KAFKA_CONSUMER_BROKERS="192.168.11.100:9092" \
-e KAFKA_CONSUMER_TOPIC="jaeger-spans" \
-e KAFKA_CONSUMER_PLAINTEXT_USERNAME="kafka" \
-e KAFKA_CONSUMER_PLAINTEXT_PASSWORD="passwordXXXXX" \
-e KAFKA_CONSUMER_AUTHENTICATION="plaintext" \
-v /etc/localtime:/etc/localtime \
jaegertracing/jaeger-ingester:1.42
EOF
bash ${path}/start.sh
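Once the ingester is consuming, spans should start to appear as daily jaeger-span-* indices in Elasticsearch; a quick way to confirm this, using the credentials configured above:
# Optional: confirm that Jaeger span indices are being created in Elasticsearch
curl -u elastic:xxxxxxxx 'http://192.168.11.100:9200/_cat/indices/jaeger-*?v'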
7. jaeger-agent (deploy one on every server; running a standalone agent like this in Kubernetes is of limited value)
path=/data/jaeger-agent
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f jaeger-agent
docker run -d \
--restart=always \
--name jaeger-agent \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778/tcp \
-p 5775:5775/udp \
-e REPORTER_GRPC_HOST_PORT=192.168.11.100:14250 \
-e LOG_LEVEL=debug \
-v /etc/localtime:/etc/localtime \
jaegertracing/jaeger-agent:1.42
EOF
bash ${path}/start.sh
# Open firewalld ports (6831, 6832 and 5775 are UDP ports)
firewall-cmd --permanent --add-port=6831/udp
firewall-cmd --permanent --add-port=6832/udp
firewall-cmd --permanent --add-port=5778/tcp
firewall-cmd --permanent --add-port=5775/udp
firewall-cmd --reload
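A simple way to verify the agent is reachable is its HTTP sampling endpoint on port 5778; it should return a sampling-strategy document for whatever service name you pass (the name below is just an example):
# Optional: query the agent's sampling endpoint (the service name is arbitrary)
curl 'http://192.168.11.100:5778/sampling?service=hotrod'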
8. jaeger-query
path=/data/jaeger-query
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f jaeger-query
docker run -d \
--restart=always \
--name=jaeger-query \
-p 16685:16685 \
-p 16686:16686 \
-p 16687:16687 \
-e SPAN_STORAGE_TYPE=elasticsearch \
-e ES_SERVER_URLS=http://192.168.11.100:9200 \
-e ES_USERNAME=elastic \
-e ES_PASSWORD=xxxxxxxx \
-e METRICS_STORAGE_TYPE=prometheus \
-e PROMETHEUS_SERVER_URL=http://192.168.11.100:9090 \
-e PROMETHEUS_TLS_ENABLED=false \
-v /etc/localtime:/etc/localtime \
jaegertracing/jaeger-query:1.42
EOF
bash ${path}/start.sh
# Open firewalld ports
firewall-cmd --permanent --add-port=16686/tcp
firewall-cmd --permanent --add-port=16687/tcp
firewall-cmd --permanent --add-port=16685/tcp
firewall-cmd --reload
Access the query UI:
http://192.168.11.100:16686/search
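Besides the UI, jaeger-query exposes an HTTP API on the same port, which is convenient for scripted checks; for example, listing the services that have reported spans:
# Optional: list the services known to jaeger-query
curl 'http://192.168.11.100:16686/api/services'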
9. jaeger-client (HotROD example): integrating with an application
path=/data/hotrod
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
docker rm -f hotrod
cd `dirname $0`
docker run -d \
--restart=always \
--name=hotrod \
-v /etc/localtime:/etc/localtime \
-p 8081:8080 \
-e OTEL_EXPORTER_JAEGER_AGENT_HOST=192.168.11.100 \
-e OTEL_EXPORTER_JAEGER_AGENT_PORT=6831 \
-e OTEL_EXPORTER_JAEGER_ENDPOINT=http://192.168.11.100:14268/api/traces \
jaegertracing/example-hotrod:1.42 -m prometheus all
EOF
bash ${path}/start.sh
# Open firewalld ports
firewall-cmd --permanent --add-port=8081/tcp
firewall-cmd --reload
# Access HotROD
http://192.168.11.100:8081/
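Clicking a customer in the HotROD UI generates traces; the same can be scripted through its /dispatch endpoint (123 is one of the demo customer IDs):
# Optional: generate a trace by requesting a ride through the HotROD API
curl 'http://192.168.11.100:8081/dispatch?customer=123'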
10. spark-dependencies
# Scheduled job; it can be run once every 24 hours
crontab -e
docker run --rm -it -e STORAGE=elasticsearch -e ES_NODES=http://192.168.11.100:9200 -e ES_TIME_RANGE=24h -e ES_USERNAME=elastic -e ES_PASSWORD=xxxxxxxx -v /etc/localtime:/etc/localtime jaegertracing/spark-dependencies
23/05/27 10:14:12 INFO ElasticsearchDependenciesJob: Running Dependencies job for 2023-05-27T00:00Z, reading from jaeger-span-2023-05-27 index, result storing to jaeger-dependencies-2023-05-27
23/05/27 10:14:14 INFO ElasticsearchDependenciesJob: Done, 7 dependency objects created
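For the scheduled run, a concrete crontab entry could look like the one below (the 01:00 schedule is only an example, and -it is dropped because cron has no TTY):
# Example crontab entry: run spark-dependencies once a day at 01:00
0 1 * * * docker run --rm -e STORAGE=elasticsearch -e ES_NODES=http://192.168.11.100:9200 -e ES_TIME_RANGE=24h -e ES_USERNAME=elastic -e ES_PASSWORD=xxxxxxxx -v /etc/localtime:/etc/localtime jaegertracing/spark-dependencies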
Spark with Elasticsearch over HTTPS (the following steps can be run inside the ES container)
# Fetch the Elasticsearch HTTPS certificate
openssl s_client -connect 192.168.11.100:9200 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >> ca.crt
# Import the Elasticsearch HTTPS certificate into a cacerts keystore
keytool -import -alias elasticsearch -keystore cacerts -storepass "123456" -file ca.crt
# Back up the old keystore
mv $JAVA_HOME/lib/security/cacerts $JAVA_HOME/lib/security/cacerts.bak
# Copy the new cacerts keystore into the Java directory
cp cacerts $JAVA_HOME/lib/security/cacerts
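To confirm the import before running Spark, the alias can be listed back from the keystore that is now in place (same store password as used above):
# Optional: verify that the elasticsearch alias is present in the active keystore
keytool -list -keystore "$JAVA_HOME/lib/security/cacerts" -storepass "123456" -alias elasticsearch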
# Run inside the Docker container
STORAGE=elasticsearch ES_NODES=https://192.168.11.100:9200 ES_USERNAME=elastic ES_PASSWORD=9BPyBVOR6rygdeoOezJDwKF30FAelstic java -jar /app/jaeger-spark-dependencies-0.0.1-SNAPSHOT.jar
23/05/28 03:56:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/05/28 03:56:04 INFO ElasticsearchDependenciesJob: Running Dependencies job for 2023-05-28T00:00Z, reading from jaeger-span-2023-05-28 index, result storing to jaeger-dependencies-2023-05-28
23/05/28 03:56:06 INFO ElasticsearchDependenciesJob: Done, 8 dependency objects created
---
# Run from the host via docker run
docker run --rm -it -e STORAGE=elasticsearch -e ES_NODES=https://192.168.11.100:9200 -e ES_TIME_RANGE=24h -e ES_USERNAME=elastic -e ES_PASSWORD=9BPyBVOR6rygdeoOezJDwKF30FAelstic -e ES_NODES_WAN_ONLY=true -v /data/jaeger/cacerts:/usr/local/openjdk-8/lib/security/cacerts -v /etc/localtime:/etc/localtime jaegertracing/spark-dependencies
23/05/28 04:01:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/05/28 04:01:32 INFO ElasticsearchDependenciesJob: Running Dependencies job for 2023-05-28T00:00Z, reading from jaeger-span-2023-05-28 index, result storing to jaeger-dependencies-2023-05-28
23/05/28 04:01:34 INFO ElasticsearchDependenciesJob: Done, 8 dependency objects created
11. Prometheus
# Create the Prometheus working directories
mkdir /data/prometheus/{data,conf,conf/rules,conf/sd_config} -p
chown -R 65534:65534 /data/prometheus/data
# Prometheus configuration file
cat > /data/prometheus/conf/prometheus.yml << 'EOF'
global:
  scrape_interval: 30s
  evaluation_interval: 30s
  scrape_timeout: 10s

# Load alerting rules
rule_files:
  - "/etc/prometheus/rules/*.rules"

# Alertmanager integration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - 192.168.11.100:9093
      timeout: 10s

# Prometheus self-monitoring
scrape_configs:
  - job_name: prometheus
    metrics_path: '/metrics'   # default
    scheme: 'http'             # default
    scrape_interval: 30s       # overrides the global setting
    static_configs:
      - targets: ['localhost:9090']
        labels:
          instance: prometheus
EOF
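The config above only scrapes Prometheus itself. Since the Jaeger components deployed earlier publish admin ports that expose Prometheus metrics by default (collector on 14269, query on 16687), a scrape job for them can be appended as well; this is an optional sketch, so adjust the targets to your layout:
# Optional: also scrape the Jaeger collector and query metrics endpoints
cat >> /data/prometheus/conf/prometheus.yml << 'EOF'
  - job_name: jaeger
    static_configs:
      - targets: ['192.168.11.100:14269','192.168.11.100:16687']
EOF
If this is appended after Prometheus is already running, it can be picked up with curl -X POST http://192.168.11.100:9090/-/reload (the start script below enables --web.enable-lifecycle).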
# Startup script
cat > /data/prometheus/start.sh << 'EOF'
docker run -d \
--name prometheus \
--restart=always \
-p 9090:9090 \
-v /data/prometheus/conf/prometheus.yml:/etc/prometheus/prometheus.yml \
-v /data/prometheus/conf/rules:/etc/prometheus/rules \
-v /data/prometheus/conf/sd_config:/etc/prometheus/sd_config \
-v /data/prometheus/data:/data/prometheus \
-v /etc/localtime:/etc/localtime:ro \
prom/prometheus:v2.28.0 \
--web.read-timeout=5m \
--config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/data/prometheus \
--web.max-connections=512 \
--storage.tsdb.retention=30d \
--query.timeout=2m \
--web.enable-lifecycle \
--web.listen-address=:9090 \
--web.enable-admin-api
EOF
bash /data/prometheus/start.sh
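Once the container is up, a couple of quick checks confirm that Prometheus is ready and that its targets are being scraped; and if anything other than this host needs to reach it, port 9090 can be opened the same way as for the other services:
# Optional: verify Prometheus readiness and inspect its targets
curl -s http://192.168.11.100:9090/-/ready
curl -s 'http://192.168.11.100:9090/api/v1/targets' | head
# Optional: open the firewalld port if Prometheus must be reachable from other hosts
firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --reload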
This concludes the walkthrough of deploying Jaeger with Elasticsearch and Kafka.