Jaeger deployment (Elasticsearch + Kafka)


Server IP / deployed roles:
192.168.11.100: kafka, elasticsearch, jaeger-collector, jaeger-ingester, jaeger-agent, jaeger-query, hotrod, spark-dependencies

Jaeger components:
jaeger-client: the Jaeger client library, which implements the OpenTracing protocol;
jaeger-agent: a proxy for the Jaeger client; the client sends collected trace data to the agent, which forwards it to the collector;
jaeger-collector: receives trace data reported by jaeger-client or jaeger-agent, validates it (for example, whether the time range is legal), processes it internally, and finally writes it to the backend storage;
jaeger-query: a dedicated trace query service with its own UI;
jaeger-ingester: reads data from Kafka and writes it to Jaeger's backend storage, such as Cassandra or Elasticsearch;
spark-dependencies: aggregates spans and generates the dependency graph.

Data flow: jaeger-client -> jaeger-agent -> jaeger-collector -> kafka -> jaeger-ingester -> elasticsearch <- jaeger-query <- jaeger-ui


1. Install Docker
。。。

2. Install Kafka

path=/data/kafka
mkdir -p ${path}/{data,etc,log}
chown -R 1001 ${path}

cat > ${path}/etc/kafka_jaas.conf << 'EOF'
KafkaClient {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="kafka"
   password="passwordXXXXX";
   };
KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="inuser"
   password="BitNami"
   user_inuser="BitNami"
   user_kafka="passwordXXXXX";
   org.apache.kafka.common.security.scram.ScramLoginModule required;
   };
EOF

cat > ${path}/etc/sasl_config.properties << 'EOF'
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="passwordXXXXX";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
EOF


# Set KAFKA_BROKER_ID, KAFKA_CFG_ADVERTISED_LISTENERS and KAFKA_CFG_CONTROLLER_QUORUM_VOTERS according to your environment
cat > ${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f kafka

docker run -d \
--name kafka \
--restart=always \
--network host \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_HEAP_OPTS="-Xmx512m -Xms512m" \
-e KAFKA_ENABLE_KRAFT=yes \
-e KAFKA_CFG_PROCESS_ROLES=broker,controller \
-e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT \
-e KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093 \
-e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.11.100:9092 \
-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@192.168.11.100:9093 \
-e KAFKA_KRAFT_CLUSTER_ID="Aqvf7RVETX-DInZbNUS2Wg" \
-e KAFKA_CFG_SASL_ENABLED_MECHANISMS="PLAIN" \
-e KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL="PLAIN" \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \
-e KAFKA_LOG_RETENTION_HOURS=24 \
-v `pwd`/etc/kafka_jaas.conf:/opt/bitnami/kafka/config/kafka_jaas.conf \
-v `pwd`/etc/sasl_config.properties:/opt/bitnami/kafka/config/sasl_config.properties \
-v `pwd`/data:/bitnami/kafka/ \
-v /etc/localtime:/etc/localtime \
bitnami/kafka:3.4.0
#-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@192.168.11.100:9093,2@192.168.11.101:9093,3@192.168.11.102:9093 \ # Kafka multi-node cluster configuration
EOF

bash ${path}/start.sh

# Open firewalld ports

firewall-cmd --permanent --add-port=9092/tcp
firewall-cmd --permanent --add-port=9093/tcp
firewall-cmd --reload
docker exec -it kafka bash 

# Create a topic
kafka-topics.sh --create --bootstrap-server 192.168.11.100:9092 --topic test  --command-config /opt/bitnami/kafka/config/sasl_config.properties

# Produce messages
kafka-console-producer.sh --bootstrap-server 192.168.11.100:9092 --topic test  --command-config /opt/bitnami/kafka/config/sasl_config.properties

# Consume messages
kafka-console-consumer.sh --bootstrap-server 192.168.11.100:9092 --topic test --command-config /opt/bitnami/kafka/config/sasl_config.properties

# Increase the number of partitions
kafka-topics.sh --bootstrap-server 192.168.11.100:9092 --alter --topic test --partitions 3 --command-config /opt/bitnami/kafka/config/sasl_config.properties

# Describe the topic and its partitions
kafka-topics.sh --describe --bootstrap-server 192.168.11.100:9092 --topic test  --command-config /opt/bitnami/kafka/config/sasl_config.properties
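
The collector and ingester below use the jaeger-spans topic; auto-creation is enabled above, but it can also be pre-created explicitly. A minimal sketch (the partition count of 3 is just an example value):

# Pre-create the topic used by jaeger-collector / jaeger-ingester (partition count is an example)
kafka-topics.sh --create --bootstrap-server 192.168.11.100:9092 --topic jaeger-spans --partitions 3 --replication-factor 1 --command-config /opt/bitnami/kafka/config/sasl_config.properties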

3. Install Kowl

path=/data/kowl
mkdir -p ${path}

cat > ${path}/start.sh << 'EOF'
#!/bin/bash
docker rm -f kowl

cd $(dirname $0)
docker run -itd \
--restart=always \
--name kowl \
-v /etc/localtime:/etc/localtime \
-p 19002:8080 \
-e  KAFKA_BROKERS="192.168.11.100:9092,192.168.11.101:9092,192.168.11.102:9092" \
-e KAFKA_TLS_ENABLED=false \
-e KAFKA_SASL_ENABLED=true \
-e KAFKA_SASL_USERNAME=kafka \
-e KAFKA_SASL_PASSWORD="passwordXXXXX" \
rsmnarts/kowl:latest
EOF

bash ${path}/start.sh

# Open firewalld ports

firewall-cmd --permanent --add-port=19002/tcp
firewall-cmd --reload
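
A quick sanity check that Kowl is reachable on the mapped port (the URL follows the port mapping above; an HTTP 200 is expected once it is up):

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.11.100:19002/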

4. Elasticsearch
4.1 Generate the cluster certificate elastic-certificates.p12 (this step must be run and confirmed manually)

mkdir -p /data/elasticsearch/{config,logs,data}/
mkdir -p /data/elasticsearch/config/certs/

chown 1000:root /data/elasticsearch/{config,logs,data}
docker run -it --rm \
-v /data/elasticsearch/config/:/usr/share/elasticsearch/config/ \
elasticsearch:7.17.6 bash  
# Run the following manually inside the container
bin/elasticsearch-certutil ca  -s --pass '' --days 10000 --out elastic-stack-ca.p12

bin/elasticsearch-certutil cert  -s --ca-pass '' --pass '' --days 5000  --ca elastic-stack-ca.p12  --out  elastic-certificates.p12

mv elastic-* config/certs
chown -R 1000:root config
exit

4.2 Prepare elasticsearch.yml

mkdir -p /data/elasticsearch/{config,data}
cat > /data/elasticsearch/config/elasticsearch.yml << 'EOF'
cluster.name: smartgate-cluster
discovery.seed_hosts: 192.168.11.100
cluster.initial_master_nodes: 192.168.11.100
network.host: 192.168.11.100

# Increase the size of the write queue
thread_pool.write.queue_size: 1000
# Lock memory
bootstrap.memory_lock: true

xpack.license.self_generated.type: basic
xpack.ml.enabled: false
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: "certificate"
xpack.security.transport.ssl.keystore.path: "certs/elastic-certificates.p12"
xpack.security.transport.ssl.truststore.path: "certs/elastic-certificates.p12"
xpack.security.enabled: true

#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
#xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
#xpack.security.http.ssl.client_authentication: optional
#xpack.security.authc.realms.pki.pki1.order: 1

node.roles: ['master','data','ingest','remote_cluster_client']
node.name: 192.168.11.100

http.port: 9200
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
http.cors.enabled: true
http.host: "192.168.11.100,127.0.0.1"
transport.host: "192.168.11.100,127.0.0.1"
ingest.geoip.downloader.enabled: false
EOF

cat >/data/elasticsearch/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
dockerd --iptables=false >/dev/null 2>&1 &
sleep 1
docker start elasticsearch >/dev/null 2>&1
if [ "$?" == "0" ]
then
docker rm elasticsearch -f
fi
sleep 1
docker start elasticsearch >/dev/null 2>&1
if [ "$?" != "0" ]
then

echo "run elasticsearch"

docker run -d \
--restart=always \
--name elasticsearch \
--network host \
--privileged \
--ulimit memlock=-1:-1 \
--ulimit nofile=65536:65536 \
-e ELASTIC_PASSWORD=xxxxxxxx \
-e KIBANA_PASSWORD=xxxxxxxx \
-e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
-v /etc/localtime:/etc/localtime \
-v `pwd`/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v `pwd`/config/certs/:/usr/share/elasticsearch/config/certs \
-v `pwd`/data/:/usr/share/elasticsearch/data/ \
-v `pwd`/logs/:/usr/share/elasticsearch/logs/  \
elasticsearch:7.17.6
fi
EOF

bash /data/elasticsearch/start.sh
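
If the container exits immediately with the "max virtual memory areas vm.max_map_count [65530] is too low" bootstrap check, raise the kernel setting on the host and rerun the script; this host-level tweak is not part of the original script:

# Elasticsearch's bootstrap check typically requires vm.max_map_count >= 262144
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf   # persist across reboots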

4.3 Verify Elasticsearch

curl -u elastic:xxxxxxxx  http://192.168.11.100:9200/
{
  "name" : "192.168.11.101",
  "cluster_name" : "smartgate-cluster",
  "cluster_uuid" : "arM00fRrTy-FsqohMaftAA",
  "version" : {
    "number" : "7.17.6",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "f65e9d338dc1d07b642e14a27f338990148ee5b6",
    "build_date" : "2022-08-23T11:08:48.893373482Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
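
Cluster health can be checked as well; on a single node, a green or yellow status is expected:

curl -u elastic:xxxxxxxx 'http://192.168.11.100:9200/_cluster/health?pretty'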

# Open firewalld ports

firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=9300/tcp
firewall-cmd --reload

5. jaeger-collector

path=/data/jaeger-collector
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f jaeger-collector

docker run -d \
--restart=always \
--name jaeger-collector \
-p 9411:9411 \
-p 14250:14250 \
-p 14268:14268 \
-p 14269:14269 \
-e SPAN_STORAGE_TYPE=kafka \
-e KAFKA_PRODUCER_BROKERS="192.168.11.100:9092" \
-e KAFKA_PRODUCER_PLAINTEXT_USERNAME=kafka \
-e KAFKA_PRODUCER_PLAINTEXT_PASSWORD="passwordXXXXX" \
-e KAFKA_PRODUCER_AUTHENTICATION="plaintext" \
-e KAFKA_TOPIC="jaeger-spans" \
-v /etc/localtime:/etc/localtime \
jaegertracing/jaeger-collector:1.42
#-e SPAN_STORAGE_TYPE=elasticsearch \
#-e ES_SERVER_URLS=http://192.168.11.100:9200 \
#-e ES_USERNAME=elastic \
#-e ES_PASSWORD=xxxxxxxx \
EOF

bash ${path}/start.sh

# Open firewalld ports

firewall-cmd --permanent --add-port=9411/tcp
firewall-cmd --permanent --add-port=14250/tcp
firewall-cmd --permanent --add-port=14268/tcp
firewall-cmd --permanent --add-port=14269/tcp
firewall-cmd --reload
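
The collector's admin port 14269 (published above) serves a health check and Prometheus metrics, which is a quick way to confirm it is running (paths are the Jaeger defaults):

# Health status on "/", metrics on "/metrics"
curl -s http://192.168.11.100:14269/
curl -s http://192.168.11.100:14269/metrics | head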

6. jaeger-ingester

path=/data/jaeger-ingester
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f jaeger-ingester

docker run -d \
--restart=always \
--name jaeger-ingester \
--hostname=ingester \
-e SPAN_STORAGE_TYPE=elasticsearch \
-e ES_ARCHIVE_SERVER_URLS="http://192.168.11.100:9200" \
-e ES_SERVER_URLS="http://192.168.11.100:9200" \
-e ES_USERNAME=elastic \
-e ES_PASSWORD=xxxxxxxx \
-e KAFKA_CONSUMER_BROKERS="192.168.11.100:9092" \
-e KAFKA_CONSUMER_TOPIC="jaeger-spans" \
-e KAFKA_CONSUMER_PLAINTEXT_USERNAME="kafka" \
-e KAFKA_CONSUMER_PLAINTEXT_PASSWORD="passwordXXXXX" \
-e KAFKA_CONSUMER_AUTHENTICATION="plaintext" \
-v /etc/localtime:/etc/localtime \
jaegertracing/jaeger-ingester:1.42
EOF

bash ${path}/start.sh
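
Once spans start flowing, the ingester should create daily jaeger-span-* / jaeger-service-* indices in Elasticsearch; a quick check:

curl -u elastic:xxxxxxxx 'http://192.168.11.100:9200/_cat/indices/jaeger-*?v'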

7. jaeger-agent (deploy one per server; deploying it in Kubernetes is of limited value)

path=/data/jaeger-agent
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f jaeger-agent

docker run -d \
--restart=always \
--name jaeger-agent \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778/tcp \
-p 5775:5775/udp \
-e REPORTER_GRPC_HOST_PORT=192.168.11.100:14250 \
-e LOG_LEVEL=debug \
-v /etc/localtime:/etc/localtime \
jaegertracing/jaeger-agent:1.42
EOF

bash ${path}/start.sh

# Open firewalld ports

firewall-cmd --permanent --add-port=6831/udp
firewall-cmd --permanent --add-port=6832/udp
firewall-cmd --permanent --add-port=5778/tcp
firewall-cmd --permanent --add-port=5775/udp
firewall-cmd --reload
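
The agent's HTTP port 5778 serves sampling strategies to clients, so querying it doubles as a liveness check (the service name here is arbitrary):

curl -s 'http://192.168.11.100:5778/sampling?service=test'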

8. jaeger-query

path=/data/jaeger-query
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
cd `dirname $0`
docker rm -f jaeger-query

docker run -d \
--restart=always \
--name=jaeger-query \
-p 16685:16685 \
-p 16686:16686 \
-p 16687:16687 \
-e SPAN_STORAGE_TYPE=elasticsearch \
-e ES_SERVER_URLS=http://192.168.11.100:9200 \
-e ES_USERNAME=elastic \
-e ES_PASSWORD=xxxxxxxx \
-e METRICS_STORAGE_TYPE=prometheus \
-e PROMETHEUS_SERVER_URL=http://192.168.11.100:9090 \
-e PROMETHEUS_TLS_ENABLED=false \
-v /etc/localtime:/etc/localtime \
jaegertracing/jaeger-query:1.42
EOF

bash ${path}/start.sh

# Open firewalld ports

firewall-cmd --permanent --add-port=16686/tcp
firewall-cmd --permanent --add-port=16687/tcp
firewall-cmd --permanent --add-port=16685/tcp
firewall-cmd --reload

Access the Jaeger query UI:

http://192.168.11.100:16686/search
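
The query service's HTTP API can also be exercised from the command line; /api/services lists the services that have reported spans (empty until traffic is generated):

curl -s http://192.168.11.100:16686/api/services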

9. jaeger-client (HotROD example): integrating with an application

path=/data/hotrod
mkdir ${path} -p
cat >${path}/start.sh << 'EOF'
#!/bin/bash
docker rm -f hotrod

cd `dirname $0`
docker run -d \
--restart=always \
--name=hotrod \
-v /etc/localtime:/etc/localtime \
-p 8081:8080 \
-e OTEL_EXPORTER_JAEGER_AGENT_HOST=192.168.11.100 \
-e OTEL_EXPORTER_JAEGER_AGENT_PORT=6831 \
-e OTEL_EXPORTER_JAEGER_ENDPOINT=http://192.168.11.100:14268/api/traces \
jaegertracing/example-hotrod:1.42 -m prometheus all
EOF

bash ${path}/start.sh

# Open firewalld ports

firewall-cmd --permanent --add-port=8081/tcp
firewall-cmd --reload

# Access HotROD
http://192.168.11.100:8081/
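
Clicking the customer buttons in the HotROD UI generates traces; the same request can be issued from the command line (customer 123 is one of the demo's built-in customers):

curl -s 'http://192.168.11.100:8081/dispatch?customer=123'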


10. spark-dependencies
# Scheduled job; can be run once every 24 hours (see the crontab sketch below)
crontab -e

docker run --rm -it -e STORAGE=elasticsearch -e ES_NODES=http://192.168.11.100:9200 -e ES_TIME_RANGE=24h -e ES_USERNAME=elastic -e ES_PASSWORD=xxxxxxxx -v /etc/localtime:/etc/localtime jaegertracing/spark-dependencies

23/05/27 10:14:12 INFO ElasticsearchDependenciesJob: Running Dependencies job for 2023-05-27T00:00Z, reading from jaeger-span-2023-05-27 index, result storing to jaeger-dependencies-2023-05-27
23/05/27 10:14:14 INFO ElasticsearchDependenciesJob: Done, 7 dependency objects created
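
A minimal crontab entry for the daily run might look like the following (dropping -it since cron has no TTY; the 01:00 schedule and the log path are assumptions):

0 1 * * * docker run --rm -e STORAGE=elasticsearch -e ES_NODES=http://192.168.11.100:9200 -e ES_TIME_RANGE=24h -e ES_USERNAME=elastic -e ES_PASSWORD=xxxxxxxx -v /etc/localtime:/etc/localtime jaegertracing/spark-dependencies >> /var/log/spark-dependencies.log 2>&1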

spark-dependencies with Elasticsearch over HTTPS (these steps can be run inside the ES container)

# Fetch the Elasticsearch HTTPS certificate
openssl s_client -connect 192.168.11.100:9200 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >> ca.crt

# Import the Elasticsearch HTTPS certificate into the cacerts truststore
keytool -import -alias elasticsearch -keystore cacerts -storepass "123456" -file ca.crt
# Back up the old truststore
mv $JAVA_HOME/lib/security/cacerts $JAVA_HOME/lib/security/cacerts.bak
# Copy the cacerts truststore into the Java directory
cp cacerts $JAVA_HOME/lib/security/cacerts
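
To confirm the import succeeded, the keystore entry can be listed (assuming the same store password):

keytool -list -keystore cacerts -storepass "123456" | grep -i elasticsearch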

# Run inside the container
STORAGE=elasticsearch ES_NODES=https://192.168.11.100:9200  ES_USERNAME=elastic ES_PASSWORD=9BPyBVOR6rygdeoOezJDwKF30FAelstic java  -jar /app/jaeger-spark-dependencies-0.0.1-SNAPSHOT.jar    

23/05/28 03:56:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/05/28 03:56:04 INFO ElasticsearchDependenciesJob: Running Dependencies job for 2023-05-28T00:00Z, reading from jaeger-span-2023-05-28 index, result storing to jaeger-dependencies-2023-05-28
23/05/28 03:56:06 INFO ElasticsearchDependenciesJob: Done, 8 dependency objects created

---
# Run on the host system
docker run --rm -it -e STORAGE=elasticsearch -e ES_NODES=https://192.168.11.100:9200 -e ES_TIME_RANGE=24h -e ES_USERNAME=elastic -e ES_PASSWORD=9BPyBVOR6rygdeoOezJDwKF30FAelstic -e ES_NODES_WAN_ONLY=true -v /data/jaeger/cacerts:/usr/local/openjdk-8/lib/security/cacerts -v /etc/localtime:/etc/localtime jaegertracing/spark-dependencies
23/05/28 04:01:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/05/28 04:01:32 INFO ElasticsearchDependenciesJob: Running Dependencies job for 2023-05-28T00:00Z, reading from jaeger-span-2023-05-28 index, result storing to jaeger-dependencies-2023-05-28
23/05/28 04:01:34 INFO ElasticsearchDependenciesJob: Done, 8 dependency objects created



11. Prometheus

# Create the Prometheus working directories
mkdir /data/prometheus/{data,conf,conf/rules,conf/sd_config} -p
chown -R  65534:65534 /data/prometheus/data

# Prometheus configuration file
cat > /data/prometheus/conf/prometheus.yml << 'EOF'
global:
  scrape_interval:     30s
  evaluation_interval: 30s
  scrape_timeout:      10s

# Load alerting rules
rule_files:
  - "/etc/prometheus/rules/*.rules"

# Alertmanager integration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - 192.168.11.100:9093
    timeout: 10s
 
# Prometheus self-monitoring
scrape_configs:
  - job_name: prometheus
    metrics_path: '/metrics' # default
    scheme: 'http'  # default
    scrape_interval: 30s    # overrides the global setting
    static_configs:
      - targets: ['localhost:9090']
        labels:
          instance: prometheus
EOF
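
The Jaeger components expose their own Prometheus metrics on the admin ports published earlier (collector 14269, query 16687); a sketch of an extra job that could be appended under scrape_configs in the config above, where the job name and target list are assumptions:

  - job_name: jaeger
    static_configs:
      - targets: ['192.168.11.100:14269','192.168.11.100:16687']
        labels:
          instance: jaeger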

# Startup script
cat > /data/prometheus/start.sh << 'EOF'
docker run -d \
--name prometheus \
--restart=always \
-p 9090:9090 \
-v /data/prometheus/conf/prometheus.yml:/etc/prometheus/prometheus.yml  \
-v /data/prometheus/conf/rules:/etc/prometheus/rules \
-v /data/prometheus/conf/sd_config:/etc/prometheus/sd_config \
-v /data/prometheus/data:/data/prometheus \
-v /etc/localtime:/etc/localtime:ro \
prom/prometheus:v2.28.0 \
--web.read-timeout=5m \
--config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/data/prometheus \
--web.max-connections=512 \
--storage.tsdb.retention=30d \
--query.timeout=2m \
--web.enable-lifecycle  \
--web.listen-address=:9090  \
--web.enable-admin-api
EOF
bash /data/prometheus/start.sh
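
As with the other services, the Prometheus port can be opened in firewalld and readiness checked (the /-/ready endpoint is standard in Prometheus 2.x):

firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --reload
curl -s http://192.168.11.100:9090/-/ready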
