K8s Log Collection: Approaches and Hands-On Practice


K8s log collection approaches

(Diagram: the three k8s log collection approaches)
Pros and cons of the three approaches:
(Diagram: comparison of the three approaches)
Log collection environment:
(Diagram: the lab environment)
K8s log collection architecture:
(Diagram: overall collection architecture)
Packages and software used for k8s log collection:

Link: https://pan.baidu.com/s/1G8XB6dKP8nmgcGNdup_j6g?pwd=vpwu
Extraction code: vpwu
1. Elasticsearch installation and configuration
1.1 Install Elasticsearch
IP: 192.168.75.170
rpm -ivh elasticsearch-7.6.2-x86_64.rpm
1.2 Configure Elasticsearch
[root@es es]# cat /etc/elasticsearch/elasticsearch.yml| grep -v "#"
cluster.name: log-cluster1
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.75.170
http.port: 9200
discovery.seed_hosts: ["192.168.75.170"]
cluster.initial_master_nodes: ["node-1"]
action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"
1.3 Start Elasticsearch
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
systemctl status elasticsearch
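If the service came up cleanly, the REST API answers on port 9200 (the address matches the network.host setting above). A quick smoke test:

curl http://192.168.75.170:9200    # returns the cluster name and version info
curl http://192.168.75.170:9200/_cluster/health?pretty    # expect status green (or yellow on a single node if indices have replicas)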
2. Kibana installation and configuration
2.1 Install Kibana
IP: 192.168.75.171
rpm -ivh kibana-7.6.2-x86_64.rpm
2.2 Configure Kibana
[root@kinaba ~]# cat /etc/kibana/kibana.yml | grep -Ev "^#|^$"
server.port: 5601
server.host: "192.168.75.171"
server.name: "kibana"
elasticsearch.hosts: ["http://192.168.75.170:9200"]
2.3 Start Kibana
systemctl enable kibana
systemctl start kibana
systemctl status kibana
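Kibana can take a minute to start; once it is up, its status API should answer:

curl http://192.168.75.171:5601/api/status    # expect HTTP 200 once Kibana has connected to ES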
3. ZooKeeper installation and configuration
3.1 Install ZooKeeper
IP: 192.168.75.173
mkdir /usr/local/java
tar xf jdk-8u171-linux-x64.tar.gz -C /usr/local/java/

vim /etc/profile	# set environment variables: append the lines below
export JAVA_HOME=/usr/local/java/jdk1.8.0_171
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

source /etc/profile
java -version

Download the installation package (download page: https://zookeeper.apache.org/releases.html).

  wget https://dlcdn.apache.org/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
  mkdir /apps
  tar xf apache-zookeeper-3.6.3-bin.tar.gz -C /apps/ && cd /apps
  ln -s apache-zookeeper-3.6.3-bin zookeeper

Modify the ZooKeeper configuration:

mkdir -p /data/zookeeper/{data,logs}
cd /apps/zookeeper/
cp conf/zoo_sample.cfg conf/zoo.cfg
cat conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.0=192.168.75.173:2288:3388
echo 0 >/data/zookeeper/data/myid # myid is 0, 1, 2 on the three nodes respectively
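This walkthrough runs a single node, so zoo.cfg lists only server.0. For a real three-node ensemble, the server list would look like the sketch below (192.168.75.174/175 are hypothetical placeholders for the other two nodes), with the matching myid written on each host:

# append to conf/zoo.cfg on every node (placeholder IPs for nodes 1 and 2)
server.0=192.168.75.173:2288:3388
server.1=192.168.75.174:2288:3388
server.2=192.168.75.175:2288:3388
# then on each node write its own id, e.g. on the second node:
echo 1 > /data/zookeeper/data/myid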
3.2 Start ZooKeeper
cat /etc/systemd/system/zookeeper.service
[Unit]
Description=zookeeper.service
After=network.target

[Service]
Type=forking
ExecStart=/apps/zookeeper/bin/zkServer.sh start
ExecStop=/apps/zookeeper/bin/zkServer.sh stop
ExecReload=/apps/zookeeper/bin/zkServer.sh restart

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl restart zookeeper
systemctl status zookeeper
3.3 Check ZooKeeper status
[root@kafka ~]# /apps/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /apps/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: standalone
4. Kafka installation and configuration
4.1 Install Kafka

Kafka is deployed on the same host as ZooKeeper. Download the installation package (download page: https://kafka.apache.org/downloads).

wget https://archive.apache.org/dist/kafka/3.2.1/kafka_2.12-3.2.1.tgz
tar xf kafka_2.12-3.2.1.tgz -C /apps/ && cd /apps
ln -s kafka_2.12-3.2.1 kafka
4.2 Configure Kafka
mkdir -p /data/kafka/kafka-logs
cd /apps/kafka
vim config/server.properties
broker.id=0	# unique broker id; 0, 1, 2 on the three nodes respectively
listeners=PLAINTEXT://192.168.75.173:9092	# change the listen address to this host's address
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/kafka-logs		# change the data directory
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.75.173:2181	# point at the zookeeper address
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
4.3 Start Kafka
cat /etc/systemd/system/kafka.service
[Unit]
Description=kafka.service
After=network.target remote-fs.target zookeeper.service

[Service]
Type=forking
Environment=JAVA_HOME=/usr/local/java/jdk1.8.0_171
ExecStart=/apps/kafka/bin/kafka-server-start.sh -daemon /apps/kafka/config/server.properties
ExecStop=/apps/kafka/bin/kafka-server-stop.sh
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kafka
systemctl status kafka
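A quick way to confirm the broker is serving requests is to create and list a throwaway topic (the name smoke-test is arbitrary):

cd /apps/kafka/bin
./kafka-topics.sh --create --bootstrap-server 192.168.75.173:9092 --replication-factor 1 --partitions 1 --topic smoke-test
./kafka-topics.sh --list --bootstrap-server 192.168.75.173:9092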

Use the Kafka GUI client Offset Explorer to inspect and verify the kafka cluster state; for usage of this tool see: https://blog.csdn.net/qq_39416311/article/details/123316904

5. Logstash installation and configuration
5.1 Install Logstash
IP: 192.168.75.172
rpm -ivh logstash-7.6.2.rpm
5.2 Start Logstash
systemctl start logstash
systemctl enable logstash
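Before wiring logstash into the pipeline, you can sanity-check the install with a stdin-to-stdout pipeline (type a line and it is echoed back as a structured event; Ctrl-C to exit):

/usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout {} }'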
6. Configuring log collection
6.1 DaemonSet-based log collection


6.2 Build the logstash image

Build the logstash image on a k8s node or any other machine with docker installed. The Dockerfile is as follows:

[root@master Dockerfile]# pwd
/root/Dockerfile
[root@master Dockerfile]# cat Dockerfile 
FROM logstash:7.6.2
LABEL author="admin@qq.com"
WORKDIR /usr/share/logstash
COPY logstash.yml /usr/share/logstash/config/logstash.yml
COPY logstash.conf /usr/share/logstash/pipeline/logstash.conf
USER root
RUN usermod -a -G root logstash

logstash.yml is as follows:

[root@master Dockerfile]# cat logstash.yml 
http.host: "0.0.0.0"

logstash.conf is as follows:

[root@master Dockerfile]# cat logstash.conf 
input {
  file {
    path => "/var/lib/docker/containers/*/*-json.log" #docker
    #path => "/var/log/pods/*/*/*.log"	#使用containerd时,Pod的log的存放路径
    start_position => "beginning"
    type => "applog"	#日志类型,自定义
  }

  file {
    path => "/var/log/*.log"	#操作系统日志路径
    start_position => "beginning"
    type => "syslog"
  }
}

output {
  if [type] == "applog" {	#指定将applog类型的日志发送到kafka的哪个topic
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384  # producer batch size in bytes for the kafka output
      codec => "${CODEC}"	# log codec/format
   } }

  if [type] == "syslog" {	##指定将syslog类型的日志发送到kafka的哪个topic
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}" #系统日志不是json格式
  }}
}

Build the image:

[root@master Dockerfile]# ls
Dockerfile  logstash.conf  logstash.yml
docker build -t logstash-daemonset:7.6.2 .

Save the image and copy it to the other nodes, or push it to a registry such as Harbor:

docker save -o logstash-daemonset.tar logstash-daemonset:7.6.2
docker load -i logstash-daemonset.tar
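If you use Harbor, the usual tag-and-push flow is sketched below (harbor.example.com/log is a hypothetical registry address and project; substitute your own):

docker tag logstash-daemonset:7.6.2 harbor.example.com/log/logstash-daemonset:7.6.2    # hypothetical registry/project
docker login harbor.example.com
docker push harbor.example.com/log/logstash-daemonset:7.6.2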
6.3 Deploy the logstash DaemonSet (the log namespace must already exist); the YAML is as follows
[root@master ~]# cat logstash.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-daemonset
  namespace: log
spec:
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

      containers:
      - name: logstash
        image: logstash-daemonset:7.6.2
        imagePullPolicy: IfNotPresent
        env:
        - name: KAFKA_SERVER
          value: "192.168.75.173:9092"
        - name: TOPIC_ID
          value: "logstash-log-test1"
        - name: CODEC
          value: "json"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: False
        - name: varlogpods
         # mountPath: /var/log/pods
          mountPath: /var/lib/docker/containers
          readOnly: False
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlogpods
        hostPath:
          #path: /var/log/pods
          path: /var/lib/docker/containers
[root@master ~]# kubectl apply -f logstash.yaml 
daemonset.apps/logstash-daemonset created
[root@master ~]# kubectl get pods -n log
NAME                       READY   STATUS    RESTARTS   AGE
logstash-daemonset-8j622   1/1     Running   0          7s
logstash-daemonset-dxnwk   1/1     Running   0          7s

Check in kafka: the logs have already been sent to the topic.
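You can also confirm delivery from the command line by consuming the topic directly:

/apps/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.75.173:9092 --topic logstash-log-test1 --from-beginning    # should stream JSON log events from the daemonset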

6.4 Modify the logstash configuration

This is the logstash deployed earlier on the host (192.168.75.172), not the Pod. Configure it to read logs from kafka and forward them to ES:

[root@logstash conf.d]# cat logstash-daemonset-kafka-to-es.conf 
input {
  kafka {
    bootstrap_servers => "192.168.75.173:9092"
    topics => ["logstash-log-test1"]
    codec => "json"
  }
}

output {
  if [type] == "applog" {
    elasticsearch {
      hosts => ["192.168.75.170:9200"]
      index => "applog-%{+YYYY.MM.dd}"
    }}

  if [type] == "syslog" {
    elasticsearch {
      hosts => ["192.168.75.170:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }}
}
Check the syntax:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-daemonset-kafka-to-es.conf -t
Run in the foreground to test, or restart the service:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-daemonset-kafka-to-es.conf
systemctl restart logstash
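Once events flow, the daily applog-* and syslog-* indices should appear in ES:

curl 'http://192.168.75.170:9200/_cat/indices?v'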
6.5 Display the logs in Kibana

Create an index pattern for applog and for syslog:
Create a pod and verify that its logs are shipped:

[root@master ~]# cat nginx1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  namespace: log
  labels:
    app: nginx2
spec:
  containers:
  - name: nginx2
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
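Apply it and send one request to the pod so nginx writes an access-log line to stdout, which lands in the container's json log file and is picked up by the daemonset:

kubectl apply -f nginx1.yaml
POD_IP=$(kubectl get pod nginx2 -n log -o jsonpath='{.status.podIP}')
curl http://$POD_IP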
7. Sidecar-based log collection

With this approach, logs on the node itself still need to be collected by an additional service deployed on the nodes.

7.1 Build the sidecar image
[root@master Dockerfile-sidecar]# cat Dockerfile 
FROM logstash:7.6.2
LABEL author="admin@163.com"
WORKDIR /usr/share/logstash
COPY logstash.yml /usr/share/logstash/config/logstash.yml
COPY logstash.conf /usr/share/logstash/pipeline/logstash.conf
USER root
RUN usermod -a -G root logstash

logstash.yml is as follows:

[root@master Dockerfile-sidecar]# cat logstash.yml 
http.host: "0.0.0.0"

logstash.conf is as follows:

[root@master Dockerfile-sidecar]# cat logstash.conf 
input {
  file {
    path => "/var/log/applog/catalina.*.log"
    start_position => "beginning"
    type => "tomcat-app1-catalina-log"
  }

  file {
    path => "/var/log/applog/localhost_access_log.*.txt"
    start_position => "beginning"
    type => "tomcat-app1-access-log"
  }
}

output {
  if [type] == "tomcat-app1-catalina-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384  # producer batch size in bytes for the kafka output
      codec => "${CODEC}"
   } }

  if [type] == "tomcat-app1-access-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}" #系统日志不是json格式
  }}
}
7.2 Build the image
docker build -t logstash-sidecar:7.6.2 .
docker save -o logstash-sidecar.tar logstash-sidecar:7.6.2
7.3 Deploy the pod
[root@master ~]# cat sidecar.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: log
  name: tomcat-app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat-app1
  template:
    metadata:
      labels:
        app: tomcat-app1
    spec:
      containers:
      - name: logstash-sidecar
        image: logstash-sidecar:7.6.2
        imagePullPolicy: IfNotPresent
        env:
        - name: KAFKA_SERVER
          value: "192.168.75.173:9092"
        - name: TOPIC_ID
          value: "tomcat-log1"
        - name: CODEC
          value: "json"
        volumeMounts:
        - name: applog
          mountPath: /var/log/applog
      - name: tomcat
        image: daocloud.io/library/tomcat:8.0.45
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: applog
          mountPath: /usr/local/tomcat/logs/
      volumes:
      - name: applog
        emptyDir: {}
[root@master ~]# kubectl get pods -n log
NAME                          READY   STATUS    RESTARTS   AGE
tomcat-app1-55759b549-d6tgv   2/2     Running   0          117s
tomcat-app1-55759b549-r8dfd   2/2     Running   0          117s
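Because both containers mount the same emptyDir, tomcat's log files should be visible from inside the sidecar (pod name taken from the listing above):

kubectl exec -n log tomcat-app1-55759b549-d6tgv -c logstash-sidecar -- ls /var/log/applog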

Check in kafka: the sidecar containers have shipped the logs to the topic.

7.4 Configure logstash
cat logstash-sidecar-kafka-to-es.conf
input {
  kafka {	# read the sidecar logs from the kafka topic
    bootstrap_servers => "192.168.75.173:9092"
    topics => ["tomcat-app1"]
    codec => "json"
  }
}


output {
  if [type] == "tomcat-app1-access-log" {	#tomcat访问日志存储到es的tomcat-app1-accesslog-%{+YYYY.MM.dd}索引中
    elasticsearch {
      hosts => ["192.168.75.170:9200"]
      index => "tomcat-app1-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "tomcat-app1-catalina-log" {		#tomcat启动日志存储到es的tomcat-app1-catalinalog-%{+YYYY.MM.dd}索引中
    elasticsearch {
      hosts => ["192.168.75.170:9200"]
      index => "tomcat-app1-catalinalog-%{+YYYY.MM.dd}"
    }
  }
}
systemctl restart logstash


8. Log collection with a collector process built into the application container


8.1 Build the image

The application image needs to run two processes: tomcat serving web requests and filebeat collecting the logs. The Dockerfile is as follows:

[root@node1 Dockerfile]# cat Dockerfile 
FROM centos:centos7.7.1908
WORKDIR /tmp
COPY jdk-8u171-linux-x64.tar.gz /tmp
RUN tar zxf /tmp/jdk-8u171-linux-x64.tar.gz -C /usr/local/ && rm -rf /tmp/jdk-8u171-linux-x64.tar.gz
RUN ln -s /usr/local/jdk1.8.0_171 /usr/local/jdk
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME=${JAVA_HOME}/jre
ENV CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
ENV PATH=${JAVA_HOME}/bin:$PATH
COPY apache-tomcat-8.5.87.tar.gz /tmp
RUN tar zxf apache-tomcat-8.5.87.tar.gz -C /usr/local && rm -rf /tmp/apache-tomcat-8.5.87.tar.gz
RUN mv /usr/local/apache-tomcat-8.5.87 /usr/local/tomcat
COPY filebeat-7.6.2-x86_64.rpm /tmp/
RUN cd /tmp/ && rpm -ivh filebeat-7.6.2-x86_64.rpm && rm -f filebeat-7.6.2-x86_64.rpm
COPY filebeat.yml /etc/filebeat/
COPY run.sh /usr/local/bin/
RUN chmod 755 /usr/local/bin/run.sh
EXPOSE 8443 8080
CMD ["/usr/local/bin/run.sh"]

filebeat.yml is as follows:

[root@node1 Dockerfile]# cat filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/catalina.*.log
  fields:
    type: filebeat-tomcat-catalina
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/localhost_access_log.*.txt
  fields:
    type: filebeat-tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

output.kafka:
  hosts: ["192.168.75.173:9092"]
  required_acks: 1
  topic: "filebeat-tomcat-app1"
  compression: gzip
  max_message_bytes: 1000000

run.sh is as follows:

[root@node1 Dockerfile]# cat run.sh 
#!/bin/bash

# Start filebeat in the background
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat &

# Start tomcat, then tail catalina.out so the script stays in the foreground as the container's main process
/usr/local/tomcat/bin/startup.sh && tail -f /usr/local/tomcat/logs/catalina.out

Run the build:

docker build -t filebeat-log:7.6.2 .
8.2 Deploy the application container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-myapp
  namespace: log
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat-myapp
  template:
    metadata:
      labels:
        app: tomcat-myapp
    spec:
      containers:
      - name: tomcat-myapp
        image: filebeat-log:7.6.2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
[root@master ~]# kubectl get pods -n log
NAME                            READY   STATUS    RESTARTS   AGE
tomcat-myapp-78ff79cd6c-4dnsb   1/1     Running   0          3m4s
tomcat-myapp-78ff79cd6c-mpl4f   1/1     Running   0          3m4s
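As before, delivery can be confirmed by consuming the filebeat topic; events arrive as JSON with [fields][type] set to filebeat-tomcat-catalina or filebeat-tomcat-accesslog:

/apps/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.75.173:9092 --topic filebeat-tomcat-app1 --from-beginning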
8.3 Configure logstash
cat /etc/logstash/conf.d/filebeat-process-kafka-to-es.conf

input {
  kafka {
    bootstrap_servers => "192.168.75.173:9092"
    topics => ["filebeat-tomcat-app1"]
    codec => "json"
  }
}

output {
  if [fields][type] == "filebeat-tomcat-catalina" {
    elasticsearch {
      hosts => ["192.168.75.170:9200"]
      index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
    }}

  if [fields][type] == "filebeat-tomcat-accesslog" {
    elasticsearch {
      hosts => ["192.168.75.170:9200"]
      index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
    }}
}
systemctl restart logstash


9. Install elasticsearch-head
yum install git npm # npm is in the EPEL repository
git clone https://github.com/mobz/elasticsearch-head.git # requires internet access
cd elasticsearch-head # the directory created by git clone
npm install
npm run start
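elasticsearch-head serves its UI on port 9100 by default; open http://<head-host>:9100 in a browser and point it at http://192.168.75.170:9200 to connect. A quick check that the UI is being served:

curl -I http://localhost:9100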

To let head query cluster health information, CORS must be enabled in the elasticsearch configuration:

vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true # enable CORS in elasticsearch
http.cors.allow-origin: "*" # allowed origins; * allows access from any IP
10. Kafka commands
Note: with the Kafka 3.x tooling used here, kafka-topics.sh talks to the broker via --bootstrap-server; the old --zookeeper flag was removed in Kafka 3.0.
List the existing topics:
./kafka-topics.sh --list --bootstrap-server 192.168.75.173:9092
Describe a topic:
[root@kafka bin]# ./kafka-topics.sh --describe --bootstrap-server 192.168.75.173:9092 --topic logstash-log
Topic:logstash-log	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: logstash-log	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Produce messages:
[root@kafka bin]# ./kafka-console-producer.sh --bootstrap-server 192.168.75.173:9092 --topic logstash-log
Consume messages:
[root@kafka bin]# ./kafka-console-consumer.sh --bootstrap-server 192.168.75.173:9092 --topic logstash-log --from-beginning
