Installing ELK 8.x + Filebeat on CentOS 7 with yum



Environment and versions

OS: CentOS 7.9

elasticsearch-8.5.3
kibana-8.5.3
logstash-8.5.3
filebeat-8.5.3

I. ELK download links

  1. Download page: the Elastic official website.

  2. I downloaded the rpm format packages.

  3. From an SSH session, download the four rpm packages with wget:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.5.3-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.5.3-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.5.3-x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.5.3-x86_64.rpm
  4. The downloaded packages can simply be kept under /root.

II. Elasticsearch installation and configuration

1. Installing Elasticsearch

Install the local rpm package with yum localinstall, for example:

yum localinstall elasticsearch-8.5.3-x86_64.rpm

When the installation finishes, a password for the built-in elastic user is generated and printed. Note it down; you will need it to log in to Kibana.
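
If the password was not saved, it can be reset later with the password tool that ships with Elasticsearch 8.x; run it on the Elasticsearch host:

/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic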

2. Configuring elasticsearch.yml

Edit the elasticsearch.yml file:

vi /etc/elasticsearch/elasticsearch.yml

Change or add (and uncomment) the following settings in the config file:

cluster.name: wxxya-es
http.port: 9200
network.host: 0.0.0.0
http.host: 0.0.0.0

Elasticsearch 8 enables TLS and security features by default; the xpack section is generated automatically during installation and does not need to be touched.

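For reference, the auto-generated security section at the end of elasticsearch.yml looks roughly like the sketch below. The exact keystore paths and host name vary per install, so treat this as an illustration only:

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["your-hostname"]
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------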

3. Configuring jvm.options to set how much memory Elasticsearch may use

vi /etc/elasticsearch/jvm.options

Here I set both the minimum and maximum heap size to 3 GB:

-Xms3g
-Xmx3g
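
Instead of editing jvm.options directly, package installs also read custom option files from /etc/elasticsearch/jvm.options.d/, which survives package upgrades better. A minimal sketch (the file name heap.options is just an example):

vi /etc/elasticsearch/jvm.options.d/heap.options

-Xms3g
-Xmx3g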

4. Running Elasticsearch

Start/stop:

sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service

Enable start on boot:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

View the Elasticsearch startup logs (either command works):

journalctl --unit elasticsearch
systemctl status elasticsearch.service

Check whether the port is listening. Note that in 8.x a plain HTTP curl against 9200 returns "curl: (52) Empty reply from server"; the request must be made over HTTPS.

netstat -ntlp
curl http://localhost:9200
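
To get an actual response from 8.x, use HTTPS, skip verification of the self-signed certificate with -k, and authenticate as elastic (you will be prompted for the password recorded during installation), for example:

curl -k -u elastic https://localhost:9200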



III. Kibana installation and configuration

1. Installing Kibana

Install the local rpm package with yum localinstall, for example:

yum localinstall kibana-8.5.3-x86_64.rpm

2. Configuring kibana.yml

vi /etc/kibana/kibana.yml

Switch the Kibana UI language to Chinese and set elasticsearch.hosts to your own server's IP; leave everything else unchanged:

i18n.locale: "zh-CN"
elasticsearch.hosts: ['https://172.24.67.40:9200']


3. Running Kibana (the commands are the same as for Elasticsearch, just change the service name)

Start/stop:

sudo systemctl start kibana.service
sudo systemctl stop kibana.service

Check whether port 5601 is listening:

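For example, using the same kind of check as for Elasticsearch:

netstat -ntlp | grep 5601
systemctl status kibana.service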

4. Open http://ip:5601 in a browser and authenticate with the elastic user's password

The first time you open this URL you are asked for an enrollment token. The token is the one included in the information printed the first time Elasticsearch started. Note that it is only valid for 30 minutes; once it expires, the only option is to generate a new one on the Elasticsearch host.

To regenerate the token, run the following on the Elasticsearch host:

 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url "https://127.0.0.1:9200"

After entering the token you will see a verification-code prompt; the code can be taken from the Kibana logs.
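
With the rpm install Kibana runs as a systemd service, so its log output (including the printed verification code) can be followed with, for example:

journalctl -u kibana.service -f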

Or generate the verification code directly:

/usr/share/kibana/bin/kibana-verification-code

Finally, log in with the username elastic and the password recorded when Elasticsearch was installed.

IV. Logstash installation and configuration

1. Installing Logstash

Install the local rpm package with yum localinstall, for example:

yum localinstall logstash-8.5.3-x86_64.rpm

2. Configuring logstash.yml

vi /etc/logstash/logstash.yml

Add the following to logstash.yml:

http.host: "0.0.0.0"
http.port: 9600-9700

Also change the startup.options file so Logstash runs with root privileges:

LS_USER=root
LS_GROUP=root
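
Note that startup.options is only read when the service definition is generated, so after changing it you may need to re-run the installer script that ships with the rpm and then restart the service:

/usr/share/logstash/bin/system-install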

Create a logstash.conf pipeline file. Mine handles nginx logs and pm2 logs; the nginx log format also needs to be adjusted (see further below).

vi /etc/logstash/conf.d/logstash.conf

Contents of logstash.conf:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {

  beats {
    port => 4567
  }

  file {
    path => "/var/log/nginx/access.log"
    type => "nginx-accesslog"
    stat_interval => "1"
    start_position => "beginning"
  }

  file {
    path => "/var/log/nginx/error.log"
    type => "nginx-errorlog"
    stat_interval => "1"
    start_position => "beginning"
  }

}

filter {
  if [type] == "nginx-accesslog" {
  grok {
    match => { "message" => ["%{IPORHOST:clientip} - %{DATA:username} \[%{HTTPDATE:request-time}\] \"%{WORD:request-method} %{DATA:request-uri} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:useragent}\""] }
    remove_field => "message"
    add_field => { "project" => "magedu"}
  }
  mutate {
    convert => [ "[response_code]", "integer"]
    }
  }
  if [type] == "nginx-errorlog" {
    grok {
      match => { "message" => ["(?<timestamp>%{YEAR}[./]%{MONTHNUM}[./]%{MONTHDAY} %{TIME}) \[%{LOGLEVEL:loglevel}\] %{POSINT:pid}#%{NUMBER:threadid}\: \*%{NUMBER:connectionid} %{GREEDYDATA:message}, client: %{IPV4:clientip}, server: %{GREEDYDATA:server}, request: \"(?:%{WORD:request-method} %{NOTSPACE:request-uri}(?: HTTP/%{NUMBER:httpversion}))\", host: %{GREEDYDATA:domainname}"]}
      remove_field => "message"
    }
  }
}

output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["https://172.24.67.40:9200"]
      index => "nginx-accesslog-%{+yyyy.MM.dd}"
      ssl => true
      ssl_certificate_verification => false
      user => "elastic"
      password => "tsy123456"
    }
  }
  if [type] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["https://172.24.67.40:9200"]
      index => "nginx-errorlog-%{+yyyy.MM.dd}"
      ssl => true
      ssl_certificate_verification => false
      user => "elastic"
      password => "tsy123456"
    }
  }
  if [fields][service] == "pm2" {
    elasticsearch {
      hosts => ["https://172.24.67.40:9200"]
      index => "pm2-log-%{+yyyy.MM.dd}"
      ssl => true
      ssl_certificate_verification => false
      user => "elastic"
      password => "tsy123456"
    }
  }
}
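
Before starting the service it is worth checking the pipeline syntax; Logstash can validate a config file and exit, for example:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit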

I have written about installing and configuring nginx before: see my nginx installation tutorial.

vi /etc/nginx/nginx.conf

Changes to the nginx config, defining a JSON access log format:

user root;
worker_processes auto;

pid /run/nginx.pid;


events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
					  
    log_format json '{ "@timestamp": "$time_iso8601", '
                    '"remote_addr": "$remote_addr", '                    # client IP address
                    '"remote_user": "$remote_user", '                    # client user name
                    '"body_bytes_sent": "$body_bytes_sent", '            # response body bytes sent to the client
                    '"request_time": "$request_time", '
                    '"status": "$status", '
                    '"host": "$host", '
                    '"request": "$request", '                            # request line (URL and HTTP version)
                    '"request_method": "$request_method", '
                    '"uri": "$uri", '
                    '"http_referrer": "$http_referer", '                 # referring page
                    '"http_x_forwarded_for": "$http_x_forwarded_for", '  # real client IP behind proxies
                    '"http_user_agent": "$http_user_agent" '             # client browser (user agent)
                '}';

    access_log  /var/log/nginx/access.log  json ;
	error_log /var/log/nginx/error.log;


    #autoindex on;              # enable nginx directory listing
    #autoindex_exact_size off;  # show file sizes in human-readable units starting from KB
    #autoindex_localtime on;    # show file modification times in server local time
    #charset utf-8,gbk;         # add this to avoid garbled Chinese directory names

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 500m;
    client_body_buffer_size 512k;

    # proxy-related settings
    proxy_connect_timeout 5;
    proxy_read_timeout 60;
    proxy_send_timeout 5;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
	 

    # enable gzip compression to speed up client access
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain application/css application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
    gzip_vary on;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

	
	
	
    include /etc/nginx/conf.d/*.conf;
}
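
After changing the log format, validate the nginx configuration and reload it so the JSON access log takes effect:

nginx -t
systemctl reload nginx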

3. Running Logstash (the commands are the same as for Elasticsearch, just change the service name)

Start/stop:

sudo systemctl start logstash.service
sudo systemctl stop logstash.service
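
To enable start on boot and watch the pipeline come up (with the rpm install Logstash normally logs to /var/log/logstash/; the file name below is the usual default):

sudo systemctl enable logstash.service
tail -f /var/log/logstash/logstash-plain.log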

4. In Kibana you can now see the indices that were created. Create a data view, associate it with an index, and the data will be displayed.

V. Filebeat installation and configuration

1. Installing Filebeat

Install the local rpm package with yum localinstall, for example:

yum localinstall filebeat-8.5.3-x86_64.rpm

2. Configuring filebeat.yml

vi /etc/filebeat/filebeat.yml

The main changes in filebeat.yml are the paths to read and the Logstash output. In my case Filebeat reads the pm2 logs. The full configuration file is below:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /root/.pm2/logs/*.log
  input_type: log
  fields.document_type: pm2
  fields.service: pm2
  tags: ["pm2"]

    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "localhost:5601"
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:4567"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
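
Before starting the service, the configuration and the connection to Logstash can be validated with Filebeat's built-in test subcommands:

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml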


3. Running Filebeat (the commands are the same as for Elasticsearch, just change the service name)

Start/stop:

sudo systemctl start filebeat.service
sudo systemctl stop filebeat.service

4. Check the log files picked up by Filebeat and the indices written through Logstash.

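The indices can also be confirmed from the command line on the Elasticsearch host (adjust the host and credentials to your own setup):

curl -k -u elastic 'https://localhost:9200/_cat/indices?v'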

Summary

That is all for today: a detailed walkthrough of installing the ELK 8 stack plus Filebeat. If you run into problems, please leave a comment.
