Building ELK from Scratch and a Simple Log-Collection Application
Plenty of blogs already cover ELK theory and architecture diagrams in detail, so refer to those for background. This post only records a simple ELK deployment and a simple log-collection application.
Pre-installation Preparation
1. Environment:
| IP (OS) | Hostname | Software |
| 10.0.0.101 (CentOS 7) | test101 | JDK, Elasticsearch, Logstash, Kibana, and Filebeat (Filebeat is used to test collecting test101's own messages log) |
| 10.0.0.102 (CentOS 7) | test102 | nginx and Filebeat (Filebeat is used to test collecting test102's nginx logs) |
2. Installation packages:
jdk-8u151-linux-x64.tar.gz
elasticsearch-6.4.2.tar.gz
kibana-6.4.2-linux-x86_64.tar.gz
logstash-6.4.2.tar.gz
ELK official download page: https://www.elastic.co/cn/downloads
Deploying the ELK Server Side
First deploy the JDK, Elasticsearch, Logstash, and Kibana on the test101 host to set up the ELK server side. Upload the four packages above to /root on test101.
1. Deploy the JDK
# tar xf jdk-8u151-linux-x64.tar.gz -C /usr/local/
# echo -e "export JAVA_HOME=/usr/local/jdk1.8.0_151\n export JRE_HOME=\${JAVA_HOME}/jre\n export CLASSPATH=.:\${JAVA_HOME}/lib:\${JRE_HOME}/lib\n export PATH=\${JAVA_HOME}/bin:\$PATH" >>/etc/profile
# source /etc/profile
# java -version    # running jps also works
Note: if you accidentally break /etc/profile, see the post "The /etc/profile file is broken and no commands work — what now?".
2. Create a dedicated elk user
The elk user is used to start Elasticsearch; later on it is also configured in the Filebeat configuration file when collecting logs.
# useradd elk; echo 12345678 | passwd elk --stdin    # create the elk user with the password 12345678
3. Deploy Elasticsearch
3.1 Extract the package:
# tar xf elasticsearch-6.4.2.tar.gz -C /usr/local/
3.2 Edit the configuration file /usr/local/elasticsearch-6.4.2/config/elasticsearch.yml so that it contains:
[root@test101 config]# egrep -v "^#|^$" /usr/local/elasticsearch-6.4.2/config/elasticsearch.yml
cluster.name: elk
node.name: node-1
path.data: /opt/elk/es_data
path.logs: /opt/elk/es_logs
network.host: 10.0.0.101
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.0.0.101:9300"]
discovery.zen.minimum_master_nodes: 1
[root@test101 config]#
3.3 Edit /etc/security/limits.conf and /etc/sysctl.conf as follows:
# echo -e "* soft nofile 65536\n* hard nofile 131072\n* soft nproc 2048\n* hard nproc 4096\n" >>/etc/security/limits.conf
# echo "vm.max_map_count=655360" >>/etc/sysctl.conf
# sysctl -p
3.4 Create the data and log directories and hand them to the elk user:
# mkdir -p /opt/elk/{es_data,es_logs}
# chown -R elk:elk /opt/elk/
# chown -R elk:elk /usr/local/elasticsearch-6.4.2/
3.5 Start Elasticsearch:
# cd /usr/local/elasticsearch-6.4.2/bin/
# su elk
$ nohup /usr/local/elasticsearch-6.4.2/bin/elasticsearch >/dev/null 2>&1 &
3.6 Check the process and ports:
[root@test101 ~]# ss -ntlup | grep -E "9200|9300"
tcp LISTEN 0 128 ::ffff:10.0.0.101:9200 :::* users:(("java",pid=6001,fd=193))
tcp LISTEN 0 128 ::ffff:10.0.0.101:9300 :::* users:(("java",pid=6001,fd=186))
[root@test101 ~]#
Note:
If Elasticsearch fails to start, check things such as the permissions on the ES directories and the amount of server memory; see "Summary: several situations where Elasticsearch fails to start, and how to fix them".
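Once both ports are listening, a quick way to confirm the node is healthy is `curl http://10.0.0.101:9200/_cluster/health`. As a hypothetical helper (not part of the original setup), here is a small Python sketch that pulls the cluster name and status out of such a response; the sample JSON is canned so the snippet runs without a live cluster:

```python
import json

def summarize_health(body: str) -> str:
    """Return 'cluster_name/status' from an Elasticsearch _cluster/health response."""
    doc = json.loads(body)
    return f"{doc['cluster_name']}/{doc['status']}"

# Canned response matching the cluster.name configured above; a real check
# would fetch http://10.0.0.101:9200/_cluster/health instead.
sample = '{"cluster_name":"elk","status":"green","number_of_nodes":1}'
print(summarize_health(sample))  # elk/green
```

A status of green or yellow means the node is usable; red means some primary shards are unassigned.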
4. Deploy Logstash
4.1 Extract the package:
# tar xf logstash-6.4.2.tar.gz -C /usr/local/
4.2 Edit the configuration file /usr/local/logstash-6.4.2/config/logstash.yml so that it contains:
[root@test101 logstash-6.4.2]# egrep -v "^#|^$" /usr/local/logstash-6.4.2/config/logstash.yml
path.data: /opt/elk/logstash_data
http.host: "10.0.0.101"
path.logs: /opt/elk/logstash_logs
path.config: /usr/local/logstash-6.4.2/conf.d    # this line is not in the file by default; just append it at the end
[root@test101 logstash-6.4.2]#
4.3 Create the conf.d directory and add the pipeline file syslog.conf:
[root@test101 conf.d]# mkdir /usr/local/logstash-6.4.2/conf.d
[root@test101 conf.d]# cat /usr/local/logstash-6.4.2/conf.d/syslog.conf
input {
  # Filebeat clients
  beats {
    port => 5044
  }
}
# filtering
#filter { }
output {
  # stdout, for debugging
  stdout {
    codec => rubydebug { }
  }
  # output to Elasticsearch
  elasticsearch {
    hosts => ["http://10.0.0.101:9200"]
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
[root@test101 conf.d]#
4.4 Create the data and log directories and hand them to the elk user:
# mkdir -p /opt/elk/{logstash_data,logstash_logs}
# chown -R elk:elk /opt/elk/
# chown -R elk:elk /usr/local/logstash-6.4.2/
4.5 Test the configuration:
[root@test101 conf.d]# /usr/local/logstash-6.4.2/bin/logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf --config.test_and_exit    # this step may take a while to produce output
Sending Logstash logs to /opt/elk/logstash_logs which is now configured via log4j2.properties
[2018-11-01T09:49:14,299][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/elk/logstash_data/queue"}
[2018-11-01T09:49:14,352][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/elk/logstash_data/dead_letter_queue"}
[2018-11-01T09:49:16,547][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-11-01T09:49:26,510][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@test101 conf.d]#
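The `index => "%{type}-%{+YYYY.MM.dd}"` setting in syslog.conf routes each event to an index named after its `type` field plus the event's date. Logstash does this internally; purely as an illustration of what index names to expect, a hypothetical Python sketch of the naming rule:

```python
from datetime import date

def index_name(event_type: str, day: date) -> str:
    """Mimic Logstash's index => "%{type}-%{+YYYY.MM.dd}" naming rule."""
    return f"{event_type}-{day.strftime('%Y.%m.%d')}"

# An event of type "doc" received on 2018-11-01 would land in:
print(index_name("doc", date(2018, 11, 1)))  # doc-2018.11.01
```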
# nohup /usr/local/logstash-6.4.2/bin/logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf >/dev/null 2>&1 &    # start in the background
4.7 Check the process and ports:
[root@test101 local]# ps -ef | grep logstash
root 6325 926 17 10:08 pts/0 00:01:55 /usr/local/jdk1.8.0_151/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC ... org.logstash.Logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf    (long JVM classpath trimmed)
root 6430 926 0 10:19 pts/0 00:00:00 grep --color=auto logstash
[root@test101 local]# netstat -tlunp | grep 6325
tcp6 0 0 :::5044 :::* LISTEN 6325/java
tcp6 0 0 10.0.0.101:9600 :::* LISTEN 6325/java
[root@test101 local]#
5. Deploy Kibana
5.1 Extract the package:
# tar xf kibana-6.4.2-linux-x86_64.tar.gz -C /usr/local/
5.2 Edit the configuration file /usr/local/kibana-6.4.2-linux-x86_64/config/kibana.yml so that it contains:
[root@test101 ~]# egrep -v "^#|^$" /usr/local/kibana-6.4.2-linux-x86_64/config/kibana.yml
server.port: 5601
server.host: "10.0.0.101"
elasticsearch.url: "http://10.0.0.101:9200"
kibana.index: ".kibana"
[root@test101 ~]#
5.3 Change the owner of the Kibana directory to elk:
# chown -R elk:elk /usr/local/kibana-6.4.2-linux-x86_64/
5.4 Start Kibana:
# nohup /usr/local/kibana-6.4.2-linux-x86_64/bin/kibana >/dev/null 2>&1 &
5.5 Check the process and port:
[root@test101 local]# ps -ef | grep kibana
root 6381 926 28 10:16 pts/0 00:00:53 /usr/local/kibana-6.4.2-linux-x86_64/bin/../node/bin/node --no-warnings /usr/local/kibana-6.4.2-linux-x86_64/bin/../src/cli
root 6432 926 0 10:19 pts/0 00:00:00 grep --color=auto kibana
[root@test101 local]# netstat -tlunp | grep 6381
tcp 0 0 10.0.0.101:5601 0.0.0.0:* LISTEN 6381/node
[root@test101 local]#
5.6 Visit the Kibana UI at http://10.0.0.101:5601:
At this point the whole ELK server side is set up.
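Before opening the browser you can also poll Kibana's status API (`curl http://10.0.0.101:5601/api/status`). A hypothetical Python sketch of extracting the overall state from that response; the JSON below is a trimmed-down sample, not real Kibana output:

```python
import json

def kibana_overall_state(body: str) -> str:
    """Extract status.overall.state from a Kibana /api/status response."""
    return json.loads(body)["status"]["overall"]["state"]

# Trimmed-down sample of the /api/status payload.
sample = '{"status": {"overall": {"state": "green"}}}'
print(kibana_overall_state(sample))  # green
```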
Collecting Logs with ELK
Once the server side is deployed, the next step is configuring log collection, which is where Filebeat comes in.
Application 1: collect the messages and secure logs of the ELK host itself (test101)
1. On the Kibana home page, click "Add log data":
2. Choose "System logs":
3. Choose RPM; the page then lists the setup steps (one step has a small gotcha, so you can follow the steps below instead):
3.1 Install the plugin for ES on test101:
3.2 Download and install Filebeat on test101:
# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-x86_64.rpm
# rpm -vi filebeat-6.4.2-x86_64.rpm
3.3 Configure Filebeat on test101 by changing the following parts of /etc/filebeat/filebeat.yml:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true    # note: the default is false; the Kibana instructions do not mention changing it, but without true no log content shows up in Kibana
  paths:    # the logs to collect; here, the messages and secure logs
    - /var/log/messages*
    - /var/log/secure*
#============================== Kibana =====================================
setup.kibana:
  host: "10.0.0.101:5601"
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["10.0.0.101:9200"]
  username: "elk"
  password: "12345678"
3.4 On test101, run the following command to update /etc/filebeat/modules.d/system.yml:
# filebeat modules enable system
3.5 Start Filebeat on test101:
# filebeat setup
# service filebeat start
3.6 Then go back to Kibana's Discover view and search for the keywords messages and secure; the matching logs show up:
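You can also verify from the command line that Filebeat's index was created, e.g. `curl "http://10.0.0.101:9200/_cat/indices?v"`. A hypothetical Python sketch that picks the filebeat indices out of that tabular output; the sample text below is made up to match the versions used in this post:

```python
def filebeat_indices(cat_output: str) -> list:
    """Return filebeat-* index names from `GET _cat/indices?v` output."""
    names = []
    for line in cat_output.strip().splitlines()[1:]:  # skip the header row
        cols = line.split()
        names.append(cols[2])  # third column is the index name
    return [n for n in names if n.startswith("filebeat-")]

# Made-up sample in the _cat/indices?v layout.
sample = """health status index                     uuid pri rep
green  open   .kibana                   aaaa   1   0
yellow open   filebeat-6.4.2-2018.11.01 bbbb   3   1
"""
print(filebeat_indices(sample))  # ['filebeat-6.4.2-2018.11.01']
```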
Application 2: collect the nginx logs of the 10.0.0.102 server (test102)
In application 1 we collected the logs of the ELK server itself; now let's collect logs from test102.
1. Install nginx on test102:
# yum -y install nginx
2. As in application 1, click "Add log data" on the Kibana home page, then choose "Nginx logs" to find the installation steps:
3. Choose RPM; the page lists the setup steps (you can follow the steps below):
3.1 Install the plugin for ES on test101:
======= All of the following steps are done on 10.0.0.102 (test102) =======
3.2 Download and install Filebeat on test102 (the same curl and rpm commands as in application 1):
3.3 Configure Filebeat on test102 by changing the following parts of /etc/filebeat/filebeat.yml:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true    # note: the default is false; the Kibana instructions do not mention changing it, but without true no log content shows up in Kibana
  paths:    # collect all log files under /var/log/nginx/, including access.log and error.log, hence the *
    - /var/log/nginx/*
#============================== Kibana =====================================
setup.kibana:
  host: "10.0.0.101:5601"
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["10.0.0.101:9200"]
  username: "elk"
  password: "12345678"
3.4 On test102, run the following command to update /etc/filebeat/modules.d/nginx.yml:
# filebeat modules enable nginx
After running it, the file contains the following:
[root@test102 ~]# cat /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  # Error logs
  error:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
[root@test102 ~]#
3.5 Start Filebeat on test102:
# filebeat setup
# service filebeat start
3.6 Then go back to Kibana's Discover view, and the nginx logs show up:
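The entries Filebeat ships here are nginx's combined-format access log lines. Purely as an illustration of what the nginx module extracts from each line (the module itself parses via an Elasticsearch ingest pipeline; this sketch and its sample line are hypothetical):

```python
import re

# nginx combined format: ip - user [time] "request" status bytes "referer" "agent"
LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_access_line(line: str) -> dict:
    """Parse the leading fields of an nginx combined-format access log line."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else {}

# Made-up sample line.
sample = '10.0.0.1 - - [01/Nov/2018:10:30:00 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0"'
fields = parse_access_line(sample)
print(fields["status"], fields["request"])  # 200 GET / HTTP/1.1
```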
Note:
Some articles also install the elasticsearch-head plugin; this post does not.
從0開始搭建ELK及采集日志的簡單應用
標簽:tar???ali???enabled???set???stdin???參考???===???權限???ica???
Original post: http://blog.51cto.com/10950710/2311618
Via: http://www.mamicode.com/info-detail-2503668.html