Found: a super-detailed ELK build guide
ELK Real-Time Log Analysis Platform Deployment -- A Complete Walkthrough

In day-to-day operations work, handling system and business logs is especially important. Here I share my deployment notes for ELK (+Redis), an open-source real-time log analysis platform (based solely on my own hands-on steps; corrections welcome if anything is stated wrongly).
I. Concepts
Logs mainly comprise system logs, application logs, and security logs. Operations and development staff use them to learn about a server's software and hardware and to find configuration errors and their causes. Regular log analysis also reveals the server's load, performance, and security posture, so corrective action can be taken in time.
Logs are usually scattered across different machines. If you manage tens or hundreds of servers and still read logs by logging in to each machine in turn, it is tedious and inefficient. The pressing fix is centralized log management, e.g. open-source syslog, collecting and aggregating the logs from all servers.
Once logs are centralized, counting and searching them becomes the next headache. Linux commands such as grep, awk, and wc handle basic search and statistics, but for more demanding querying, sorting, and aggregation, and for a large fleet of machines, that approach quickly falls short.
The open-source real-time log analysis platform ELK solves all of the above. ELK consists of three open-source tools: ElasticSearch, Logstash, and Kibana:
1) ElasticSearch is an open-source distributed search server based on Lucene. Its features: distributed, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful-style interface, multiple data sources, and automatic search load balancing. It provides a distributed, multi-tenant-capable full-text search engine over a RESTful web interface. Elasticsearch is developed in Java, released as open source under the Apache license, and is the second most popular enterprise search engine. Designed for the cloud, it achieves real-time search and is stable, reliable, fast, and easy to install and use.
In elasticsearch, all nodes' data is equal.
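As a quick illustration of that RESTful interface (a minimal example; run it once the node from part II below is up, adjusting host and port to your environment):
[root@elk-node1 ~]# curl 'http://127.0.0.1:9200/?pretty'        # returns the node name, cluster_name and version as JSON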
2) Logstash is a fully open-source tool that collects, filters, and analyzes your logs and stores them for later use (e.g. searching). Older releases also shipped a simple web interface for searching and displaying all the logs.
3) Kibana is a browser-based front end for Elasticsearch, likewise open source and free. Kibana provides a friendly web UI for log analysis on top of what Logstash and ElasticSearch deliver, helping you aggregate, analyze, and search important log data.
ELK working-principle diagram (image in the original post):
As the diagram shows: Logstash collects the logs produced by the AppServers and stores them in an ElasticSearch cluster, while Kibana queries the ES cluster to generate charts, which are returned to the Browser.
II. ELK Environment Deployment
(0) Base environment
OS: CentOS 7.1
Firewall: disabled
SELinux: disabled
Machines: two
elk-node1: 192.168.1.160      # master
elk-node2: 192.168.1.161      # slave
Note:
master-slave mode:
After the master receives logs, it shards part of the data onto the slave (a random portion); at the same time, master and slave each make replicas and place them on the other machine, so no data is lost.
If the master goes down, pointing the elasticsearch host in the clients' log-collection configs at the slave keeps ELK collection and the web UI working.
=========================================================================================
Since elk-node1 and elk-node2 are VMs without public IPs, access goes through port forwarding on the host machine.
Two forwarding setups are possible (pick either):
forward host ports 19200 / 19201 to port 9200 on elk-node1 / elk-node2 respectively
forward host port 15601 to port 5601 on elk-node1
Host machine: 112.110.115.10 (internal IP 192.168.1.7)  (an arbitrary IP is recorded here to avoid exposing the real public one)
a) Proxy forwarding via the host's haproxy service; the proxy config on the host is as follows:
[root@kvm-server conf]# pwd
/usr/local/haproxy/conf
[root@kvm-server conf]# cat haproxy.cfg
..........
..........
listen node1-9200 0.0.0.0:19200
    mode tcp
    option tcplog
    balance roundrobin
    server 192.168.1.160 192.168.1.160:9200 weight 1 check inter 1s rise 2 fall 2
listen node2-9200 0.0.0.0:19201
    mode tcp
    option tcplog
    balance roundrobin
    server 192.168.1.161 192.168.1.161:9200 weight 1 check inter 1s rise 2 fall 2
listen node1-5601 0.0.0.0:15601
    mode tcp
    option tcplog
    balance roundrobin
    server 192.168.1.160 192.168.1.160:5601 weight 1 check inter 1s rise 2 fall 2
Restart the haproxy service:
[root@kvm-server conf]# /etc/init.d/haproxy restart
Configure the host firewall:
[root@kvm-server conf]# cat /etc/sysconfig/iptables
.........
-A INPUT -p tcp -m state --state NEW -m tcp --dport 19200 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 19201 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 15601 -j ACCEPT
[root@kvm-server conf]# /etc/init.d/iptables restart
b) Via NAT port forwarding on the host
[root@kvm-server conf]# iptables -t nat -A PREROUTING -p tcp -m tcp --dport 19200 -j DNAT --to-destination 192.168.1.160:9200
[root@kvm-server conf]# iptables -t nat -A POSTROUTING -d 192.168.1.160/32 -p tcp -m tcp --sport 9200 -j SNAT --to-source 192.168.1.7
[root@kvm-server conf]# iptables -t filter -A INPUT -p tcp -m state --state NEW -m tcp --dport 19200 -j ACCEPT
[root@kvm-server conf]# iptables -t nat -A PREROUTING -p tcp -m tcp --dport 19201 -j DNAT --to-destination 192.168.1.161:9200
[root@kvm-server conf]# iptables -t nat -A POSTROUTING -d 192.168.1.161/32 -p tcp -m tcp --sport 9200 -j SNAT --to-source 192.168.1.7
[root@kvm-server conf]# iptables -t filter -A INPUT -p tcp -m state --state NEW -m tcp --dport 19201 -j ACCEPT
[root@kvm-server conf]# iptables -t nat -A PREROUTING -p tcp -m tcp --dport 15601 -j DNAT --to-destination 192.168.1.160:5601
[root@kvm-server conf]# iptables -t nat -A POSTROUTING -d 192.168.1.160/32 -p tcp -m tcp --sport 5601 -j SNAT --to-source 192.168.1.7
[root@kvm-server conf]# iptables -t filter -A INPUT -p tcp -m state --state NEW -m tcp --dport 15601 -j ACCEPT
[root@kvm-server conf]# service iptables save
[root@kvm-server conf]# service iptables restart
A reminder:
Once the NAT forwarding is set up, the two lines below must be commented out in /etc/sysconfig/iptables, or NAT forwarding will misbehave! Normally, after the NAT rules above are added and the firewall is saved and restarted, these two lines are removed from /etc/sysconfig/iptables automatically.
[root@kvm-server conf]# vim /etc/sysconfig/iptables
..........
#-A INPUT -j REJECT --reject-with icmp-host-prohibited?
#-A FORWARD -j REJECT --reject-with icmp-host-prohibited
[root@linux-node1 ~]# service iptables restart
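One prerequisite that is easy to miss (an extra check, not part of the original notes): DNAT to the guests only works if IP forwarding is enabled in the host kernel:
[root@kvm-server conf]# sysctl -w net.ipv4.ip_forward=1        # persist it via /etc/sysctl.conf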
=========================================================================================
(1) Elasticsearch installation and configuration
Base installation (do this on elk-node1 and elk-node2 alike)
1) Download and install the GPG key
[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
2) Add the yum repository
[root@elk-node1 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
3) Install elasticsearch
[root@elk-node1 ~]# yum install -y elasticsearch
4) Install related test software
# Install the EPEL repo first (epel-release-latest-7.noarch.rpm), otherwise yum will complain: No Package.....
[root@elk-node1 ~]# wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@elk-node1 ~]# rpm -ivh epel-release-latest-7.noarch.rpm
# Install Redis
[root@elk-node1 ~]# yum install -y redis
# Install Nginx
[root@elk-node1 ~]# yum install -y nginx
# Install Java
[root@elk-node1 ~]# yum install -y java
After installing Java, verify:
[root@elk-node1 ~]# java -version
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)
Configuration and deployment (elk-node1 first)
1) Edit the configuration file
[root@elk-node1 ~]# mkdir -p /data/es-data
[root@elk-node1 ~]# vim /etc/elasticsearch/elasticsearch.yml                  [empty the file, then add the following]
cluster.name: huanqiu                    # cluster name (must be identical on all nodes of the cluster)
node.name: elk-node1                     # node name; best kept identical to the hostname
path.data: /data/es-data                 # data directory
path.logs: /var/log/elasticsearch/       # log directory
bootstrap.mlockall: true                 # lock memory so it never gets swapped out
network.host: 0.0.0.0                    # listen address
http.port: 9200                          # port
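For bootstrap.mlockall to actually take effect, the service also needs permission to lock memory; with the RPM package this is normally granted in /etc/sysconfig/elasticsearch (an extra step worth checking; it is not in the original notes):
MAX_LOCKED_MEMORY=unlimited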
2) Start and check
[root@elk-node1 ~]# chown -R elasticsearch.elasticsearch /data/
[root@elk-node1 ~]# systemctl start elasticsearch
[root@elk-node1 ~]# systemctl status elasticsearch
CGroup: /system.slice/elasticsearch.service
└─3005 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSI...
Note: as shown above, elasticsearch runs with a 256m minimum and 1g maximum heap.
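If you need a different heap size, the RPM's service script reads it from /etc/sysconfig/elasticsearch (assuming the default RPM layout; this one variable sets both the minimum and the maximum):
ES_HEAP_SIZE=1g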
[root@linux-node1 src]# netstat -antlp |egrep "9200|9300"
tcp6 0 0 :::9200 :::* LISTEN 3005/java
tcp6 0 0 :::9300 :::* LISTEN 3005/java
Then access it over the web (Chrome works best):
http://112.110.115.10:19200/
3) Check the data from the command line (from the host 112.110.115.10 or any other box that can reach the node):
[root@kvm-server src]# curl -i -XGET 'http://192.168.1.160:9200/_count?pretty' -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
Checking data this way on the command line feels clumsy.
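Before moving on to the plugins, note that Elasticsearch's _cat APIs give friendlier command-line output (optional; shown here for convenience):
[root@kvm-server src]# curl 'http://192.168.1.160:9200/_cat/health?v'
[root@kvm-server src]# curl 'http://192.168.1.160:9200/_cat/indices?v'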
4) Next, install plugins and browse the data through them  (both plugins below must be installed on elk-node1 and elk-node2)
4.1) Install the head plugin
----------------------------------------------------------------------------------------------------
a) Install method one
[root@elk-node1 src]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
b) Install method two
First download the head plugin into /usr/local/src
Download URL: https://github.com/mobz/elasticsearch-head
----------------------------------------------------------------
Baidu Pan mirror of the head plugin package: https://pan.baidu.com/s/1boBE0qj
extraction code: ifj7
----------------------------------------------------------------
[root@elk-node1 src]# unzip elasticsearch-head-master.zip
[root@elk-node1 src]# ls
elasticsearch-head-master elasticsearch-head-master.zip
Create a head directory under /usr/share/elasticsearch/plugins
Move everything unpacked from elasticsearch-head-master.zip into /usr/share/elasticsearch/plugins/head
Then restart the elasticsearch service.
[root@elk-node1 src]# cd /usr/share/elasticsearch/plugins/
[root@elk-node1 plugins]# mkdir head
[root@elk-node1 plugins]# ls
head
[root@elk-node1 plugins]# cd head
[root@elk-node1 head]# cp -r /usr/local/src/elasticsearch-head-master/* ./
[root@elk-node1 head]# pwd
/usr/share/elasticsearch/plugins/head
[root@elk-node1 head]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node1 head]# ll
total 40
-rw-r--r--. 1 elasticsearch elasticsearch 104 Sep 28 01:57 elasticsearch-head.sublime-project
-rw-r--r--. 1 elasticsearch elasticsearch 2171 Sep 28 01:57 Gruntfile.js
-rw-r--r--. 1 elasticsearch elasticsearch 3482 Sep 28 01:57 grunt_fileSets.js
-rw-r--r--. 1 elasticsearch elasticsearch 1085 Sep 28 01:57 index.html
-rw-r--r--. 1 elasticsearch elasticsearch 559 Sep 28 01:57 LICENCE
-rw-r--r--. 1 elasticsearch elasticsearch 795 Sep 28 01:57 package.json
-rw-r--r--. 1 elasticsearch elasticsearch 100 Sep 28 01:57 plugin-descriptor.properties
-rw-r--r--. 1 elasticsearch elasticsearch 5211 Sep 28 01:57 README.textile
drwxr-xr-x. 5 elasticsearch elasticsearch 4096 Sep 28 01:57 _site
drwxr-xr-x. 4 elasticsearch elasticsearch 29 Sep 28 01:57 src
drwxr-xr-x. 4 elasticsearch elasticsearch 66 Sep 28 01:57 test
[root@elk-node1 _site]# systemctl restart elasticsearch
-----------------------------------------------------------------------------------------------------
Plugin access (ideally finish elk-node2's configuration and plugin installs first, then do the access and data-insertion tests)
http://112.110.115.10:19200/_plugin/head/
First insert a test record:
Open the "复合查询" (compound query) tab, select POST, enter something like /index-demo/test, then type the document body underneath (don't forget the comma between lines);
Once the data is entered (e.g. the wangshibo / hello world content), click "验证JSON" (validate JSON) -> "提交请求" (submit request). On success, the right pane shows index, type, version and so on, with failed: 0.
Now read the test record back:
Under "复合查询", choose GET, append the id from the POST result to /index-demo/test/, and leave the body as an empty {}.
Click "验证JSON" -> "提交请求" and the right pane shows the data inserted above (wangshibo, hello world).
Open "基本查询" (basic query) to query the inserted data.
Open "数据浏览" (data browse) to see it as well.

Note: be sure to finish elk-node2's configuration beforehand (it appears below); otherwise, after inserting data the cluster health shows yellow, and it returns to green once elk-node2 is configured and has joined the cluster.
4.2) Install the kopf monitoring plugin
--------------------------------------------------------------------------------------------------------------------
a) Install method one
[root@elk-node1 src]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
b) Install method two
First download the kopf plugin into /usr/local/src
Download URL: https://github.com/lmenezes/elasticsearch-kopf
----------------------------------------------------------------
Baidu Pan mirror of the kopf plugin package: https://pan.baidu.com/s/1qYixSL2
extraction code: ya4t
----------------------------------------------------------------
[root@elk-node1 src]# unzip elasticsearch-kopf-master.zip
[root@elk-node1 src]# ls
elasticsearch-kopf-master elasticsearch-kopf-master.zip
Create a kopf directory under /usr/share/elasticsearch/plugins
Move everything unpacked from elasticsearch-kopf-master.zip into /usr/share/elasticsearch/plugins/kopf
Then restart the elasticsearch service.
[root@elk-node1 src]# cd /usr/share/elasticsearch/plugins/
[root@elk-node1 plugins]# mkdir kopf
[root@elk-node1 plugins]# cd kopf
[root@elk-node1 kopf]# cp -r /usr/local/src/elasticsearch-kopf-master/* ./
[root@elk-node1 kopf]# pwd
/usr/share/elasticsearch/plugins/kopf
[root@elk-node1 kopf]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node1 kopf]# ll
total 40
-rw-r--r--. 1 elasticsearch elasticsearch 237 Sep 28 16:28 CHANGELOG.md
drwxr-xr-x. 2 elasticsearch elasticsearch 22 Sep 28 16:28 dataset
drwxr-xr-x. 2 elasticsearch elasticsearch 73 Sep 28 16:28 docker
-rw-r--r--. 1 elasticsearch elasticsearch 4315 Sep 28 16:28 Gruntfile.js
drwxr-xr-x. 2 elasticsearch elasticsearch 4096 Sep 28 16:28 imgs
-rw-r--r--. 1 elasticsearch elasticsearch 1083 Sep 28 16:28 LICENSE
-rw-r--r--. 1 elasticsearch elasticsearch 1276 Sep 28 16:28 package.json
-rw-r--r--. 1 elasticsearch elasticsearch 102 Sep 28 16:28 plugin-descriptor.properties
-rw-r--r--. 1 elasticsearch elasticsearch 3165 Sep 28 16:28 README.md
drwxr-xr-x. 6 elasticsearch elasticsearch 4096 Sep 28 16:28 _site
drwxr-xr-x. 4 elasticsearch elasticsearch 27 Sep 28 16:28 src
drwxr-xr-x. 4 elasticsearch elasticsearch 4096 Sep 28 16:28 tests
[root@elk-node1 _site]# systemctl restart elasticsearch
-----------------------------------------------------------------------------------------------------
Access the plugin (again, have elk-node2's plugins installed first, or the cluster nodes will show a yellow warning state):
http://112.110.115.10:19200/_plugin/kopf/#!/cluster
*************************************************************************
Now configure node elk-node2  (install the same two plugins on elk-node2 as above)
Note: the installation and configuration are basically identical on the two nodes.
[root@elk-node2 src]# mkdir -p /data/es-data
[root@elk-node2 ~]# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: huanqiu
node.name: elk-node2
path.data: /data/es-data
path.logs: /var/log/elasticsearch/
bootstrap.mlockall: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.160", "192.168.1.161"]
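If the two nodes fail to see each other (common in virtualized networks, where multicast rarely works), add the same two discovery lines to elk-node1's /etc/elasticsearch/elasticsearch.yml as well and restart it (a precaution on my part; unicast discovery is the reliable option here):
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.160", "192.168.1.161"]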
# Fix ownership
[root@elk-node2 src]# chown -R elasticsearch.elasticsearch /data/
# Start the service
[root@elk-node2 src]# systemctl start elasticsearch
[root@elk-node2 src]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2016-09-28 16:49:41 CST; 1 weeks 3 days ago
Docs: http://www.elastic.co
Process: 17798 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 17800 (java)
CGroup: /system.slice/elasticsearch.service
└─17800 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFra...
Oct 09 13:42:22 elk-node2 elasticsearch[17800]: [2016-10-09 13:42:22,295][WARN ][transport ] [elk-node2] Transport res...943817]
Oct 09 13:42:23 elk-node2 elasticsearch[17800]: [2016-10-09 13:42:23,111][WARN ][transport ] [elk-node2] Transport res...943846]
................
................
# Check the ports
[root@elk-node2 src]# netstat -antlp|egrep "9200|9300"
tcp6 0 0 :::9200 :::* LISTEN 2928/java
tcp6 0 0 :::9300 :::* LISTEN 2928/java
tcp6 0 0 127.0.0.1:48200 127.0.0.1:9300 TIME_WAIT -
tcp6 0 0 ::1:41892 ::1:9300 TIME_WAIT -
*************************************************************************
Check elk-node2's data from the command line (again from the host 112.110.115.10 or another reachable box):
[root@kvm-server ~]# curl -i -XGET 'http://192.168.1.161:9200/_count?pretty' -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
  "count" : 1,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  }
}
Then access elk-node2 over the web:
http://112.110.115.10:19201/

Access the two plugins:
http://112.110.115.10:19201/_plugin/head/
http://112.110.115.10:19201/_plugin/kopf/#!/cluster
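To double-check that both nodes actually joined the cluster, the _cat/nodes API works too (a quick sanity check, not in the original notes):
[root@kvm-server ~]# curl 'http://192.168.1.160:9200/_cat/nodes?v'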
(2) Logstash installation and configuration (this goes on the client machines; here, install on both elk-node1 and elk-node2)
Base installation (the client runs logstash and writes what it collects into elasticsearch, where it can then be viewed)
1) Download and install the GPG key
[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
2) Add the yum repository
[root@hadoop-node1 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
3) Install logstash
[root@elk-node1 ~]# yum install -y logstash
4) Starting the services
(Note: the status listing below is of the elasticsearch service; logstash itself is launched directly from /opt/logstash/bin/logstash in the tests that follow.)
[root@elk-node1 ~]# systemctl start elasticsearch
[root@elk-node1 ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2016-11-07 18:33:28 CST; 3 days ago
Docs: http://www.elastic.co
Main PID: 8275 (java)
CGroup: /system.slice/elasticsearch.service
└─8275 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFrac...
..........
..........
Testing the data flow
1) Basic input/output
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Settings: Default filter workers: 1
Logstash startup completed
hello                                            # you type this
2016-11-11T06:41:07.690Z elk-node1 hello         # it prints this
wangshibo                                        # you type this
2016-11-11T06:41:10.608Z elk-node1 wangshibo     # it prints this
2) Verbose output with rubydebug
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
Settings: Default filter workers: 1
Logstash startup completed
hello                                            # you type this
{                                                # it prints the following
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2016-11-11T06:44:06.711Z",
          "host" => "elk-node1"
}
wangshibo                                        # you type this
{                                                # it prints the following
       "message" => "wangshibo",
      "@version" => "1",
    "@timestamp" => "2016-11-11T06:44:11.270Z",
          "host" => "elk-node1"
}
3) Write the content into elasticsearch
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.1.160:9200"]} }'
Settings: Default filter workers: 1
Logstash startup completed            # now type the test data below
123456
wangshibo
huanqiu
hahaha

The difference between using rubydebug and writing to elasticsearch lies purely in the output stage: the former uses a codec on stdout, the latter the elasticsearch output.
Data written to elasticsearch, viewed in the ES UI (screenshot in the original post):
Note:
After the master receives logs it shards part of the data onto the slave (a random portion), while master and slave each keep replicas on the other machine, so no data is lost.
Here the master put the collected data on its own shards 1 and 3 and the rest on the slave's shards 0, 2, and 4.
4) Write to elasticsearch and also echo a copy to standard output
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.1.160:9200"]} stdout{ codec => rubydebug}}'
Settings: Default filter workers: 1
Logstash startup completed
huanqiupc
{
       "message" => "huanqiupc",
      "@version" => "1",
    "@timestamp" => "2016-11-11T07:27:42.012Z",
          "host" => "elk-node1"
}
wangshiboqun
{
       "message" => "wangshiboqun",
      "@version" => "1",
    "@timestamp" => "2016-11-11T07:27:55.396Z",
          "host" => "elk-node1"
}

Text like this can be kept long-term, is simple to work with, and compresses well. Now look at it in the elasticsearch UI:
Logstash configuration files
1) Configuring logstash
A simple configuration:
[root@elk-node1 ~]# vim /etc/logstash/conf.d/01-logstash.conf
input { stdin { } }
output {
    elasticsearch { hosts => ["192.168.1.160:9200"]}
    stdout { codec => rubydebug }
}
Run it:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-logstash.conf
Settings: Default filter workers: 1
Logstash startup completed
beijing                                          # you type this
{                                                # it prints the following
       "message" => "beijing",
      "@version" => "1",
    "@timestamp" => "2016-11-11T07:41:48.401Z",
          "host" => "elk-node1"
}
--------------------------------------------------------------------------------------------------
References:
https://www.elastic.co/guide/en/logstash/current/configuration.html
https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
--------------------------------------------------------------------------------------------------
2) Collect system logs
[root@elk-node1 ~]# vim file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
}
output {
    elasticsearch {
       hosts => ["192.168.1.160:9200"]
       index => "system-%{+YYYY.MM.dd}"
    }
}
Run the collection below. The command keeps running in the foreground; that means logs are being monitored and collected, and if it is interrupted collection stops. Hence it goes into the background:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &
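A bare & still dies when the shell session ends, so for anything longer-lived something like nohup is more robust (one option among several; the log path here is my own arbitrary choice):
[root@elk-node1 ~]# nohup /opt/logstash/bin/logstash -f file.conf > /var/log/logstash-file.log 2>&1 &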
Log in to the elasticsearch UI and check this machine's system log entries (screenshots in the original post):
--------------------------------------------------------------------------------------------------
References:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
--------------------------------------------------------------------------------------------------
3) Collect Java logs, alongside the system-log collection above
[root@elk-node1 ~]# vim file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
}
input {
    file {
       path => "/var/log/elasticsearch/huanqiu.log"
       type => "es-error"
       start_position => "beginning"
    }
}
output {
    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}
Note:
If your log events already carry a type field, you cannot also set type in the conf file.
Run the collection:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &
Log in to the elasticsearch UI and check the data:
--------------------------------------------------------------------------------------------------
References:
https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html
--------------------------------------------------------------------------------------------------
---------------
A problem:
Each error line was collected as its own event, instead of one event per error / stack trace.
Below, lines are merged into events instead:
[root@elk-node1 ~]# vim multiline.conf
input {
    stdin {
       codec => multiline {
          pattern => "^\["
          negate => true
          what => "previous"
        }
    }
}
output {
    stdout {
      codec => "rubydebug"
     }
}
Run it:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f multiline.conf
Settings: Default filter workers: 1
Logstash startup completed
123
456
[123
{
    "@timestamp" => "2016-11-11T09:28:56.824Z",
       "message" => "123\n456",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "elk-node1"
}
123]
[456]
{
    "@timestamp" => "2016-11-11T09:29:09.043Z",
       "message" => "[123\n123]",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "elk-node1"
}
Until a [ is seen, nothing is collected; only when the next [ arrives does the buffered text count as one event and get emitted.
--------------------------------------------------------------------------------------------------
References
https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html
--------------------------------------------------------------------------------------------------
(3) Kibana installation and configuration
1) Install kibana:
[root@elk-node1 ~]# cd /usr/local/src
[root@elk-node1 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 src]# tar zxf kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 src]# mv kibana-4.3.1-linux-x64 /usr/local/
[root@elk-node1 src]# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana
2) Edit the config file:
[root@elk-node1 config]# pwd
/usr/local/kibana/config
[root@elk-node1 config]# cp kibana.yml kibana.yml.bak
[root@elk-node1 config]# vim kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.160:9200"
kibana.index: ".kibana"
Kibana keeps running in the foreground, so either dedicate a window to it or use screen.
Install screen and start kibana inside it:
[root@elk-node1 ~]# yum -y install screen
[root@elk-node1 ~]# screen                       # this opens a separate terminal session
[root@elk-node1 ~]# /usr/local/kibana/bin/kibana
log [18:23:19.867] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [18:23:19.911] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [18:23:19.941] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [18:23:19.953] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [18:23:19.963] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [18:23:19.995] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [18:23:20.004] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [18:23:20.010] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
Then press ctrl+a, then d, to detach; the kibana service started in that screen session keeps running in its foreground....
[root@elk-node1 ~]# screen -ls
There is a screen on:
15041.pts-0.elk-node1 (Detached)
1 Socket in /var/run/screen/S-root.
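Before testing through the host proxy, it does no harm to check that kibana is listening (optional sanity check):
[root@elk-node1 ~]# netstat -antlp | grep 5601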
3) Access kibana: http://112.110.115.10:15601/
When adding an index pattern: for the Java log collection set up above, enter es-error*; for the system log collection, system*; and so on (the collection items can be read off the logstash config).
Then click Discover at the top and inspect the data there.
To see the log text, click "Discover" -> "message", then click the "add" behind it.
Note:
Whatever attributes you want shown with the log entries on the right, click "add" behind those attributes on the left.
For example, with the message and path attributes added, the log entries on the right carry message and path.
Clicking the hidden << behind a log entry's attributes collapses the view back.
To add a new collection item, click Settings -> +Add New, e.g. for the system logs. Don't forget the trailing *.

To delete a collection item from kibana, just click its delete icon.

If kibana shows no log content and reports "No results found", the logs in question simply have no entries in the currently selected time window; use the clock at the top right to adjust the time range being viewed.
4) Collect nginx access logs
Edit the nginx config, adding the following to the http and server blocks of nginx.conf respectively:
##### in the http block
          log_format json '{"@timestamp":"$time_iso8601",'
                          '"@version":"1",'
                          '"client":"$remote_addr",'
                          '"url":"$uri",'
                          '"status":"$status",'
                          '"domain":"$host",'
                          '"host":"$server_addr",'
                          '"size":$body_bytes_sent,'
                          '"responsetime":$request_time,'
                          '"referer": "$http_referer",'
                          '"ua": "$http_user_agent"'
                          '}';
##### in the server block
            access_log /var/log/nginx/access_json.log json;
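After editing, it's worth validating the syntax before (re)starting nginx (standard practice; this step is not in the original notes):
[root@elk-node1 ~]# nginx -t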
Screenshot in the original post.
Start the nginx service:
[root@elk-node1 ~]# systemctl start nginx
[root@elk-node1 ~]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-11-11 19:06:55 CST; 3s ago
  Process: 15119 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 15116 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 15114 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 15122 (nginx)
   CGroup: /system.slice/nginx.service
           ├─15122 nginx: master process /usr/sbin/nginx
           ├─15123 nginx: worker process
           └─15124 nginx: worker process
Nov 11 19:06:54 elk-node1 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 11 19:06:55 elk-node1 nginx[15116]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Nov 11 19:06:55 elk-node1 nginx[15116]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Nov 11 19:06:55 elk-node1 systemd[1]: Started The nginx HTTP and reverse proxy server.
Write the collection file
This time collect in JSON form:
[root@elk-node1 ~]# vim json.conf
input {
   file {
      path => "/var/log/nginx/access_json.log"
      codec => "json"
   }
}
output {
   stdout {
      codec => "rubydebug"
   }
}
Start the collector:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f json.conf        # or append & to run it in the background
Hit the nginx page (on elk-node1's host run: curl http://192.168.1.160) and the following appears:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f json.conf
Settings: Default filter workers: 1
Logstash startup completed
{
      "@timestamp" => "2016-11-11T11:10:53.000Z",
        "@version" => "1",
          "client" => "192.168.1.7",
             "url" => "/index.html",
          "status" => "200",
          "domain" => "192.168.1.160",
            "host" => "192.168.1.160",
            "size" => 3700,
    "responsetime" => 0.0,
         "referer" => "-",
              "ua" => "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2",
            "path" => "/var/log/nginx/access_json.log"
}
Note:
The json.conf above only prints the nginx logs; nothing is fed into elasticsearch yet, so at this point no nginx logs show up in the elasticsearch UI.
To get the nginx logs into elasticsearch, merge them into the master file file.conf, as below (from here on, just keep appending new log sources to this one master file):
[root@elk-node1 ~]# cat file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
    file {
       path => "/var/log/elasticsearch/huanqiu.log"
       type => "es-error"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => "previous"
       }
    }
    file {
       path => "/var/log/nginx/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
}
output {
    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
}
You can add --configtest to check the config for syntax errors or misconfiguration first; this matters!!
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf --configtest
Configuration OK
Then run the logstash command again (the earlier run is still in the background, so strictly you don't need to; alternatively kill the old one first, then relaunch in the background), and hit the nginx page again to test:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &
Check the elasticsearch UI:
Then integrate the nginx logs into the kibana UI, as in the screenshots in the original post.
5) Collect system logs via syslog
Write the collection file and run it.
[root@elk-node1 ~]# cat syslog.conf
input {
    syslog {
        type => "system-syslog"
        host => "192.168.1.160"
        port => "514"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
Run the collection file:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f syslog.conf
Open a new window and check that the service is listening:
[root@elk-node1 ~]# netstat -ntlp|grep 514
tcp6 0 0 192.168.1.160:514 :::* LISTEN 17842/java
[root@elk-node1 ~]# vim /etc/rsyslog.conf
#*.* @@remote-host:514                          [add the line below under this one]
*.* @@192.168.1.160:514                         # @@ forwards over TCP; a single @ would use UDP
[root@elk-node1 ~]# systemctl restart rsyslog
Back in the original window (the collector's terminal), data appears:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f syslog.conf
Settings: Default filter workers: 1
Logstash startup completed
{
           "message" => "Stopping System Logging Service...\n",
          "@version" => "1",
        "@timestamp" => "2016-11-13T10:35:30.000Z",
              "type" => "system-syslog",
              "host" => "192.168.1.160",
          "priority" => 30,
         "timestamp" => "Nov 13 18:35:30",
         "logsource" => "elk-node1",
           "program" => "systemd",
          "severity" => 6,
          "facility" => 3,
    "facility_label" => "system",
    "severity_label" => "Informational"
}
........
........
Again add it to the master file file.conf:
[root@elk-node1 ~]# cat file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
    file {
       path => "/var/log/elasticsearch/huanqiu.log"
       type => "es-error"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => "previous"
       }
    }
    file {
       path => "/var/log/nginx/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
    syslog {
        type => "system-syslog"
        host => "192.168.1.160"
        port => "514"
    }
}
output {
    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}
Run the master file (first --configtest it, then kill the previously backgrounded file.conf run, then relaunch):
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf --configtest
Configuration OK
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &
Test:
Append entries to the logs and watch elasticsearch and kibana change:
[root@elk-node1 ~]# logger "hehehehehehe1"
[root@elk-node1 ~]# logger "hehehehehehe2"
[root@elk-node1 ~]# logger "hehehehehehe3"
[root@elk-node1 ~]# logger "hehehehehehe4"
[root@elk-node1 ~]# logger "hehehehehehe5"
Add them to the kibana UI (screenshots in the original post).
6) Collect TCP logs
Write the collection file and run it. (If needed, merge the collector below into the master file file.conf above, so the data also flows into the elasticsearch and kibana UIs.)
[root@elk-node1 ~]# cat tcp.conf
input {
    tcp {
        host => "192.168.1.160"
        port => "6666"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f tcp.conf
Open another window. Test one (install nc first: yum install -y nc):
[root@elk-node1 ~]# nc 192.168.1.160 6666 </etc/resolv.conf
Back in the original window (the collector's terminal), data appears:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f tcp.conf
Settings: Default filter workers: 1
Logstash startup completed
{
       "message" => "",
      "@version" => "1",
    "@timestamp" => "2016-11-13T11:01:15.280Z",
          "host" => "192.168.1.160",
          "port" => 49743
}
Test two:
[root@elk-node1 ~]# echo "hehe" | nc 192.168.1.160 6666
[root@elk-node1 ~]# echo "hehe" > /dev/tcp/192.168.1.160/6666        # /dev/tcp/... is bash's built-in TCP pseudo-device
Back at the collector's terminal, the entries show up:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f tcp.conf
Settings: Default filter workers: 1
Logstash startup completed
.......
{
       "message" => "hehe",
      "@version" => "1",
    "@timestamp" => "2016-11-13T11:39:58.263Z",
          "host" => "192.168.1.160",
          "port" => 53432
}
{
       "message" => "hehe",
      "@version" => "1",
    "@timestamp" => "2016-11-13T11:40:13.458Z",
          "host" => "192.168.1.160",
          "port" => 53457
}
7) Using filters (grok)
Write the file:
[root@elk-node1 ~]# cat grok.conf
input {
    stdin{}
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
output {
    stdout{
        codec => "rubydebug"
    }
}
Run and test:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f grok.conf
Settings: Default filter workers: 1
Logstash startup completed
55.3.244.1 GET /index.html 15824 0.043                   # type this; it is parsed into the fields below automatically
{
       "message" => "55.3.244.1 GET /index.html 15824 0.043",
      "@version" => "1",
    "@timestamp" => "2016-11-13T11:45:47.882Z",
          "host" => "elk-node1",
        "client" => "55.3.244.1",
        "method" => "GET",
       "request" => "/index.html",
         "bytes" => "15824",
      "duration" => "0.043"
}
The variables used above are all predefined by logstash in its pattern files:
[root@elk-node1 ~]# cd /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns/
[root@elk-node1 patterns]# ls
aws     bro   firewalls      haproxy  junos         mcollective           mongodb  postgresql  redis
bacula  exim  grok-patterns  java     linux-syslog  mcollective-patterns  nagios   rails       ruby
[root@elk-node1 patterns]# cat grok-patterns        # IP, WORD, URIPATHPARAM, NUMBER etc. are defined in this file
8) MySQL slow-query log
Collection file:
[root@elk-node1 ~]# cat mysql-slow.conf
input {
    file {
        path => "/root/slow.log"
        type => "mysql-slowlog"
        codec => multiline {
            pattern => "^# User@Host"
            negate => true
            what => "previous"
        }
    }
}
filter {
    # drop sleep events
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => [] # prevent default _grokparsefailure tag on real records
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    grok {
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id: %{NUMBER:row_id:int}\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n#\s*" ]
    }
    date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
    }
}
output {
    stdout {
       codec => "rubydebug"
    }
}
Run and test:
/root/slow.log above is a slow log you upload yourself; once data is appended and saved, you will see:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f mysql-slow.conf
Settings: Default filter workers: 1
Logstash startup completed
{
    "@timestamp" => "2016-11-14T06:53:54.100Z",
       "message" => "# Time: 161114 11:05:18",
      "@version" => "1",
          "path" => "/root/slow.log",
          "host" => "elk-node1",
          "type" => "mysql-slowlog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => "2016-11-14T06:53:54.105Z",
       "message" => "# User@Host: test[test] @  [124.65.197.154]\n# Query_time: 1.725889  Lock_time: 0.000430 Rows_sent: 0  Rows_examined: 0\nuse test_zh_o2o_db;\nSET timestamp=1479092718;\nSELECT trigger_name, event_manipulation, event_object_table, action_statement, action_timing, DEFINER FROM information_schema.triggers WHERE BINARY event_object_schema='test_zh_o2o_db' AND BINARY event_object_table='customer';\n# Time: 161114 12:10:30",
      "@version" => "1",
          "tags" => [
        [0] "multiline",
        [1] "_grokparsefailure"
    ],
          "path" => "/root/slow.log",
          "host" => "elk-node1",
          "type" => "mysql-slowlog"
}
.........
.........
----------------------------------------------------------------------------------------------------------------------------------
Next, a problem you will eventually hit:
Once elasticsearch itself has trouble, log collection and processing stop!
What to do then?
Solution:
Insert a middleware between the clients and elasticsearch as a buffer: collected log content is written to the middleware first, and then fed from the middleware into elasticsearch.
That neatly solves the problem above.
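The resulting pipeline looks like this (shipper and indexer are simply two logstash processes with different configs):
log files -> logstash (shipper) -> redis list -> logstash (indexer) -> elasticsearch -> kibana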
(4) Using redis as middleware in ELK to buffer collected log content
1) Configure and start redis
[root@elk-node1 ~]# vim /etc/redis.conf               # change the following two lines
daemonize yes
bind 192.168.1.160
[root@elk-node1 ~]# systemctl start redis
[root@elk-node1 ~]# lsof -i:6379
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 19474 redis 4u IPv4 1344465 0t0 TCP elk-node1:6379 (LISTEN)
[root@elk-node1 ~]# redis-cli -h 192.168.1.160
192.168.1.160:6379> info
# Server
redis_version:2.8.19
.......
2) Write the file that collects data from the client side
[root@elk-node1 ~]# vim redis-out.conf
input {
   stdin {}
}
output {
   redis {
      host => "192.168.1.160"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "demo"
   }
}
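A note on those parameters (per the logstash redis output plugin): data_type => "list" makes logstash RPUSH each event onto the named key, and db => "6" selects redis database 6, which is why the redis-cli checks below run select 6 first.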
3) Run the collector and type the data "hello redis"
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf
Settings: Default filter workers: 1
Logstash startup completed            # now type the data below
hello redis
4) Check the data in redis
[root@elk-node1 ~]# redis-cli -h 192.168.1.160
192.168.1.160:6379> info
# Server
.......
.......
# Keyspace
db6:keys=1,expires=0,avg_ttl=0                   # the last line shows db6
192.168.1.160:6379> select 6
OK
192.168.1.160:6379[6]> keys *
1) "demo"
192.168.1.160:6379[6]> LINDEX demo -1            # LINDEX ... -1 reads the last element of the list
"{\"message\":\"hello redis\",\"@version\":\"1\",\"@timestamp\":\"2016-11-14T08:04:25.981Z\",\"host\":\"elk-node1\"}"
5) Type in some more arbitrary data
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf
Settings: Default filter workers: 1
Logstash startup completed
hello redis
123456
asdf
ert
wang
shi
bo
guohuihui
as
we
r
g
asdfjkdfsak
5423wer
34rt3
6y
7uj
u
io9
sdjfhsdk890
huanqiu
huanqiuchain
hqsb
asda
6) Check in redis
Check the list length in redis:
[root@elk-node1 ~]# redis-cli -h 192.168.1.160
192.168.1.160:6379> info
# Server
redis_version:2.8.19
.......
.......
# Keyspace
db6:keys=1,expires=0,avg_ttl=0       # shows db6
192.168.1.160:6379> select 6
OK
192.168.1.160:6379[6]> keys *
1) "demo"
192.168.1.160:6379[6]> LLEN demo
(integer) 24
7) Write the redis content into ES
[root@elk-node1 ~]# vim redis-in.conf
input {
    redis {
      host => "192.168.1.160"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "demo"
   }
}
output {
    elasticsearch {
      hosts => ["192.168.1.160:9200"]
      index => "redis-in-%{+YYYY.MM.dd}"
    }
}
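Symmetrically, the redis input with data_type => "list" pops events off the key (a blocking LPOP), so a running indexer drains the list down to zero; that is exactly what the LLEN check below confirms.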
Run it:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f redis-in.conf --configtest
Configuration OK
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f redis-in.conf &
Check in redis; the data has already been read out:
192.168.1.160:6379[6]> LLEN demo
(integer) 0
Check the elasticsearch UI (screenshots in the original post):
8) Finally, write all the collected logs into redis. Define a new master file with the redis buffer added, shipper.conf. (You can stop the previously running master file.conf first.)
[root@elk-node1 ~]# vim shipper.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
    file {
       path => "/var/log/elasticsearch/huanqiu.log"
       type => "es-error"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => "previous"
       }
    }
    file {
       path => "/var/log/nginx/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
    syslog {
        type => "system-syslog"
        host => "192.168.1.160"
        port => "514"
    }
}
output {
    if [type] == "system"{
        redis {
            host => "192.168.1.160"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "system"
        }
    }
    if [type] == "es-error"{
        redis {
            host => "192.168.1.160"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "demo"
        }
    }
    if [type] == "nginx-log"{
        redis {
            host => "192.168.1.160"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "nginx-log"
        }
    }
    if [type] == "system-syslog"{
        redis {
            host => "192.168.1.160"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "system-syslog"
        }
    }
}
Run it (make sure the earlier file.conf run has been stopped first!):
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f shipper.conf --configtest
Configuration OK
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f shipper.conf
Settings: Default filter workers: 1
Logstash startup completed
Check in redis:
[root@elk-node1 ~]# redis-cli -h 192.168.1.160
192.168.1.160:6379> info
# Server
redis_version:2.8.19
.......
.......
# Keyspace
db6:keys=1,expires=0,avg_ttl=0                   # shows db6
192.168.1.160:6379> select 6
OK
192.168.1.160:6379[6]> keys *
1) "demo"
2) "system"
192.168.1.160:6379[6]> keys *
1) "nginx-log"
2) "demo"
3) "system"
(note: the es-error events are the ones under the "demo" key, per the key name set in shipper.conf)
Open another window and add some log entries:
[root@elk-node1 ~]# logger "12325423"
[root@elk-node1 ~]# logger "12325423"
[root@elk-node1 ~]# logger "12325423"
[root@elk-node1 ~]# logger "12325423"
[root@elk-node1 ~]# logger "12325423"
[root@elk-node1 ~]# logger "12325423"
More log entries arrive, and another key appears:
192.168.1.160:6379[6]> keys *
1) "system-syslog"
2) "nginx-log"
3) "demo"
4) "system"
In fact, any ES-side machine can read the data from redis into ES.
Next, on elk-node2, read the data from redis into ES:
Write the file:
[root@elk-node2 ~]# cat file.conf
input {
    redis {
        type => "system"
        host => "192.168.1.160"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system"
    }
    redis {
        type => "es-error"
        host => "192.168.1.160"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
    redis {
        type => "nginx-log"
        host => "192.168.1.160"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "nginx-log"
    }
    redis {
        type => "system-syslog"
        host => "192.168.1.160"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system-syslog"
    }
}
output {
    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}
Run it:
[root@elk-node2 ~]# /opt/logstash/bin/logstash -f file.conf --configtest
Configuration OK
[root@elk-node2 ~]# /opt/logstash/bin/logstash -f file.conf &
Check redis: the data has been read out into elasticsearch.
192.168.1.160:6379[6]> keys *
(empty list or set)
Meanwhile, checking the elasticsearch and kibana UIs shows the logs being collected normally.
To generate nginx traffic for the test you can run (ab is part of the httpd-tools package):
[root@elk-node1 ~]# ab -n10000 -c1 http://192.168.1.160/
You can also run multiple redis instances feeding ES, depending on your actual situation.
Reposted from https://www.cnblogs.com/kevingrace/p/5919021.html
(via https://www.cnblogs.com/technologykai/articles/8508818.html)
總結
以上是生活随笔為你收集整理的发现一篇超详细的ELK搭建的全部內容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: 多个线程对hashmap进行put操作的
- 下一篇: JMS编程模型