ELK + Redis: Building an nginx Log Analysis Platform
Published 2015-08-19 | Filed under Linux/Unix
ELK Overview
ELK Stack is Elasticsearch + Logstash + Kibana. Log monitoring and analysis play an important role in keeping a service running reliably. Take nginx as an example: it writes the status of every request to its log files, so those files can be read and analyzed. Redis's list structure works well as a queue, buffering the log data that Logstash ships; Elasticsearch then handles analysis and querying.
This post builds a distributed log collection and analysis system. Logstash plays two roles: agent and indexer. An agent runs on each web machine and continuously tails the nginx log file; whenever it reads a new log entry, it ships it to a Redis queue on the network. Several Logstash indexers receive the unprocessed entries from that queue, parse them, and store the results in Elasticsearch for search and analysis. Finally, a single Kibana instance provides the web UI for the logs [3].
I'm testing with two machines: hadoop-master runs nginx and a logstash agent (installed from the source tarball), while hadoop-slave runs a logstash agent, elasticsearch, redis, and nginx.
The nginx logs of both machines are analyzed together; see the documentation for configuration details. The following records the setup of ELK + Redis for log collection and analysis, drawing on the official docs and earlier write-ups.
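The agent → Redis list → indexer flow described above can be sketched in a few lines of Python. This is a minimal stand-in, not the real wiring: a plain in-process deque plays the role of the Redis list `logstash:redis` (the real agent does an LPUSH, the indexer a blocking pop), and the function names here are hypothetical.

```python
from collections import deque

# A plain in-process deque stands in for the Redis list "logstash:redis";
# in the real setup the agent issues an LPUSH and an indexer a BLPOP/RPOP.
queue = deque()

def agent_ship(line):
    """Shipper side: push a raw nginx log line onto the queue (like LPUSH)."""
    queue.appendleft(line)

def indexer_consume():
    """Indexer side: pop the oldest line for parsing/indexing (like RPOP)."""
    return queue.pop() if queue else None

agent_ship('192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" 304 0')
agent_ship('192.168.186.1 - - [18/Aug/2015:23:00:21 -0700] "GET / HTTP/1.1" 304 0')
first = indexer_consume()   # FIFO: the first line shipped comes out first
```

Because the list preserves arrival order, multiple indexers can drain the same key concurrently without losing entries, which is exactly why the list type makes a serviceable broker here.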
System Environment
Hosts
hadoop-master    192.168.186.128    # logstash agent, nginx
hadoop-slave     192.168.186.129    # logstash agent + indexer, elasticsearch, redis, nginx, kibana
System information
[root@hadoop-slave ~]# java -version   # Elasticsearch is written in Java and needs a JDK; JDK 1.8 is installed here
java version "1.8.0_20"
Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
[root@hadoop-slave ~]# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m
Redis Installation
[root@hadoop-slave ~]# wget https://github.com/antirez/redis/archive/2.8.20.tar.gz
[root@hadoop-slave ~]# tar -zxf 2.8.20.tar.gz
[root@hadoop-slave ~]# mv redis-2.8.20/ /usr/local/src/
[root@hadoop-slave src]# cd redis-2.8.20/
[root@hadoop-slave src]# make
Once make finishes, the executables (redis-server, redis-cli, and so on) are generated in the src subdirectory.
Create a directory for redis under /usr/local/, with subdirectories for config files, runtime files, and the database:
[root@hadoop-slave local]# mkdir /usr/local/redis/{conf,run,db} -pv
[root@hadoop-slave local]# cd /usr/local/src/redis-2.8.20/
[root@hadoop-slave redis-2.8.20]# cp redis.conf /usr/local/redis/conf/
[root@hadoop-slave redis-2.8.20]# cd src/
[root@hadoop-slave src]# cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server mkreleasehdr.sh /usr/local/redis/
That completes the Redis installation.
Now start it and check whether the port is listening:
[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &   # the & runs it in the background
[root@hadoop-slave redis]# netstat -antulp | grep 6379
tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      72669/redis-server
tcp        0      0 :::6379                     :::*                        LISTEN      72669/redis-server
It starts without problems. OK!
Elasticsearch Installation
Elasticsearch serves HTTP on port 9200 by default and uses TCP port 9300 for inter-node traffic; make sure these TCP ports are open.
Download the latest tarball from the official site:
Search & Analyze in Real Time: Elasticsearch is a distributed, open source search and analytics engine, designed for horizontal scalability, reliability, and easy management.
[root@hadoop-slave ~]# wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.tar.gz
[root@hadoop-slave ~]# mkdir /usr/local/elk
[root@hadoop-slave ~]# tar zxf elasticsearch-1.7.1.tar.gz -C /usr/local/elk/
[root@hadoop-slave bin]# ln -s /usr/local/elk/elasticsearch-1.7.1/bin/elasticsearch /usr/bin
[root@hadoop-slave bin]# elasticsearch start
[2015-08-17 20:49:21,566][INFO ][node???????????????????? ] [Eliminator] version[1.7.1], pid[5828], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-17 20:49:21,585][INFO ][node???????????????????? ] [Eliminator] initializing ...
[2015-08-17 20:49:21,870][INFO ][plugins????????????????? ] [Eliminator] loaded [], sites []
[2015-08-17 20:49:22,101][INFO ][env????????????????????? ] [Eliminator] using [1] data paths, mounts [[/ (/dev/sda2)]], net usable_space [27.9gb], net total_space [37.1gb], types [ext4]
[2015-08-17 20:50:08,097][INFO ][node???????????????????? ] [Eliminator] initialized
[2015-08-17 20:50:08,099][INFO ][node???????????????????? ] [Eliminator] starting ...
[2015-08-17 20:50:08,593][INFO ][transport??????????????? ] [Eliminator] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.186.129:9300]}
[2015-08-17 20:50:08,764][INFO ][discovery??????????????? ] [Eliminator] elasticsearch/XbpOYtsYQbO-6kwawxd7nQ
[2015-08-17 20:50:12,648][INFO ][cluster.service????????? ] [Eliminator] new_master [Eliminator][XbpOYtsYQbO-6kwawxd7nQ][hadoop-slave][inet[/192.168.186.129:9300]], reason: zen-disco-join (elected_as_master)
[2015-08-17 20:50:12,683][INFO ][http???????????????????? ] [Eliminator] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.186.129:9200]}
[2015-08-17 20:50:12,683][INFO ][node???????????????????? ] [Eliminator] started
[2015-08-17 20:50:12,771][INFO ][gateway????????????????? ] [Eliminator] recovered [0] indices into cluster_state
# use the -d flag to run it as a daemon: elasticsearch start -d
Test
An HTTP 200 status in the response means everything is OK:
[root@hadoop-slave ~]# elasticsearch start -d
[root@hadoop-slave ~]# curl -X GET http://localhost:9200
{
? "status" : 200,
? "name" : "Wasp",
? "cluster_name" : "elasticsearch",
? "version" : {
??? "number" : "1.7.1",
??? "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
??? "build_timestamp" : "2015-07-29T09:54:16Z",
??? "build_snapshot" : false,
??? "lucene_version" : "4.10.4"
? },
? "tagline" : "You Know, for Search"
}
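Assuming the JSON body above is returned verbatim, the check can also be done programmatically. A minimal sketch (the `body` string is copied from the response above, abbreviated to the fields used):

```python
import json

# Body returned by `curl http://localhost:9200`, trimmed to the fields checked below.
body = '''{
  "status" : 200,
  "name" : "Wasp",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "1.7.1", "lucene_version" : "4.10.4" },
  "tagline" : "You Know, for Search"
}'''

info = json.loads(body)
# The node is considered healthy if it answers with status 200
# and reports the expected major version.
ok = info["status"] == 200 and info["version"]["number"].startswith("1.7")
```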
Logstash Installation
Logstash is a flexible, open source, data collection, enrichment, and transport pipeline designed to efficiently process a growing list of log, event, and unstructured data sources for distribution into a variety of outputs, including Elasticsearch.
Logstash's default externally facing port is 9292; if a firewall is enabled, open that TCP port.
Install from source
On 192.168.186.128, install from the source tarball, extracting to /usr/local/:
[root@hadoop-master ~]# wget https://download.elastic.co/logstash/logstash/logstash-1.5.3.tar.gz
[root@hadoop-master ~]# tar -zxf logstash-1.5.3.tar.gz -C /usr/local/
[root@hadoop-master logstash-1.5.3]# mkdir /usr/local/logstash-1.5.3/etc
Install via yum
192.168.186.129 uses yum:
[root@hadoop-slave ~]# rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch   # download the public key
[root@hadoop-slave ~]# vi /etc/yum.repos.d/CentOS-Base.repo
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
[root@hadoop-slave ~]# yum install logstash    # yum installs Logstash under /opt
Test
[root@hadoop-slave ~]# cd /opt/logstash/
[root@hadoop-slave logstash]# ls
bin? CHANGELOG.md? CONTRIBUTORS? Gemfile? Gemfile.jruby-1.9.lock? lib? LICENSE? NOTICE.TXT? vendor
[root@hadoop-slave logstash]# bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
The terminal now waits for input. Type Hello World, press Enter, and see what comes back!
[root@hadoop-slave logstash]# vi logstash-simple.conf     # the elasticsearch host is this machine
input { stdin { } }
output {
? elasticsearch { host => localhost }
? stdout { codec => rubydebug }
}
[root@hadoop-slave logstash]# ./bin/logstash -f logstash-simple.conf   # can be run in the background
……
{
       "message" => "",
      "@version" => "1",
    "@timestamp" => "2015-08-18T06:26:19.348Z",
          "host" => "hadoop-slave"
}
……
This shows that Elasticsearch has received the data from Logstash; communication is OK!
You can also verify it this way:
[root@hadoop-slave etc]# curl 'http://192.168.186.129:9200/_search?pretty'
# a pile of documents in the response means it's OK!
Logstash configuration
Logstash concepts
From the documentation:
The Logstash community conventionally uses shipper, broker, and indexer to describe the roles of the different processes in the data flow, as shown in the figure below:
Redis is the usual choice for the broker. That said, many deployments don't use Logstash as the shipper (that is, the agent), or don't use Elasticsearch as the data store (so there is no indexer), which means these concepts aren't strictly necessary. Just learn how to use and configure the Logstash process, then place it wherever fits best in your log management architecture.
Setting the nginx log format
nginx is installed on both machines, so edit nginx.conf on each to set the log format.
[root@hadoop-master ~]# cd /usr/local/nginx/conf/
[root@hadoop-master conf]# vi nginx.conf     # uncomment and set log_format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/host.access.log  main;   # access log; each request is appended to this file
[root@hadoop-master conf]# nginx -s reload
Do the same on hadoop-slave.
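To see what the `main` format above produces, here is a short Python sketch that parses one sample line from this post's access log with a regex mirroring the format's fields. The regex is my own approximation for illustration, not something nginx or Logstash ships:

```python
import re

# One line in the `main` log_format defined above (user agent shortened).
LINE = ('192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" '
        '304 0 "-" "Mozilla/5.0" "-"')

# Hypothetical regex with one named group per field of the `main` format.
PATTERN = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'"(?P<http_x_forwarded_for>[^"]*)"'
)

fields = PATTERN.match(LINE).groupdict()   # one dict entry per log_format variable
```

This is conceptually what the grok filter in the indexer does later, just expressed as a raw regex instead of grok's named patterns.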
Start the logstash agent
The logstash agent collects log entries and ships them to the Redis queue.
[root@hadoop-master ~]# cd /usr/local/logstash-1.5.3/
[root@hadoop-master logstash-1.5.3]# mkdir etc
[root@hadoop-master etc]# vi logstash_agent.conf
input {
        file {
                type => "nginx access log"
                path => ["/usr/local/nginx/logs/host.access.log"]
        }
}
output {
        redis {
                host => "192.168.186.129"   # redis server
                data_type => "list"
                key => "logstash:redis"
        }
}
[root@hadoop-master etc]# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &
# configure logstash_agent on the other machine the same way
Start the logstash indexer
[root@hadoop-slave conf]# cd /opt/logstash/
[root@hadoop-slave logstash]# cd etc/
[root@hadoop-slave etc]# vi logstash_indexer.conf
input {
        redis {
                host => "192.168.186.129"
                data_type => "list"
                key => "logstash:redis"
                type => "redis-input"
        }
}
filter {
        # grok no longer accepts a `type` option in Logstash 1.5, so match on the
        # event type set by the redis input instead
        if [type] == "redis-input" {
                grok {
                        match => [
                                "message", "%{IPORHOST:http_host} %{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:http_status_code} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{NUMBER:time_duration:float} %{NUMBER:time_backend_response:float}",
                                "message", "%{IPORHOST:http_host} %{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:http_status_code} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{NUMBER:time_duration:float}"
                        ]
                }
        }
}
output {
        elasticsearch {
                embedded => false
                protocol => "http"
                host => "localhost"
                port => "9200"
        }
}
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_indexer.conf &
Configuration complete!
Kibana Installation
Explore and Visualize Your Data: Kibana is an open source data visualization platform that allows you to interact with your data through stunning, powerful graphics that can be combined into custom dashboards that help you share insights from your data far and wide.
[root@hadoop-slave ~]# wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
[root@hadoop-slave elk]# tar -zxf kibana-4.1.1-linux-x64.tar.gz
[root@hadoop-slave elk]# mv kibana-4.1.1-linux-x64 /usr/local/elk/kibana
[root@hadoop-slave bin]# pwd
/usr/local/elk/kibana/bin
[root@hadoop-slave bin]# ./kibana? &
Open http://192.168.186.129:5601/
For remote access, open TCP port 5601 in iptables.
ELK + Redis test
If the components aren't running yet, start them with:
[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &   # start redis
[root@hadoop-slave ~]# elasticsearch start -d   # start elasticsearch
[root@hadoop-master etc]# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_indexer.conf &
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_agent.conf &
[root@hadoop-slave bin]# ./kibana &   # start kibana
Open http://192.168.186.129/ and http://192.168.186.128/.
Each page refresh generates one access record, appended to host.access.log.
[root@hadoop-master logs]# cat host.access.log
……
192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:00:21 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:06:38 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:15:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:16:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
[root@hadoop-master logs]#
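Kibana's aggregations over these events can be approximated locally. As a rough sketch, here is a count of the sample requests above by status code (the lines are truncated and the user agents shortened to keep the example compact):

```python
from collections import Counter
import re

# The five requests recorded in host.access.log above, trimmed to the
# fields needed for this aggregation.
lines = [
    '192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0" "-"',
    '192.168.186.1 - - [18/Aug/2015:23:00:21 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0" "-"',
    '192.168.186.1 - - [18/Aug/2015:23:06:38 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0" "-"',
    '192.168.186.1 - - [18/Aug/2015:23:15:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0" "-"',
    '192.168.186.1 - - [18/Aug/2015:23:16:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0" "-"',
]

# The status code is the three digits right after the closing quote of the request.
status_re = re.compile(r'" (\d{3}) ')
by_status = Counter(status_re.search(line).group(1) for line in lines)
```

In the real pipeline this grouping is done by an Elasticsearch terms aggregation over the `http_status_code` field that the grok filter extracts; the sketch just shows the same idea in miniature.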
Open the Kibana page to see the nginx access logs from both machines. The displayed times are off because the VM's timezone differs from the physical host's; this doesn't affect anything.
At this point the following interface appears.
Reposted from: https://www.cnblogs.com/zhg-linux/p/6120736.html