openresty+consul dynamic configuration updates (service change discovery)
Recently I have been building a data collection platform that uses openresty to push data into kafka. Whether it is the kafka broker list or any other configuration item, I want changes to take effect dynamically, without reloading openresty. This matters especially for API-call authentication: the configuration is tiny, so fetching it from redis or mysql on every request is unnecessary, and keeping it as a lua configuration table is a clear performance win. As for the kafka brokers, although the producer does notice newly added brokers (http://blog.csdn.net/liuzhenfeng/article/details/50688842), things get awkward if the default broker written into the configuration becomes unavailable: the service may get restarted and the configuration file still has to be edited by hand, unless a new usable broker can be rediscovered through service registration.
So in the end it comes back to service registration and discovery, which can be implemented with consul, zookeeper, or etcd. Below is my implementation of dynamic configuration updates on openresty + consul.
The principle is simple: openresty uses long polling plus a version number to pick up changes in consul's kv store promptly. consul supports blocking queries through a wait time and a modify index: if consul sees that a kv entry has not changed, it hangs the request for up to 5 minutes (the default wait), and any change within that window is returned immediately. By comparing version numbers we can tell whether the request merely timed out or the kv was actually modified. The principle is the same as nginx upsync, described in the previous post (http://blog.csdn.net/yueguanghaidao/article/details/52801043).
consul's nodes and services support blocking queries as well. Services are generally the better choice, since they come with health checks. The blocking API works just like kv: simply add an index parameter:
curl "172.20.20.10:8500/v1/catalog/nodes?index=4"
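A blocking query against a service's health endpoint looks much the same; for example (the service name kafka here is illustrative, and passing filters the result to healthy instances; consul also returns the index to use for the next poll in the X-Consul-Index response header):

curl "172.20.20.10:8500/v1/health/service/kafka?passing&index=4"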
The code is as follows:
local json = require "cjson"
local http = require "resty.http"

-- consul watcher
--
-- set a key:
-- curl -X PUT http://172.20.20.10:8500/v1/kv/broker/kafka/172.20.20.11:8080
--
-- list all keys under a prefix:
-- curl http://172.20.20.10:8500/v1/kv/broker/kafka/?recurse
-- [{"LockIndex":0,"Key":"broker/kafka/172.20.20.11:8080","Flags":0,"Value":null,"CreateIndex":34610,"ModifyIndex":34610}]
--
-- list all keys whose index is greater than version 34610 (with multiple keys, take the largest ModifyIndex);
-- if nothing has changed, consul blocks the request for up to 5 minutes:
-- curl "http://172.20.20.10:8500/v1/kv/broker/kafka/?recurse&index=34610"

local cache = {}
setmetatable(cache, { __mode = "kv" })

-- consul blocks for 5 minutes by default, so the http timeout must be longer
local DEFAULT_TIMEOUT = 6 * 60 * 1000

local _M = {}
local mt = { __index = _M }

function _M.new(self, watch_url, callback)
    local watch = cache[watch_url]
    if watch ~= nil then
        return watch
    end
    local httpc, err = http.new()
    if not httpc then
        return nil, err
    end
    httpc:set_timeout(DEFAULT_TIMEOUT)
    local recurse_url = watch_url .. "?recurse"
    watch = setmetatable({
        httpc = httpc,
        recurse_url = recurse_url,
        modify_index = 0,
        running = false,
        -- named "stopped" (not "stop") so the flag does not shadow the stop() method
        stopped = false,
        callback = callback,
    }, mt)
    cache[watch_url] = watch
    return watch
end

function _M.start(self)
    if self.running then
        ngx.log(ngx.ERR, "watch already started, url:", self.recurse_url)
        return
    end
    local is_exiting = ngx.worker.exiting
    local watch_index = function(premature)
        if premature then -- timer fired because the worker is shutting down
            return
        end
        repeat
            local prev_index = self.modify_index
            local wait_url = self.recurse_url .. "&index=" .. prev_index
            ngx.log(ngx.ERR, "wait:", wait_url)
            local result = self:request(wait_url)
            if result then
                self:get_modify_index(result)
                if self.modify_index > prev_index then -- modify_index changed
                    ngx.log(ngx.ERR, "watch, url:", self.recurse_url, " index changed")
                    self:callback(result)
                end
            else
                ngx.sleep(1) -- avoid a hot retry loop while consul is unreachable
            end
        until self.stopped or is_exiting()
        ngx.log(ngx.ERR, "watch exit, url: ", self.recurse_url)
    end
    local ok, err = ngx.timer.at(1, watch_index)
    if not ok then
        ngx.log(ngx.ERR, "failed to create watch timer: ", err)
        return
    end
    self.running = true
end

function _M.stop(self)
    self.stopped = true
    ngx.log(ngx.ERR, "watch stopped, url:", self.recurse_url)
end

function _M.get_modify_index(self, result)
    local key = "ModifyIndex"
    local max_index = self.modify_index
    for _, v in ipairs(result) do
        local index = v[key]
        if index > max_index then
            max_index = index
        end
    end
    self.modify_index = max_index
end

function _M.request(self, url)
    local res, err = self.httpc:request_uri(url)
    if not res then
        ngx.log(ngx.ERR, "watch request error, url:", url, " error:", err)
        return nil, err
    end
    local ok, data = pcall(json.decode, res.body)
    if not ok then
        return nil, "invalid json: " .. tostring(data)
    end
    return data
end

return _M
Using it is then very simple: for each kind of change, just handle it in the watch callback; a minimal usage sketch follows.
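Here is a minimal sketch of wiring the watcher up in init_worker_by_lua; the module path, consul address, and key prefix are all illustrative. Note that cosockets are disabled in the init_worker phase, so the watcher is created inside a 0-delay timer:

local watcher = require "consul.watch"  -- the module above; the path is illustrative

local function on_change(watch, result)
    -- result is the decoded kv array from consul; rebuild the broker list from it
    local brokers = {}
    for _, kv in ipairs(result) do
        -- Key looks like "broker/kafka/172.20.20.11:8080"; keep the host:port suffix
        brokers[#brokers + 1] = kv.Key:match("([^/]+)$")
    end
    -- hand the fresh broker list to whatever consumes it, e.g. the kafka producer
end

-- cosockets are unavailable here, so defer creation into a timer context
local ok, err = ngx.timer.at(0, function(premature)
    if premature then
        return
    end
    local w = watcher:new("http://172.20.20.10:8500/v1/kv/broker/kafka/", on_change)
    if w then
        w:start()
    end
end)
if not ok then
    ngx.log(ngx.ERR, "failed to create setup timer: ", err)
end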
One thing to watch out for: the handler is essentially an infinite loop, so it must check whether the current nginx worker still exists. Without the ngx.worker.exiting() check in the until condition, a reload would leave the old worker unable to exit, stuck forever in the "is shutting down" state, which, as we know, is the state of an old worker exiting during an nginx reload.
Why does this happen? According to the notes on the openresty implementation:
According to the current implementation, each "running timer" will take one (fake) connection record from the global connection record list configured by the standard worker_connections directive in nginx.conf. So ensure that the worker_connections directive is set to a large enough value that takes into account both the real connections and fake connections required by timer callbacks (as limited by the lua_max_running_timers directive).
In other words, every timer behaves like a fake request and takes up one worker connection, and during an nginx reload the old worker only exits once it has finished handling all of its requests, fake ones included. As the documentation says, if you use timers heavily you need to raise the worker_connections value in nginx.conf (and the number of concurrently running timers is itself capped by lua_max_running_timers).
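As a rough sketch (the numbers are placeholders, not tuning advice), the relevant nginx.conf directives look like this:

events {
    # must cover real client connections plus the fake connections held by running timers
    worker_connections  10240;
}

http {
    # caps how many timer callbacks may run concurrently
    lua_max_running_timers  256;
}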
Finally, I have to mention openresty's performance: with the producer in async mode, a single process easily exceeds 10,000 QPS, although this also depends on your configuration parameters and message size. I recommend using the producer in async mode rather than sync mode; the performance difference is about 10x.
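For reference, a minimal sketch of an async producer with lua-resty-kafka; the broker address and topic are illustrative, and the options shown follow the library's README:

local producer = require "resty.kafka.producer"

-- broker address and topic are illustrative
local broker_list = {
    { host = "172.20.20.11", port = 9092 },
}

-- producer_type = "async" buffers messages and flushes them from a background
-- timer, instead of blocking each request on a network round trip
local p = producer:new(broker_list, { producer_type = "async" })

local ok, err = p:send("test-topic", nil, "hello from openresty")
if not ok then
    ngx.log(ngx.ERR, "kafka send error: ", err)
end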
---
Author: yueguanghaidao
Source: CSDN
Original: https://blog.csdn.net/yueguanghaidao/article/details/52862066
Copyright notice: this is the author's original article; please include a link to it when reposting.