flume + kafka + storm + redis/mysql startup command notes
1. Start Flume
bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name fks -Dflume.root.logger=INFO,console
2. Start Kafka
[root@Cassandra kafka]# bin/zookeeper-server-start.sh config/zookeeper.properties&
[root@Cassandra kafka]# bin/kafka-server-start.sh config/server.properties &
3. Start Storm
Start ZooKeeper on each of the three machines first.
(1) On the master, start: bin/storm nimbus&
bin/storm ui&
(2) On the slaves, start: bin/storm supervisor&
Start the topology: storm jar tools/Storm4.jar com.qihoo.datacenter.step7kafka2redis.Kafka2RedisTopology flume2redis
4. Start Redis
src/redis-server
Copy a file into the designated spool directory on the front-end Flume machine, and Redis is updated with the new data.
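To confirm that the data reached Redis, you can connect with the Jedis client and list the keys that were written. A minimal sketch; the host, port, and the RedisCheck class name are assumptions, and the actual key names depend on what Kafka2RedisTopology writes.

import redis.clients.jedis.Jedis;

public class RedisCheck {
    public static void main(String[] args) {
        // Host and port are assumptions; use the machine where src/redis-server is running.
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // List every key; inspect individual values with the command matching each key's type.
        for (String key : jedis.keys("*")) {
            System.out.println(key);
        }
        jedis.disconnect();
    }
}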
Topology submission commands (the Kafka-to-MySQL and Kafka-to-Redis variants):
storm jar tools/Storm4.jar com.qihoo.datacenter.step3tomysql.ToMysqlTopology tomysql
storm jar tools/Storm4.jar com.qihoo.datacenter.step7kafka2redis.Kafka2RedisTopology flume2redis
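The source of these two topologies is not included in these notes, but the commands imply a Storm topology that consumes the Kafka topic fed by Flume and writes the messages into Redis. Below is a hypothetical minimal sketch of such a topology against the Storm 0.9-era API and the storm-kafka spout; the class name Kafka2RedisSketch, the ZooKeeper address, topic name, Redis location, and key name are all assumptions, not the original Kafka2RedisTopology code.

import java.util.Map;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;
import redis.clients.jedis.Jedis;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

// Hypothetical sketch of a Kafka -> Redis topology; not the original Kafka2RedisTopology.
public class Kafka2RedisSketch {

    // Writes every Kafka message onto a Redis list.
    public static class RedisBolt extends BaseBasicBolt {
        private transient Jedis jedis;

        @Override
        public void prepare(Map stormConf, TopologyContext context) {
            // Redis host/port are assumptions; point this at the machine running src/redis-server.
            jedis = new Jedis("127.0.0.1", 6379);
        }

        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            // "flume:lines" is an invented key name for illustration only.
            jedis.rpush("flume:lines", tuple.getString(0));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // Terminal bolt: emits nothing downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        // ZooKeeper address, topic name, zkRoot, and consumer id are assumptions.
        ZkHosts zkHosts = new ZkHosts("127.0.0.1:2181");
        SpoutConfig spoutConfig = new SpoutConfig(zkHosts, "flume-topic", "/kafka-spout", "flume2redis");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1);
        builder.setBolt("redis-bolt", new RedisBolt(), 1).shuffleGrouping("kafka-spout");

        // Topology name, e.g. "flume2redis" as in the submit command above.
        String name = args.length > 0 ? args[0] : "flume2redis";
        StormSubmitter.submitTopology(name, new Config(), builder.createTopology());
    }
}

Packaged into a jar, a topology like this is submitted exactly as in the commands above: storm jar <jar> <main class> flume2redis.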
# fks : flume kafka storm integration (conf/flume-conf.properties used in step 1)
fks.sources=r1
fks.sinks=k1
fks.channels=c1
# configure r1
fks.sources.r1.type=spooldir
fks.sources.r1.spoolDir=/data/flumeread
fks.sources.r1.fileHeader = false
# configure k1 (custom Kafka sink; see the sketch after this config)
fks.sinks.k1.type=com.qihoo.datacenter.sink.KafkaSink
# configure c1
fks.channels.c1.type=file
fks.channels.c1.checkpointDir=/data/flumewrite/example_fks_001
fks.channels.c1.dataDirs=/data/flumewrite2/example_fks_001
# bind source and sink
fks.sources.r1.channels=c1
fks.sinks.k1.channel=c1
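The k1 sink class, com.qihoo.datacenter.sink.KafkaSink, is custom code that is not listed in these notes. Below is a hypothetical sketch of what such a Flume sink can look like, built on the Flume sink API and the 0.8-era Kafka producer; the class name KafkaSinkSketch, the property names topic and brokerList, and the default broker address are assumptions rather than the original implementation.

import java.util.Properties;

import org.apache.flume.Channel;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.Transaction;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

// Hypothetical sketch of a Flume sink that forwards events to Kafka; not the original KafkaSink.
public class KafkaSinkSketch extends AbstractSink implements Configurable {

    private Producer<String, String> producer;
    private String topic;

    @Override
    public void configure(Context context) {
        // "topic" and "brokerList" are invented property names for this sketch.
        topic = context.getString("topic", "flume-topic");
        Properties props = new Properties();
        props.put("metadata.broker.list", context.getString("brokerList", "127.0.0.1:9092"));
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    @Override
    public Status process() throws EventDeliveryException {
        Channel channel = getChannel();
        Transaction tx = channel.getTransaction();
        tx.begin();
        try {
            Event event = channel.take();
            if (event == null) {
                tx.commit();
                return Status.BACKOFF;   // nothing to send right now
            }
            // Forward the event body to Kafka as a plain string message.
            producer.send(new KeyedMessage<String, String>(topic, new String(event.getBody(), "UTF-8")));
            tx.commit();
            return Status.READY;
        } catch (Exception e) {
            tx.rollback();
            throw new EventDeliveryException("Failed to publish event to Kafka", e);
        } finally {
            tx.close();
        }
    }
}

The compiled sink class has to be on the Flume agent's classpath (for example, a jar placed in Flume's lib/ directory) for the fks.sinks.k1.type setting above to resolve.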