Big Data Notes (32): Spark Streaming Integration with Kafka and Flume
3. Integration: Data Sources
1. Apache Kafka: a high-throughput, distributed publish-subscribe messaging system
(1) Message system basics
(*) Message types: Topic (publish-subscribe) and Queue (point-to-point)
(*) Common message systems:
Kafka, Redis -----> support only Topic
JMS (the Java Message Service standard): Topic and Queue -----> e.g. WebLogic
(*) Roles: Producer: produces messages
Consumer: receives (and processes) messages (see the producer sketch below)
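For context, here is a minimal producer sketch using Kafka's Java client API. This is not from the original notes: the broker address bigdata11:9092 and the topic mydemo1 are taken from the console commands further below, and the kafka-clients library is assumed to be on the classpath.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object SimpleProducer {
  def main(args: Array[String]): Unit = {
    // Minimal producer configuration: broker list plus key/value serializers
    val props = new Properties()
    props.put("bootstrap.servers", "bigdata11:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // Send a single message to the mydemo1 topic
    producer.send(new ProducerRecord[String, String]("mydemo1", "hello kafka"))
    producer.close()
  }
}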
(2) Architecture of the Kafka messaging system
(Architecture diagrams omitted from the original post.)
(3) Setting up the Kafka environment: single-node, single-broker mode
Testing Kafka:
Create a topic (the original notes omit the command; a typical kafka-topics.sh --create invocation against ZooKeeper at bigdata11:2181 with --topic mydemo1 would be used here).
Send messages:
bin/kafka-console-producer.sh --broker-list bigdata11:9092 --topic mydemo1
Receive messages (the consumer reads the topic information from ZooKeeper):
bin/kafka-console-consumer.sh --zookeeper bigdata11:2181 --topic mydemo1

(4) Integrating with Spark Streaming: two approaches
Note: many dependency jars are required (and some of them conflict), so using Maven to manage the dependencies is strongly recommended.
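As a rough sketch (not from the original notes), the integration module that provides the KafkaUtils API used below can be declared as follows; the artifact name and version are assumptions that must match the Spark and Kafka versions in use (for Spark 2.x the 0-8 variant matches this API; for Spark 1.x the artifact is simply spark-streaming-kafka). The equivalent Maven coordinates are org.apache.spark:spark-streaming-kafka-0-8_2.11.

// build.sbt fragment; the versions are assumptions, align them with the cluster
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.1.0"
)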
The records read from Kafka always arrive as (key, value) pairs.
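For example (a minimal sketch, not in the original), given the kafkaStream created in the receiver example below, only the message body can be kept by projecting out the value of each pair:

// kafkaStream is a DStream[(String, String)] of (key, value) records;
// keep only the message value
val values = kafkaStream.map { case (_, value) => value }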
(*) Receiver-based approach
The receiver is implemented with Kafka's high-level consumer API. For all receivers, the data received from Kafka is stored in the Spark executors, and Spark Streaming then launches jobs to process that data.
package main.scala.demo

import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaReceiverDemo {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaReceiverDemo").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Topic configuration: consume the mydemo1 topic with one receiver thread
    val topic = Map("mydemo1" -> 1)

    // Create the Kafka input DStream (receiver-based), connecting through ZooKeeper
    // Parameters: StreamingContext, ZooKeeper address, consumer group id, topics
    val kafkaStream = KafkaUtils.createStream(ssc, "192.168.153.11:2181", "mygroup", topic)

    // Receive the data and process it
    val lines = kafkaStream.map(e => {
      // e is one (key, value) record received from Kafka
      new String(e.toString())
    })

    // Print the result
    lines.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Start Kafka, send a message from the console producer, and check the program's output.
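A side note on reliability, not from the original notes: because the receiver-based approach first buffers data in the executors, messages can be lost if an executor fails before the batch is processed. A minimal sketch of the usual mitigation is to enable Spark's write-ahead log and set a checkpoint directory (the HDFS path here is hypothetical):

// In KafkaReceiverDemo: set this on the SparkConf before building the StreamingContext
conf.set("spark.streaming.receiver.writeAheadLog.enable", "true")
// ...and give the context a checkpoint directory (hypothetical path)
ssc.checkpoint("hdfs://bigdata11:9000/spark/checkpoint")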
(*) Direct approach: recommended (more efficient)
This approach periodically queries Kafka for the latest offsets in each topic and partition, and those offsets define the range of data processed in each batch. When the processing jobs are launched, Spark reads the corresponding offset ranges from Kafka using the simple consumer API.
package main.scala.demo

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaDirectDemo {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaDirectDemo").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Topic configuration
    val topic = Set("mydemo1")

    // Read directly from the brokers; metadata.broker.list is the broker address, not ZooKeeper
    val brokerList = Map[String, String]("metadata.broker.list" -> "192.168.153.11:9092")

    // Create the DStream; type parameters: key type, value type, key decoder, value decoder
    val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, brokerList, topic)

    // Read the messages
    val message = lines.map(e => {
      new String(e.toString())
    })

    message.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
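One benefit of the direct approach is that every RDD carries the Kafka offset ranges it was built from. A minimal sketch (not part of the original notes) of reading them, for example to track progress per partition, uses HasOffsetRanges from the same spark-streaming-kafka module:

import org.apache.spark.streaming.kafka.HasOffsetRanges

// Inside KafkaDirectDemo, after creating `lines`:
lines.foreachRDD { rdd =>
  // Each RDD produced by createDirectStream knows which offsets it covers
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  offsetRanges.foreach { o =>
    println(s"topic=${o.topic} partition=${o.partition} from=${o.fromOffset} until=${o.untilOffset}")
  }
}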
2. Integrating with Apache Flume: two approaches
Note: the program depends on the jars under Flume's lib directory, as well as the Spark Streaming Flume integration jar (see the dependency sketch below).
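A sketch of that integration dependency (not from the original notes; the version is an assumption and should match the Spark version in use):

// build.sbt fragment; the version is an assumption
libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "2.1.0"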
(1) Flume push mode: Flume pushes the data, just as it does between Flume agents. In this approach Spark Streaming sets up a receiver that acts as an Avro agent, and Flume pushes data directly to that receiver.
Configuration file a4.conf:
#bin/flume-ng agent -n a4 -f myagent/a4.conf -c conf -Dflume.root.logger=INFO,console

#Define the agent name and the names of its source, channel, and sink
a4.sources = r1
a4.channels = c1
a4.sinks = k1

#Define the source
a4.sources.r1.type = spooldir
a4.sources.r1.spoolDir = /root/training/logs

#Define the channel
a4.channels.c1.type = memory
a4.channels.c1.capacity = 10000
a4.channels.c1.transactionCapacity = 100

#Define the sink
a4.sinks.k1.type = avro
a4.sinks.k1.channel = c1
a4.sinks.k1.hostname = 192.168.153.1
a4.sinks.k1.port = 1234

#Wire the source, channel, and sink together
a4.sources.r1.channels = c1
a4.sinks.k1.channel = c1
package flume

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MyFlumeStream {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkFlumeNGWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Create a DStream of FlumeEvents; this receiver listens on 192.168.153.1:1234
    val flumeEvent = FlumeUtils.createStream(ssc, "192.168.153.1", 1234)

    // Convert the body of each FlumeEvent to a string
    val lineDStream = flumeEvent.map(e => {
      new String(e.event.getBody.array)
    })

    // Print the result
    lineDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Testing:
1. Start the Spark Streaming program MyFlumeStream.
2. Start Flume: bin/flume-ng agent -n a4 -f myagent/a4.conf -c conf -Dflume.root.logger=INFO,console
3. Copy a log file into the /root/training/logs directory.
4. Watch the program output: the collected data is printed to the console.
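The app name SparkFlumeNGWordCount suggests a word count, but the example above only prints the raw lines. A minimal word-count extension (hypothetical, not in the original notes) would replace the lineDStream.print() call with:

// Split each line into words, pair each word with 1, and sum the counts per batch
val wordCounts = lineDStream.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
wordCounts.print()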
(2) Custom sink approach (pull mode): Flume pushes data into a special sink, where it stays buffered. Spark Streaming then uses a reliable Flume receiver to pull the data out of that sink. This approach is more robust and reliable, but it requires configuring Flume with the custom Spark sink.
(*) Copy the Spark jars into Flume's lib directory.
(*) The jar shown in the original post (likely the spark-streaming-flume-sink jar, which provides the org.apache.spark.streaming.flume.sink.SparkSink class used in the configuration below) also needs to be copied into Flume's lib directory.
(*) Add the same jars to the IDEA project's classpath.
#bin/flume-ng agent -n a1 -f myagent/a1.conf -c conf -Dflume.root.logger=INFO,console

a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/training/logs

a1.channels.c1.type = memory
a1.channels.c1.capacity = 100000
a1.channels.c1.transactionCapacity = 100000

a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = 192.168.153.11
a1.sinks.k1.port = 1234

#Wire the source, channel, and sink together
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
package flume

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object FlumeLogPull {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkFlumeNGWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Create a DStream of FlumeEvents by polling the SparkSink at 192.168.153.11:1234
    val flumeEvent = FlumeUtils.createPollingStream(ssc, "192.168.153.11", 1234, StorageLevel.MEMORY_ONLY_SER_2)

    // Convert the body of each FlumeEvent to a string
    val lineDStream = flumeEvent.map(e => {
      new String(e.event.getBody.array)
    })

    // Print the result
    lineDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Start Flume:
bin/flume-ng agent -n a1 -f myagent/a1.conf -c conf -Dflume.root.logger=INFO,console
The testing steps are similar to those for push mode.
Reposted from: https://www.cnblogs.com/lingluo2017/p/8709122.html