Saving Kafka offsets to MySQL: a Spark Streaming + Kafka example
1. Create the MySQL table that stores the offsets. The composite primary key (topic, groupid, partitions) is what lets the streaming job later upsert with REPLACE INTO: each batch overwrites the previous row for the same topic/group/partition.
mysql> use test
mysql> create table hlw_offset(
topic varchar(32),
groupid varchar(50),
partitions int,
fromoffset bigint,
untiloffset bigint,
primary key(topic,groupid,partitions)
);
2. Maven dependencies (the pom.xml was flattened in the original; reconstructed below with the version properties 2.11.8, 2.3.1 and 2.5.0 it defines)

<properties>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.3.1</spark.version>
    <scalikejdbc.version>2.5.0</scalikejdbc.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.27</version>
    </dependency>
    <dependency>
        <groupId>org.scalikejdbc</groupId>
        <artifactId>scalikejdbc_2.11</artifactId>
        <version>2.5.0</version>
    </dependency>
    <dependency>
        <groupId>org.scalikejdbc</groupId>
        <artifactId>scalikejdbc-config_2.11</artifactId>
        <version>2.5.0</version>
    </dependency>
    <dependency>
        <groupId>com.typesafe</groupId>
        <artifactId>config</artifactId>
        <version>1.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.5</version>
    </dependency>
</dependencies>
Implementation approach
1) Create a StreamingContext
2) Pull data from Kafka (read the offsets from external storage, then fetch Kafka data starting from those offsets)
3) Apply the business logic
4) Write the results to external storage and save the new offsets
5) Start the program and wait for it to terminate
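Step 2 hinges on turning the stored rows into the fromOffsets: Map[TopicAndPartition, Long] that createDirectStream expects. Stripped of the Kafka and JDBC dependencies, the transformation is just a map over the rows; the sketch below uses a stand-in case class for kafka.common.TopicAndPartition so it runs on its own:

```scala
// Stand-in for kafka.common.TopicAndPartition, so this sketch needs no Kafka jars
case class TopicAndPartition(topic: String, partition: Int)

object OffsetRows {
  // Each row read from hlw_offset: (topic, partitions, untiloffset).
  // The resulting map tells createDirectStream where to resume each partition.
  def toFromOffsets(rows: List[(String, Int, Long)]): Map[TopicAndPartition, Long] =
    rows.map { case (t, p, until) => TopicAndPartition(t, p) -> until }.toMap
}
```

An empty table yields an empty map, which is exactly the condition the main program uses to fall back to consuming from the beginning.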
Code implementation
1. Spark Streaming main program
import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scalikejdbc._
import scalikejdbc.config._
object JDBCOffsetApp {
  def main(args: Array[String]): Unit = {
    // Create the Spark Streaming entry point
    val conf = new SparkConf().setMaster("local[2]").setAppName("JDBCOffsetApp")
    val ssc = new StreamingContext(conf, Seconds(5))
    // Kafka topics to consume
    val topics = ValueUtils.getStringValue("kafka.topics").split(",").toSet
    // Kafka parameters.
    // The custom ValueUtils helper reads these from application.conf, which keeps them easy to change later.
    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> ValueUtils.getStringValue("metadata.broker.list"),
      "auto.offset.reset" -> ValueUtils.getStringValue("auto.offset.reset"),
      "group.id" -> ValueUtils.getStringValue("group.id")
    )
    // First read the stored offsets from MySQL via scalikejdbc.
    // +------------+------------------+------------+------------+-------------+
    // | topic      | groupid          | partitions | fromoffset | untiloffset |
    // +------------+------------------+------------+------------+-------------+
    // The table looks as above; read the "topic", "partitions" and "untiloffset" columns
    // and build fromOffsets: Map[TopicAndPartition, Long], which createDirectStream needs below.
    DBs.setup()
    val fromOffset = DB.readOnly(implicit session => {
      SQL("select * from hlw_offset").map(rs => {
        (TopicAndPartition(rs.string("topic"), rs.int("partitions")), rs.long("untiloffset"))
      }).list().apply()
    }).toMap
    // If the MySQL table holds no offsets yet, consume from the beginning;
    // otherwise resume from the stored offsets.
    val messages = if (fromOffset.isEmpty) {
      println("Consuming from the beginning...")
      KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)
    } else {
      println("Consuming from the stored offsets...")
      val messageHandler = (mm: MessageAndMetadata[String, String]) => (mm.key(), mm.message())
      KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](ssc, kafkaParams, fromOffset, messageHandler)
    }
    messages.foreachRDD(rdd => {
      if (!rdd.isEmpty()) {
        // Print how many records this batch holds
        println("Record count for this batch: " + rdd.count())
        // The official way to get an RDD's offset info; offsetRanges is an array of OffsetRange:
        // trait HasOffsetRanges {
        //   def offsetRanges: Array[OffsetRange]
        // }
        val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
        offsetRanges.foreach(x => {
          // Print the topic, partition, start offset and end offset of each batch
          println(s"---${x.topic},${x.partition},${x.fromOffset},${x.untilOffset}---")
          // Save the latest offsets back to the MySQL table
          DB.autoCommit(implicit session => {
            SQL("replace into hlw_offset(topic,groupid,partitions,fromoffset,untiloffset) values (?,?,?,?,?)")
              .bind(x.topic, ValueUtils.getStringValue("group.id"), x.partition, x.fromOffset, x.untilOffset)
              .update().apply()
          })
        })
      }
    })
    ssc.start()
    ssc.awaitTermination()
  }
}
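The REPLACE INTO in the batch loop is effectively an upsert keyed by the table's primary key (topic, groupid, partitions): each batch overwrites the previous offset row for that partition, so the table always holds exactly one row per partition. Modeled in plain Scala, the bookkeeping is just a keyed map update (a dependency-free sketch, not the actual MySQL call):

```scala
object OffsetStore {
  type Key = (String, String, Int) // (topic, groupid, partition) - the primary key
  type Row = (Long, Long)          // (fromoffset, untiloffset)

  // Mirrors REPLACE INTO: insert the row, or overwrite the existing row with the same key
  def replaceInto(table: Map[Key, Row], topic: String, groupId: String,
                  partition: Int, from: Long, until: Long): Map[Key, Row] =
    table.updated((topic, groupId, partition), (from, until))
}
```

Running two batches for the same partition, as in the test later in this post, leaves a single row whose range is the second batch's (500, 800), never two rows.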
2. The custom ValueUtils helper class
import com.typesafe.config.ConfigFactory
import org.apache.commons.lang3.StringUtils
object ValueUtils {
  val load = ConfigFactory.load()
  def getStringValue(key: String, defaultValue: String = "") = {
    val value = load.getString(key)
    if (StringUtils.isNotEmpty(value)) {
      value
    } else {
      defaultValue
    }
  }
}
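One caveat worth knowing: Typesafe Config's getString throws ConfigException.Missing when the key is absent, so the defaultValue above only kicks in when the key exists but is empty. A safer variant guards with hasPath first (a sketch, assuming the same com.typesafe:config dependency already in the pom):

```scala
import com.typesafe.config.ConfigFactory

object SafeValueUtils {
  private val load = ConfigFactory.load()

  // hasPath avoids ConfigException.Missing for absent keys,
  // so defaultValue is returned both for missing and for empty values
  def getStringValue(key: String, defaultValue: String = ""): String =
    if (load.hasPath(key) && load.getString(key).nonEmpty) load.getString(key)
    else defaultValue
}
```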
3. application.conf
metadata.broker.list = "192.168.137.251:9092"
auto.offset.reset = "smallest"
group.id = "hlw_offset_group"
kafka.topics = "hlw_offset"
serializer.class = "kafka.serializer.StringEncoder"
request.required.acks = "1"
# JDBC settings
db.default.driver = "com.mysql.jdbc.Driver"
db.default.url="jdbc:mysql://hadoop000:3306/test"
db.default.user="root"
db.default.password="123456"
4. Custom Kafka producer
import java.util.{Date, Properties}
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}
object KafkaProducer {
  def main(args: Array[String]): Unit = {
    val properties = new Properties()
    properties.put("serializer.class", ValueUtils.getStringValue("serializer.class"))
    properties.put("metadata.broker.list", ValueUtils.getStringValue("metadata.broker.list"))
    properties.put("request.required.acks", ValueUtils.getStringValue("request.required.acks"))
    val producerConfig = new ProducerConfig(properties)
    val producer = new Producer[String, String](producerConfig)
    val topic = ValueUtils.getStringValue("kafka.topics")
    // Produce 100 messages per run.
    // The loop body was truncated in the original post; this is a minimal completion
    // that sends the current timestamp as each message value.
    for (i <- 1 to 100) {
      producer.send(new KeyedMessage[String, String](topic, i.toString, new Date().toString))
    }
    producer.close()
  }
}
Testing
1. Start the Kafka server and create the topic
[hadoop@hadoop000 bin]$ ./kafka-server-start.sh -daemon /home/hadoop/app/kafka_2.11-0.10.0.1/config/server.properties
[hadoop@hadoop000 bin]$ ./kafka-topics.sh --list --zookeeper localhost:2181/kafka
[hadoop@hadoop000 bin]$ ./kafka-topics.sh --create --zookeeper localhost:2181/kafka --replication-factor 1 --partitions 1 --topic hlw_offset
2. Before the test, the offset table in MySQL is empty
mysql> select * from hlw_offset;
Empty set (0.00 sec)
3. Produce 500 messages with the Kafka producer
4. Start the Spark Streaming program
// Console output:
Consuming from the beginning...
Record count for this batch: 500
---hlw_offset,0,0,500---
Check the MySQL table: the offsets were recorded successfully
mysql> select * from hlw_offset;
+------------+------------------+------------+------------+-------------+
| topic | groupid | partitions | fromoffset | untiloffset |
+------------+------------------+------------+------------+-------------+
| hlw_offset | hlw_offset_group | 0 | 0 | 500 |
+------------+------------------+------------+------------+-------------+
5. Stop the Spark Streaming program, produce another 300 messages with the Kafka producer, then restart the Spark program. If it starts consuming from offset 500, it has read the stored offset back correctly and does not re-read already-processed data. (Strictly speaking, this pattern gives at-least-once rather than exactly-once semantics: a crash between processing a batch and saving its offset would replay that batch; true exactly-once requires committing the results and the offsets in a single transaction.)
6. Check the updated offsets in MySQL
mysql> select * from hlw_offset;
+------------+------------------+------------+------------+-------------+
| topic | groupid | partitions | fromoffset | untiloffset |
+------------+------------------+------------+------------+-------------+
| hlw_offset | hlw_offset_group | 0 | 500 | 800 |
+------------+------------------+------------+------------+-------------+
Summary
By loading the stored offsets from MySQL before creating the direct stream, and writing each batch's untilOffset back with REPLACE INTO after processing, the job resumes from exactly where it left off across restarts instead of relying on auto.offset.reset.