Inserting Data into Hive Using Dynamic Partitions from Spark
Calling spark.sql to insert into Hive fails. Because the partition values (province, city) are computed by the query itself rather than given as fixed literals, this has to be a dynamic partition insert. The statement being executed:

spark.sql("INSERT INTO default.test_table_partition partition(province,city) SELECT xxx, xxx, md5(province), md5(city) FROM test_table")

It fails with the following error:
Exception in thread "main" org.apache.spark.SparkException: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:314)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:66)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:61)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:77)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:183)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:183)
	at org.apache.spark.sql.Dataset$$anonfun$54.apply(Dataset.scala:2841)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2840)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:183)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:68)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The fix is to add the following to the Spark session configuration:
.config("hive.exec.dynamici.partition",true)
.config("hive.exec.dynamic.partition.mode","nonstrict")
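For context, a minimal end-to-end sketch of how these settings fit together, assuming a Hive-enabled SparkSession; the app name is illustrative, and the INSERT reuses the post's placeholder columns (xxx) and table names:

import org.apache.spark.sql.SparkSession

// Build a Hive-enabled session with dynamic partitioning allowed
// before any INSERT runs.
val spark = SparkSession.builder()
  .appName("dynamic-partition-insert")
  .config("hive.exec.dynamic.partition", true)
  .config("hive.exec.dynamic.partition.mode", "nonstrict")
  .enableHiveSupport()
  .getOrCreate()

// Both partition columns (province, city) are now resolved per row
// from the last two SELECT expressions instead of static values.
spark.sql("INSERT INTO default.test_table_partition partition(province,city) " +
  "SELECT xxx, xxx, md5(province), md5(city) FROM test_table")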
Related parameters:
- hive.exec.dynamic.partition — whether dynamic partitioning is enabled: true (on) or false (off). The default is false.
- hive.exec.dynamic.partition.mode — the mode once dynamic partitioning is enabled, either strict or nonstrict. strict requires at least one static partition column; nonstrict does not. Look into the trade-offs of each for your own use case.
- hive.exec.max.dynamic.partitions — the maximum number of dynamic partitions allowed; it can be raised manually. The default is 1000.
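Since these are Hive configuration properties, they can also be set per session with SQL SET statements instead of at builder time; a sketch, assuming the same Hive-enabled session as above:

// Equivalent runtime settings, applied before the INSERT statement.
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")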
Summary

That covers enabling dynamic partition inserts from Spark into Hive: turn on hive.exec.dynamic.partition and set hive.exec.dynamic.partition.mode to nonstrict, and the strict-mode error above goes away. Hopefully this helps you resolve the same problem.