Sqoop import from Oracle into Hive hangs at hive.HiveImport: Connecting to jdbc:hive2
Environment:
HDP-3.1.4
The ojdbc8.jar driver has already been downloaded and placed in /usr/hdp/current/sqoop-client/lib/.
When Sqoop reads data from Oracle and imports it into Hive, it hangs at INFO hive.HiveImport: Connecting to jdbc:hive2:// and goes no further; the log stops at the point shown below. (On HDP 3.x the Hive import step is carried out through Beeline, so the hang is most likely Beeline waiting for login credentials that never arrive in a non-interactive session; both fixes below supply those credentials.)
20/10/19 05:16:56 WARN hive.TableDefWriter: Column SJYRQ had to be cast to a less precise type in Hive
20/10/19 05:16:56 WARN hive.TableDefWriter: Column TYSJ had to be cast to a less precise type in Hive
20/10/19 05:16:56 WARN hive.TableDefWriter: Column HFSJ had to be cast to a less precise type in Hive
20/10/19 05:16:56 INFO hive.HiveImport: Loading uploaded data into Hive
20/10/19 05:16:57 INFO hive.HiveImport: SLF4J: Class path contains multiple SLF4J bindings.
20/10/19 05:16:57 INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
20/10/19 05:16:57 INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
20/10/19 05:16:57 INFO hive.HiveImport: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
20/10/19 05:16:57 INFO hive.HiveImport: SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
20/10/19 05:16:59 INFO hive.HiveImport: Connecting to jdbc:hive2://node93.prpq:2181,node94.prpq:2181,node92.prpq:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
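For context, the kind of command that produces the log above looks roughly like this. The connection string, credentials, and table name here are illustrative placeholders, not values from the original run:

    sqoop import \
      --connect jdbc:oracle:thin:@//oracle-host:1521/ORCL \
      --username ORAUSER \
      --password '***' \
      --table SOME_TABLE \
      --hive-import \
      --hive-table some_table \
      -m 1

The MapReduce phase of such an import finishes normally; it is the final "Loading uploaded data into Hive" step that stalls.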
Option 1:
On the node where the sqoop command is executed, create the configuration file /etc/hive/conf/beeline-hs2-connection.xml with the following content:

    <?xml version="1.0"?>
    <configuration>
      <property>
        <name>beeline.hs2.connection.user</name>
        <value>hdfs</value>
      </property>
      <property>
        <name>beeline.hs2.connection.password</name>
        <value>hdfs</value>
      </property>
    </configuration>

Since we run the sqoop import as the hdfs account, both the user and the password are set to hdfs. The advantage over Option 2 is that this file is not overwritten if the Hive configuration changes.
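As a quick sanity check (my suggestion, not part of the original write-up): with this file in place alongside the beeline-site.xml that HDP ships, launching beeline with no arguments on the same node should connect to HiveServer2 automatically instead of stopping at a credentials prompt:

    beeline
    # expected to print a "Connected to: Apache Hive" line without asking for user/password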
Option 2:
Following the HiveServer2 Clients documentation, edit /etc/hive/3.1.4.0-315/0/beeline-site.xml. The original entry is:

    <property>
      <name>beeline.hs2.jdbc.url.container</name>
      <value>jdbc:hive2://cdh-m1:2181,cdh-n1:2181,cdh-n2:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</value>
    </property>

Change it to embed the username and password in the JDBC URL:

    <property>
      <name>beeline.hs2.jdbc.url.container</name>
      <value>jdbc:hive2://cdh-m1:2181,cdh-n1:2181,cdh-n2:2181/;user=hdfs;password=hdfs;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</value>
    </property>

After this change, sqoop import runs to completion normally.
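As an extra check (again my suggestion), the modified URL can be passed straight to beeline to confirm it connects before re-running the full import; beeline's -u flag takes a complete JDBC URL:

    beeline -u 'jdbc:hive2://cdh-m1:2181,cdh-n1:2181,cdh-n2:2181/;user=hdfs;password=hdfs;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2'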
Summary
Both options supply the Beeline connection credentials (user hdfs / password hdfs) that the hanging Hive import step needs. Option 1 keeps them in a standalone beeline-hs2-connection.xml that is not overwritten when the Hive configuration changes, which is why it is the preferred fix.