Testing DistCp Migration of Hive Data: Scenarios Where the Source Cluster Adds or Deletes Files
1 Overview
Because upstream jobs keep running during data migration, new files may be added to and existing files deleted from the source cluster mid-copy. The tests below cover these scenarios.
2 The Data Files to Be Synchronized
Source cluster: 192.168.40.100:8020; target cluster: 192.168.40.200:8020.
The data directory and its size are as follows.
These files belong to the table dwd_pollution_distcp, which has partitions covering 20140801 through 20140930.
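For reference, the partition list can be confirmed in Hive with a standard statement (not part of the original write-up):

hive> show partitions iot.dwd_pollution_distcp;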
3 Metadata Migration
Both the source and target clusters store the Hive metastore in PostgreSQL, so the PostgreSQL database is migrated as follows:
① On the new host, create in advance a PostgreSQL user and database with the same names as on the original host.
② Back up the database on the original host, in the form: pg_dump -U -p >
Export the data from the PostgreSQL database on the source cluster.
③ Import the data into the new host.
Run the import command against the PostgreSQL database on the target cluster; a sketch of both commands follows.
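A minimal sketch with placeholder arguments, since the original elides the user, port, database, and file names:

# on the source cluster's PostgreSQL host: dump the metastore database
pg_dump -U <user> -p <port> <metastore_db> > metastore.sql
# on the target cluster's PostgreSQL host: import into the pre-created
# database of the same name
psql -U <user> -p <port> -d <metastore_db> -f metastore.sql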
4 Test: Adding Files on the Source Cluster
4.1 執(zhí)行distcp命令
hadoop distcp -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/

While distcp is running, execute the following SQL to insert data into a new partition:
insert overwrite table iot.dwd_pollution_distcp partition(stat_date) select ozone,particullate_matter,carbon_monoxide,sulfure_dioxide,nitrogen_dioxide,longitude,latitude,date_time,"20201021" as stat_date from iot.dwd_pollution

The job completes without errors:
...
20/10/21 11:16:21 INFO mapreduce.Job: Job job_1602207790062_0240 completed successfully
20/10/21 11:16:21 INFO mapreduce.Job: Counters: 38
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=4923884
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=5921518162
		HDFS: Number of bytes written=5921449140
		HDFS: Number of read operations=1962
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=1249
		HDFS: Number of bytes read erasure-coded=0
	Job Counters
		Launched map tasks=22
		Other local map tasks=22
		Total time spent by all maps in occupied slots (ms)=3720826
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=3720826
		Total vcore-milliseconds taken by all map tasks=3720826
		Total megabyte-milliseconds taken by all map tasks=3810125824
	Map-Reduce Framework
		Map input records=245
		Map output records=0
		Input split bytes=2530
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=4309
		CPU time spent (ms)=167940
		Physical memory (bytes) snapshot=8456990720
		Virtual memory (bytes) snapshot=59629797376
		Total committed heap usage (bytes)=13333692416
		Peak Map Physical memory (bytes)=418676736
		Peak Map Virtual memory (bytes)=2735362048
	File Input Format Counters
		Bytes Read=66492
	File Output Format Counters
		Bytes Written=0
	DistCp Counters
		Bandwidth in Btyes=69105400
		Bytes Copied=5921449140
		Bytes Expected=5921449140
		Files Copied=182
		DIR_COPY=63

4.2 Data File Verification
File size on the source cluster:
[hdfs@hadoop-local-02 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
5.7 G  17.1 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp
[hdfs@hadoop-local-02 ~]$ hdfs dfs -ls /user/hive/warehouse/iot.db/dwd_pollution_distcp/
Found 63 items

File size on the target cluster:
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
5.5 G  16.5 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -ls /user/hive/warehouse/iot.db/dwd_pollution_distcp/
Found 62 items

Data inserted while DistCp is running does not break the copy, but the in-flight job does not pick up the newly added files (the target has one partition fewer than the source).
4.3 Handling Files Added on the Source Cluster
Rerun the command with DistCp's -update option to copy the incremental data:
hadoop distcp -update -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp

4.4 Verify That the Files Match After the Update
(1) File check
File size on the source cluster:
File size on the target cluster:
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
5.7 G  17.1 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -ls /user/hive/warehouse/iot.db/dwd_pollution_distcp/
Found 63 items

(2) SQL check
First repair the partitions on the target cluster, because the newly inserted partition does not yet exist in the target cluster's Hive metastore.
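For example, using the repair statement the write-up applies later in section 5.3:

hive> use iot;
hive> MSCK REPAIR TABLE dwd_pollution_distcp;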
Row count from Hive on the source cluster:
hive> select count(*) from iot.dwd_pollution_distcp;
OK
138250266

Row count from Hive on the target cluster:
hive> select count(*) from iot.dwd_pollution_distcp;
OK
138250266

5 Test: Deleting Files on the Source Cluster
The "deleted files" here refer to SQL that performs an overwrite, which deletes the existing data first and then re-inserts it. Before this test, the data previously migrated to the target cluster is cleared and the source cluster is restored to its original state.
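A hypothetical cleanup command for the target side (not shown in the original write-up):

# on the target cluster: remove the previously migrated copy
hdfs dfs -rm -r /user/hive/warehouse/iot.db/dwd_pollution_distcp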
5.1 執(zhí)行distcp命令
hadoop distcp -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/

While distcp is running, execute the following SQL, which deletes the table's 20140801-20140930 partitions and then re-inserts the data:
insert overwrite table iot.dwd_pollution_distcp partition(stat_date) select ozone,particullate_matter,carbon_monoxide,sulfure_dioxide,nitrogen_dioxide,longitude,latitude,date_time,stat_date as stat_date from iot.dwd_pollution

The job fails with the following error:
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140810/000008_0 to hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140810/000008_0
	at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
	at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:256)
	... 10 more
Caused by: java.io.FileNotFoundException: File does not exist: /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140810/000008_0
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:85)
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:75)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1909)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:735)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:415)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:870)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:816)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2680)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:855)
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1768)
	at org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1714)
	at org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1744)
	at org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1610)
	at org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1607)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1619)
	at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:119)
	at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
	at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
	... 11 more

5.2 Verify the Data
A partial listing of files on the source cluster:
[hdfs@hadoop-local-02 ~]$ hdfs dfs -du -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
969.3 K  2.8 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140801
1.7 M    5.0 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140802
2.2 M    6.7 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140803
2.5 M    7.4 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140804
2.7 M    8.0 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140805

A partial listing of files on the target cluster:
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -du -h /user/hive/warehouse/iot.db/dwd_pollution_distcp
969.3 K  2.8 M   /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140801
16.8 M   50.4 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140802
2.2 M    6.7 M   /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140803
2.5 M    7.4 M   /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140804
26.6 M   79.8 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140805

The two sides do not match at all.
5.3 Repair with the update and delete options
Rerun the command, adding the -update and -delete options:
hadoop distcp -p -update -delete hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/

File size and row count on the source cluster:
[hdfs@hadoop-local-02 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp/
376.5 M  1.1 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp

hive> select count(*) from dwd_pollution_distcp;
OK
8919106
Time taken: 7.39 seconds, Fetched: 1 row(s)

File size and row count on the target cluster:
[hdfs@hadoop-cdh-03 ~]$ hdfs dfs -du -s -h /user/hive/warehouse/iot.db/dwd_pollution_distcp/
376.5 M  1.1 G  /user/hive/warehouse/iot.db/dwd_pollution_distcp

hive> select count(*) from dwd_pollution_distcp;
OK
8919106
Time taken: 7.39 seconds, Fetched: 1 row(s)

Note: run MSCK REPAIR TABLE dwd_pollution_distcp in Hive first to repair the partitions, because the earlier tests left the partition metadata stale.
5.4 執(zhí)行overwrite時的另一種情況
The previous test covered an overwrite that happens before the data has finished copying; the other case is an overwrite after the data has already been copied. Before this test, the data previously migrated to the target cluster is cleared.
(1) Run distcp.
(2) Just before the job finishes, execute the SQL:
insert overwrite table iot.dwd_pollution_distcp partition(stat_date) select ozone,particullate_matter,carbon_monoxide,sulfure_dioxide,nitrogen_dioxide,longitude,latitude,date_time,'20141001' as stat_date from iot.dwd_pollution;

The job completes successfully.
(3) Compare the size of partition 20141001 before and after the SQL runs.
On the source cluster, before the SQL:
345.3 K  1.0 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20141001

On the source cluster, after the SQL:
188.2 M  564.7 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20141001

On the target cluster:
345.3 K  1.0 M  /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20141001

(4) Repair
Rerun the distcp command with the -update and -delete options added.
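Presumably the same form as in section 5.3:

hadoop distcp -p -update -delete hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/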
6 總結(jié)
(1) Adding files under a source directory while distcp is copying does not affect the copy, but the added files are not in the copy list. They can be migrated afterwards by rerunning with the -update option.
(2) What if, during migration, a SQL overwrite deletes files from a directory on the old cluster?
There are two cases. ① If the corresponding partition's data has already been copied when the overwrite runs, distcp does not report an error; the only way to catch the discrepancy is to compare row counts with SQL afterwards.
② If the corresponding partition's data has not finished copying when the overwrite runs, distcp fails. The -i option can be used to ignore the failures, so the unaffected partitions are still copied normally. To repair, use -update to refresh the files (a destination file whose name and size differ from the source is overwritten; one whose name and size match is skipped), together with -delete (files that exist in the destination but not in the source are deleted, going through the HDFS trash). The -delete option matters because if the overwrite shrinks a partition, the destination can end up holding files that no longer exist on the old cluster, and those must be removed. See the sketch below.
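A sketch of that ignore-and-repair sequence, reusing the paths from the tests above:

# initial copy, ignoring failures caused by files deleted mid-copy
hadoop distcp -i -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/
# repair pass: refresh changed files and remove destination-only files
hadoop distcp -p -update -delete hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/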
(3) What if the job fails halfway through the migration?
① Rerun the command with -update together with -delete.
② Delete the dirty data from the new cluster and rerun the migration command without the -overwrite option, so that files already present at the destination are skipped. A hypothetical sketch follows.
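A sketch of option ② (the partition name is illustrative only, not from the original write-up):

# remove the known-bad data on the new cluster, e.g. a partially copied partition
hdfs dfs -rm -r /user/hive/warehouse/iot.db/dwd_pollution_distcp/stat_date=20140810
# rerun without -overwrite; files already present at the destination are skipped
hadoop distcp -p hdfs://192.168.40.100:8020/user/hive/warehouse/iot.db/dwd_pollution_distcp hdfs://192.168.40.200:8020/user/hive/warehouse/iot.db/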
總結(jié)
以上是生活随笔為你收集整理的DistCp迁移Hive数据过程中源集群增加删除文件等场景测试的全部內(nèi)容,希望文章能夠幫你解決所遇到的問題。
                            
                        - 上一篇: oracle磁带库清洁带标签,磁带库、磁
 - 下一篇: 执行计划 分析一条sql语句的效率 my