Using the Sqoop Data Migration Tool
Author: foochane
Original link: https://foochane.cn/article/2019063001.html
Covered in this post: a brief introduction to Sqoop, importing data into HDFS/Hive, and exporting data to MySQL.
1 A brief introduction to Sqoop
Sqoop is an Apache tool for transferring data between Hadoop and relational database servers. It handles both data import and data export.
- Import: pull data from relational databases such as MySQL or Oracle into Hadoop storage systems such as HDFS, Hive, or HBase.
- Export: move data from the Hadoop file system out to relational databases such as MySQL.
Sqoop works by translating each import or export command into a MapReduce program; the generated MapReduce code mainly customizes the InputFormat and OutputFormat.
2 Installing Sqoop
Before installing Sqoop, make sure Java and Hadoop are already set up.
Sqoop is just a client tool, so it can be installed on any node, as long as that node has a Java and Hadoop environment and can connect to the target database.
2.1 Download and extract
Download address: http://mirror.bit.edu.cn/apache/sqoop/
Download: sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
Extract it into the installation directory.
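A minimal sketch of the download and extraction steps, assuming the standard Apache mirror layout and the /usr/local/bigdata installation directory used throughout this post:

```bash
# Download the Sqoop 1.4.7 binary package (the exact mirror path is an assumption; browse the mirror if it differs)
wget http://mirror.bit.edu.cn/apache/sqoop/1.4.7/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz

# Extract into the install directory and shorten the directory name
tar -zxvf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /usr/local/bigdata/
mv /usr/local/bigdata/sqoop-1.4.7.bin__hadoop-2.6.0 /usr/local/bigdata/sqoop-1.4.7
```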
2.2 Edit the configuration file
Copy sqoop-env-template.sh to a new file named sqoop-env.sh, and add the following lines to sqoop-env.sh:

```bash
export HADOOP_COMMON_HOME=/usr/local/bigdata/hadoop-2.7.1
export HADOOP_MAPRED_HOME=/usr/local/bigdata/hadoop-2.7.1
export HIVE_HOME=/usr/local/bigdata/hive-2.3.5
```

2.3 Install the MySQL JDBC driver
Copy mysql-connector-java-5.1.45.jar into Sqoop's lib directory.

```bash
$ sudo apt-get install libmysql-java   # already installed earlier
$ ln -s /usr/share/java/mysql-connector-java-5.1.45.jar /usr/local/bigdata/sqoop-1.4.7/lib
```

Alternatively, copy mysql-connector-java-5.1.45.jar there by hand.
2.4 Verify the Sqoop installation
Check the Sqoop version:

```
$ bin/sqoop-version
Warning: /usr/local/bigdata/sqoop-1.4.7/bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/bigdata/sqoop-1.4.7/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/06/30 03:03:07 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec 21 15:59:58 STD 20
```

A few warnings appear; they can be ignored for now.
Verify connectivity between Sqoop and the MySQL database:

```bash
$ bin/sqoop-list-databases --connect jdbc:mysql://Master:3306 --username hiveuser --password 123456
$ bin/sqoop-list-tables --connect jdbc:mysql://Master:3306/metastore --username hiveuser --password 123456
```

3 Importing data with Sqoop
3.1 Importing data from MySQL into HDFS
First, create the tables in MySQL and insert some test data:

```sql
SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for `emp`
-- ----------------------------
DROP TABLE IF EXISTS `emp`;
CREATE TABLE `emp` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(100) DEFAULT NULL,
  `deg` varchar(100) DEFAULT NULL,
  `salary` int(11) DEFAULT NULL,
  `dept` varchar(10) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

-- ----------------------------
-- Records of emp
-- ----------------------------
INSERT INTO `emp` VALUES ('1201', 'gopal', 'manager', '50000', 'TP');
INSERT INTO `emp` VALUES ('1202', 'manisha', 'Proof reader', '50000', 'TP');
INSERT INTO `emp` VALUES ('1203', 'khalil', 'php dev', '30000', 'AC');
INSERT INTO `emp` VALUES ('1204', 'prasanth', 'php dev', '30000', 'AC');
INSERT INTO `emp` VALUES ('1205', 'kranthi', 'admin', '20000', 'TP');

-- ----------------------------
-- Table structure for `emp_add`
-- ----------------------------
DROP TABLE IF EXISTS `emp_add`;
CREATE TABLE `emp_add` (
  `id` int(11) DEFAULT NULL,
  `hno` varchar(100) DEFAULT NULL,
  `street` varchar(100) DEFAULT NULL,
  `city` varchar(100) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

-- ----------------------------
-- Records of emp_add
-- ----------------------------
INSERT INTO `emp_add` VALUES ('1201', '288A', 'vgiri', 'jublee');
INSERT INTO `emp_add` VALUES ('1202', '108I', 'aoc', 'sec-bad');
INSERT INTO `emp_add` VALUES ('1203', '144Z', 'pgutta', 'hyd');
INSERT INTO `emp_add` VALUES ('1204', '78B', 'old city', 'sec-bad');
INSERT INTO `emp_add` VALUES ('1205', '720X', 'hitec', 'sec-bad');

-- ----------------------------
-- Table structure for `emp_conn`
-- ----------------------------
DROP TABLE IF EXISTS `emp_conn`;
CREATE TABLE `emp_conn` (
  `id` int(100) DEFAULT NULL,
  `phno` varchar(100) DEFAULT NULL,
  `email` varchar(100) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

-- ----------------------------
-- Records of emp_conn
-- ----------------------------
INSERT INTO `emp_conn` VALUES ('1201', '2356742', 'gopal@tp.com');
INSERT INTO `emp_conn` VALUES ('1202', '1661663', 'manisha@tp.com');
INSERT INTO `emp_conn` VALUES ('1203', '8887776', 'khalil@ac.com');
INSERT INTO `emp_conn` VALUES ('1204', '9988774', 'prasanth@ac.com');
INSERT INTO `emp_conn` VALUES ('1205', '1231231', 'kranthi@tp.com');
```

Then run the import:
```bash
bin/sqoop import \
--connect jdbc:mysql://Master:3306/test \
--username root \
--password root \
--target-dir /sqooptest \
--fields-terminated-by ',' \
--table emp \
--m 2 \
--split-by id;
```

- --connect: the JDBC connection string for the source database
- --username: the database user name
- --password: the database password
- --table: the table to import
- --target-dir: the target directory on HDFS
- --fields-terminated-by: the field delimiter used in the output files
- --m: the number of map tasks; if it is greater than 1, --split-by must also be specified (with --m 2 the import produces two output files)
- --split-by: the column used to split the data across map tasks
If the table is not very large, there is no need to set the --m parameter.
Note: start HDFS and YARN before running the import; the job is submitted to YARN rather than run locally.
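For reference, a minimal sketch of bringing up HDFS and YARN before the import (paths assume the Hadoop 2.7.1 installation under /usr/local/bigdata used elsewhere in this post):

```bash
# Start HDFS and YARN (run on the master node)
/usr/local/bigdata/hadoop-2.7.1/sbin/start-dfs.sh
/usr/local/bigdata/hadoop-2.7.1/sbin/start-yarn.sh

# Check that NameNode, DataNode, ResourceManager and NodeManager are up
jps
```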
Example:
```
$ bin/sqoop import --connect jdbc:mysql://Master:3306/test --username hadoop --password 123456 --target-dir /sqooptest --fields-terminated-by ',' --table emp --m 1 --split-by id;
Warning: /usr/local/bigdata/sqoop-1.4.7/bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/bigdata/sqoop-1.4.7/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/06/30 05:00:43 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
19/06/30 05:00:43 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
19/06/30 05:00:44 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
19/06/30 05:00:44 INFO tool.CodeGenTool: Beginning code generation
Sun Jun 30 05:00:45 UTC 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
19/06/30 05:00:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `emp` AS t LIMIT 1
19/06/30 05:00:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `emp` AS t LIMIT 1
19/06/30 05:00:46 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/bigdata/hadoop-2.7.1
Note: /tmp/sqoop-hadoop/compile/cd17c36add75dfe67edd3facf7538def/emp.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
19/06/30 05:00:56 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/cd17c36add75dfe67edd3facf7538def/emp.jar
19/06/30 05:00:56 WARN manager.MySQLManager: It looks like you are importing from mysql.
19/06/30 05:00:56 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
19/06/30 05:00:56 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
19/06/30 05:00:56 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
19/06/30 05:00:56 INFO mapreduce.ImportJobBase: Beginning import of emp
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hbase-2.0.5/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
19/06/30 05:00:58 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
19/06/30 05:01:06 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
19/06/30 05:01:07 INFO client.RMProxy: Connecting to ResourceManager at Master/192.168.233.200:8032
Sun Jun 30 05:01:55 UTC 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
19/06/30 05:01:56 INFO db.DBInputFormat: Using read commited transaction isolation
19/06/30 05:01:56 INFO mapreduce.JobSubmitter: number of splits:1
19/06/30 05:01:58 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1561868076549_0002
19/06/30 05:02:05 INFO impl.YarnClientImpl: Submitted application application_1561868076549_0002
19/06/30 05:02:06 INFO mapreduce.Job: The url to track the job: http://Master:8088/proxy/application_1561868076549_0002/
19/06/30 05:02:06 INFO mapreduce.Job: Running job: job_1561868076549_0002
19/06/30 05:02:47 INFO mapreduce.Job: Job job_1561868076549_0002 running in uber mode : false
19/06/30 05:02:48 INFO mapreduce.Job:  map 0% reduce 0%
19/06/30 05:03:35 INFO mapreduce.Job:  map 100% reduce 0%
19/06/30 05:03:36 INFO mapreduce.Job: Job job_1561868076549_0002 completed successfully
19/06/30 05:03:37 INFO mapreduce.Job: Counters: 30
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=135030
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=87
		HDFS: Number of bytes written=151
		HDFS: Number of read operations=4
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Other local map tasks=1
		Total time spent by all maps in occupied slots (ms)=42476
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=42476
		Total vcore-seconds taken by all map tasks=42476
		Total megabyte-seconds taken by all map tasks=43495424
	Map-Reduce Framework
		Map input records=5
		Map output records=5
		Input split bytes=87
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=250
		CPU time spent (ms)=2700
		Physical memory (bytes) snapshot=108883968
		Virtual memory (bytes) snapshot=1934733312
		Total committed heap usage (bytes)=18415616
	File Input Format Counters
		Bytes Read=0
	File Output Format Counters
		Bytes Written=151
19/06/30 05:03:37 INFO mapreduce.ImportJobBase: Transferred 151 bytes in 150.6495 seconds (1.0023 bytes/sec)
19/06/30 05:03:37 INFO mapreduce.ImportJobBase: Retrieved 5 records.
```

Check whether the import succeeded:
```
$ hdfs dfs -cat /sqooptest/part-m-*
1201,gopal,manager,50000,TP
1202,manisha,Proof reader,50000,TP
1203,khalil,php dev,30000,AC
1204,prasanth,php dev,30000,AC
1205,kranthi,admin,20000,TP
```

3.2 Importing data from MySQL into Hive
Command:

```bash
bin/sqoop import \
--connect jdbc:mysql://Master:3306/test \
--username hadoop \
--password 123456 \
--table emp \
--hive-import \
--split-by id \
--m 1;
```

To import into Hive, add the --hive-import parameter and omit --target-dir; the other parameters are the same as for an HDFS import.
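One possible way to check the result, assuming the table lands under its original name emp in Hive's default database (both are assumptions about this setup):

```bash
# Query the imported table from the Hive CLI; table and database names are assumptions
hive -e "SELECT * FROM emp LIMIT 5;"
```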
3.3 Importing a subset of a table
Sometimes we do not need to import every row of a table; Sqoop also supports importing only part of the data.
For this, Sqoop's --where option can be used. The where condition becomes part of the SQL query executed on the database server, and the matching rows are stored in the target directory on HDFS.
The syntax of the where clause is:
```
--where <condition>
```

The following command imports the subset of the emp_add table whose city is sec-bad:

```bash
bin/sqoop import \
--connect jdbc:mysql://Master:3306/test \
--username hadoop \
--password 123456 \
--where "city ='sec-bad'" \
--target-dir /wherequery \
--table emp_add \
--m 1
```

Alternatively, a select statement can be used via --query:

```bash
bin/sqoop import \
--connect jdbc:mysql://Master:3306/test \
--username hadoop \
--password 123456 \
--target-dir /wherequery2 \
--query 'select id,name,deg from emp WHERE id>1207 and $CONDITIONS' \
--split-by id \
--fields-terminated-by '\t' \
--m 2
```

3.4 Incremental import
Incremental import is a technique that imports only the rows newly added to a table.
Sqoop supports two incremental import modes from MySQL: append, which relies on a monotonically increasing column, and lastmodified, which relies on a timestamp column.
3.4.1 append
Specify the following parameters:

```
--incremental append
--check-column num_id
--last-value 0
```

--check-column specifies the increasing column, and --last-value specifies the value reached by the previous import.
For example:

```bash
bin/sqoop import \
--connect jdbc:mysql://Master:3306/test \
--username hadoop \
--password 123456 \
--table emp --m 1 \
--incremental append \
--check-column id \
--last-value 1208
```

3.4.2 Based on a timestamp
Add the following parameters to the command:

```
--incremental lastmodified
--check-column created
--last-value '2012-02-01 11:0:00'
```

This imports only the rows whose created value is later than 2012-02-01 11:0:00.
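Put together, a full lastmodified import might look like the sketch below; note that the emp test table created earlier has no created timestamp column, so that column name is purely illustrative here:

```bash
# Hypothetical lastmodified import; --append avoids an error when the target directory already exists
bin/sqoop import \
--connect jdbc:mysql://Master:3306/test \
--username hadoop \
--password 123456 \
--table emp \
--target-dir /sqooptest \
--incremental lastmodified \
--check-column created \
--last-value '2012-02-01 11:0:00' \
--append \
--m 1
```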
4 Exporting data with Sqoop
Exporting moves files from HDFS into an RDBMS; the target table must already exist in the target database before the export. The default mode inserts the file contents into the table with INSERT statements; in update mode, UPDATE statements are generated to update existing rows instead.
Syntax
Export procedure:
1. First, manually create the target table in MySQL:

```sql
mysql> USE db;
mysql> CREATE TABLE employee (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(20),
    deg VARCHAR(20),
    salary INT,
    dept VARCHAR(10));
```

2. Run the export command:
```bash
bin/sqoop export \
--connect jdbc:mysql://Master:3306/test \
--username root \
--password root \
--table employee \
--export-dir /user/hadoop/emp/
```

3. Verify the table from the MySQL command line:

```sql
mysql> select * from employee;
```

If the data was stored successfully, the employee table contains rows like the following:
```
+------+----------+-------------+--------+------+
| Id   | Name     | Designation | Salary | Dept |
+------+----------+-------------+--------+------+
| 1201 | gopal    | manager     | 50000  | TP   |
| 1202 | manisha  | preader     | 50000  | TP   |
| 1203 | kalil    | php dev     | 30000  | AC   |
| 1204 | prasanth | php dev     | 30000  | AC   |
| 1205 | kranthi  | admin       | 20000  | TP   |
| 1206 | satish p | grp des     | 20000  | GR   |
+------+----------+-------------+--------+------+
```
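For the update mode mentioned at the start of this section, a hedged sketch of the corresponding export command (the employee table and the id key column come from the example above; --update-key names the column used to match existing rows):

```bash
# Update existing rows instead of inserting; rows with no matching id are skipped in updateonly mode
bin/sqoop export \
--connect jdbc:mysql://Master:3306/test \
--username root \
--password root \
--table employee \
--export-dir /user/hadoop/emp/ \
--update-key id \
--update-mode updateonly
```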