Vertica集群扩容实验过程记录
Requirement:
Expand an existing 3-node Vertica cluster by adding 3 more nodes, for a total of 6 nodes.
Test environment:
RHEL 6.5 + Vertica 7.2.2-2
Steps:
- 1. Create the 3-node Vertica cluster
- 2. Simulate a minimal business test case
- 3. Pre-expansion preparation
- 4. Expand the cluster: add 3 nodes to the cluster
- Reference
1. Create the 3-node Vertica cluster

Planned IP addresses and hostnames for the three nodes:

```
192.168.56.121 vnode01
192.168.56.122 vnode02
192.168.56.123 vnode03
```

Planned data storage directory and its owner/group:

```shell
mkdir -p /data/verticadb
chown -R dbadmin:verticadba /data/verticadb
```

The installation of this 3-node Vertica cluster is not repeated here; combining the earlier articles below, you can set it up without trouble.
FYI:
- Quickly configuring cluster-wide SSH trust on Linux
- Vertica 7.1 installation best practices (RHEL 6.4)
- Vertica: installation, creating a database, creating a test user and granting privileges, creating tables, loading data
Tip: the 7.2 installer depends on the dialog package. If the system does not have it pre-installed, locate it on the matching OS installation media and install it on every node directly with rpm, as follows:
```
[root@vnode01 Packages]# cluster_copy_all_nodes /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm /root
dialog-1.1-9.20080819.1.el6.x86_64.rpm        100%  197KB 197.1KB/s   00:00
dialog-1.1-9.20080819.1.el6.x86_64.rpm        100%  197KB 197.1KB/s   00:00
[root@vnode01 Packages]# cluster_run_all_nodes "hostname; rpm -ivh /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm"
vnode01
warning: /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ##################################################
dialog      ##################################################
vnode02
warning: /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ##################################################
dialog      ##################################################
vnode03
warning: /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing... ##################################################
dialog      ##################################################
[root@vnode01 Packages]# cluster_run_all_nodes "hostname; rpm -q dialog"
vnode01
dialog-1.1-9.20080819.1.el6.x86_64
vnode02
dialog-1.1-9.20080819.1.el6.x86_64
vnode03
dialog-1.1-9.20080819.1.el6.x86_64
```

Once the installation completes, the cluster state should look like this:
```
dbadmin=> select * from nodes;
     node_name      |      node_id      | node_state |  node_address  | node_address_family | export_address | export_address_family |                        catalog_path                        | node_type | is_ephemeral | standing_in_for | node_down_since
--------------------+-------------------+------------+----------------+---------------------+----------------+-----------------------+------------------------------------------------------------+-----------+--------------+-----------------+-----------------
 v_testmpp_node0001 | 45035996273704982 | UP         | 192.168.56.121 | ipv4                | 192.168.56.121 | ipv4                  | /data/verticadb/TESTMPP/v_testmpp_node0001_catalog/Catalog | PERMANENT | f            |                 |
 v_testmpp_node0002 | 45035996273721500 | UP         | 192.168.56.122 | ipv4                | 192.168.56.122 | ipv4                  | /data/verticadb/TESTMPP/v_testmpp_node0002_catalog/Catalog | PERMANENT | f            |                 |
 v_testmpp_node0003 | 45035996273721504 | UP         | 192.168.56.123 | ipv4                | 192.168.56.123 | ipv4                  | /data/verticadb/TESTMPP/v_testmpp_node0003_catalog/Catalog | PERMANENT | f            |                 |
(3 rows)
```

2. Simulate a minimal business test case
To better simulate a database that already carries business load, create a minimal business test case:
FYI:
- Vertica: loading data as a business user with a dedicated resource pool
- Vertica partitioned table design (continued)
While following the resource-pool article above, granting read privilege on a directory failed, probably due to a version difference. The error and the fix:
```
-- Error:
dbadmin=> CREATE LOCATION '/tmp' NODE 'v_testmpp_node0001' USAGE 'USER';
CREATE LOCATION
dbadmin=> GRANT READ ON LOCATION '/tmp' TO test;
ROLLBACK 5365:  User available location ["/tmp"] does not exist on node ["v_testmpp_node0002"]

-- Fix: drop the location just created on node 1, then run CREATE LOCATION again,
-- this time with the "ALL NODES" clause:
dbadmin=> SELECT DROP_LOCATION('/tmp' , 'v_testmpp_node0001');
 DROP_LOCATION
---------------
 /tmp dropped.
(1 row)

dbadmin=> CREATE LOCATION '/tmp' ALL NODES USAGE 'USER';
CREATE LOCATION
dbadmin=> GRANT READ ON LOCATION '/tmp' TO test;
GRANT PRIVILEGE
```

3. Pre-expansion preparation
Before expanding the cluster, each of the new nodes must be prepared.
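Beyond the checklist below, the OS-level settings that the Vertica installer verifies can be brought in line up front. A sketch for RHEL 6, assuming root SSH access to the new nodes is already available and that the data disk is the same LVM device as on the existing nodes (the device path is from this environment and will differ elsewhere):

```shell
for host in vnode04 vnode05 vnode06; do
    ssh "root@${host}" '
        hostname
        # Firewall off now and across reboots.
        service iptables stop; chkconfig iptables off
        # SELinux permissive now; disabled permanently in the config file.
        setenforce 0
        sed -i "s/^SELINUX=.*/SELINUX=disabled/" /etc/selinux/config
        # Readahead, NTP, and transparent hugepages, matching the installer checks.
        blockdev --setra 2048 /dev/mapper/vg_linuxbase-lv_root
        service ntpd start; chkconfig ntpd on
        echo never > /sys/kernel/mm/transparent_hugepage/enabled
    '
done
```

Note that the transparent hugepages setting does not survive a reboot; on RHEL 6 the usual workaround is to append the same echo line to /etc/rc.local.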
3.1 Confirm the planned IP addresses, hostnames, and data storage directory

Planned IP addresses and hostnames:

```
192.168.56.124 vnode04
192.168.56.125 vnode05
192.168.56.126 vnode06
```

Planned data storage directory and its owner/group:

```shell
mkdir -p /data/verticadb
# Change the directory owner and group. Do not use -R here: on the nodes
# already installed, this directory contains many subdirectories.
chown dbadmin:verticadba /data/verticadb
```

3.2 Configure SSH trust for root
```shell
# Remove all current root SSH-trust configuration (run on node 1).
# This is safe because deleting root's SSH trust does not affect the
# running Vertica cluster.
cluster_run_all_nodes "hostname; rm -rf ~/.ssh"
rm -rf ~/.ssh

# hosts file on node 1 (vi /etc/hosts)
192.168.56.121 vnode01
192.168.56.122 vnode02
192.168.56.123 vnode03
192.168.56.124 vnode04
192.168.56.125 vnode05
192.168.56.126 vnode06

# Environment variable on node 1 (vi ~/.bash_profile)
export NODE_LIST='vnode01 vnode02 vnode03 vnode04 vnode05 vnode06'
# Log in again, or source the profile, to apply the variable
source ~/.bash_profile
```

Then re-establish root SSH trust across all six nodes, following "Quickly configuring cluster-wide SSH trust on Linux".
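The referenced article is not reproduced here, but the gist of re-establishing trust, assuming password authentication is still enabled on all nodes, is roughly:

```shell
# Generate a fresh key pair for root on node 1 (empty passphrase).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every node in NODE_LIST, including node 1 itself
# (ssh-copy-id prompts for each node's root password).
for host in $NODE_LIST; do
    ssh-copy-id -o StrictHostKeyChecking=no "root@${host}"
done

# Verify passwordless access works everywhere.
cluster_run_all_nodes "hostname"
```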
3.3 Create the same data storage directory on all nodes

```shell
cluster_run_all_nodes "hostname; mkdir -p /data/verticadb"
```

3.4 Confirm the firewall and SELinux are disabled on all nodes

```shell
cluster_run_all_nodes "hostname; service iptables status"
cluster_run_all_nodes "hostname; getenforce"
```

3.5 Confirm the dialog dependency is installed
```shell
cluster_run_all_nodes "hostname; rpm -q dialog"
```

4. Expand the cluster: add 3 nodes to the cluster

4.1 Add the 3 nodes to the cluster
The general syntax is:

```shell
/opt/vertica/sbin/update_vertica --add-hosts host(s) --rpm package
```

Here I am adding 3 nodes, specified by hostname:

```shell
/opt/vertica/sbin/update_vertica --add-hosts vnode04,vnode05,vnode06 --rpm /root/vertica-7.2.2-2.x86_64.RHEL6.rpm --failure-threshold=HALT -u dbadmin -p vertica
```

The run proceeds as follows:
```
[root@vnode01 ~]# /opt/vertica/sbin/update_vertica --add-hosts vnode04,vnode05,vnode06 --rpm /root/vertica-7.2.2-2.x86_64.RHEL6.rpm --failure-threshold=HALT -u dbadmin -p vertica
Vertica Analytic Database 7.2.2-2 Installation Tool

>> Validating options...
Mapping hostnames in --add-hosts (-A) to addresses...
    vnode04 => 192.168.56.124
    vnode05 => 192.168.56.125
    vnode06 => 192.168.56.126

>> Starting installation tasks.
>> Getting system information for cluster (this may take a while)...
Default shell on nodes:
192.168.56.126 /bin/bash
192.168.56.125 /bin/bash
192.168.56.124 /bin/bash
192.168.56.123 /bin/bash
192.168.56.122 /bin/bash
192.168.56.121 /bin/bash

>> Validating software versions (rpm or deb)...
>> Beginning new cluster creation...
successfully backed up admintools.conf on 192.168.56.123
successfully backed up admintools.conf on 192.168.56.122
successfully backed up admintools.conf on 192.168.56.121

>> Creating or validating DB Admin user/group...
Successful on hosts (6): 192.168.56.126 192.168.56.125 192.168.56.124 192.168.56.123 192.168.56.122 192.168.56.121
Provided DB Admin account details: user = dbadmin, group = verticadba, home = /home/dbadmin
Creating group... Group already exists
Validating group... Okay
Creating user... User already exists
Validating user... Okay

>> Validating node and cluster prerequisites...
Prerequisites not fully met during local (OS) configuration for verify-192.168.56.126.xml:
    HINT (S0151): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0151
        These disks do not have known IO schedulers: '/dev/mapper/vg_linuxbase-lv_root' ('') = ''
    HINT (S0305): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0305
        TZ is unset for dbadmin. Consider updating .profile or .bashrc
    WARN (S0170): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0170
        lsblk (LVM utility) indicates LVM on the data directory.
    FAIL (S0020): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0020
        Readahead size of (/dev/mapper/vg_linuxbase-lv_root) is too low for typical systems: 256 < 2048
    FAIL (S0030): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0030
        ntp daemon process is not running: ['ntpd', 'ntp', 'chronyd']
    FAIL (S0310): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0310
        Transparent hugepages is set to 'always'. Must be 'never' or 'madvise'.

(the same HINT/WARN/FAIL block is reported for each of the other five nodes)

System prerequisites passed. Threshold = HALT

>> Establishing DB Admin SSH connectivity...
Installing/Repairing SSH keys for dbadmin

>> Setting up each node and modifying cluster...
Creating Vertica Data Directory...
Updating agent...
Creating node node0004 definition for host 192.168.56.124 ... Done
Creating node node0005 definition for host 192.168.56.125 ... Done
Creating node node0006 definition for host 192.168.56.126 ... Done

>> Sending new cluster configuration to all nodes...
Starting agent...

>> Completing installation...
Running upgrade logic
No spread upgrade required: /opt/vertica/config/vspread.conf not found on any node
Installation complete.

Please evaluate your hardware using Vertica's validation tools:
    https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=VALSCRIPT

To create a database:
1. Logout and login as dbadmin. (see note below)
2. Run /opt/vertica/bin/adminTools as dbadmin
3. Select Create Database from the Configuration Menu
Note: Installation may have made configuration changes to dbadmin
that do not take effect until the next session (logout and login).
To add or remove hosts, select Cluster Management from the Advanced Menu.
```

4.2 Change the owner and group of the data storage directory
```shell
# After the software installation, change the directory owner and group.
# Again, do not use -R: on the previously installed nodes this directory
# already contains many subdirectories.
cluster_run_all_nodes "hostname; chown dbadmin:verticadba /data/verticadb"
```

4.3 Add the newly expanded nodes to the database
Log in as dbadmin and add the nodes with the admintools utility:

```
7 Advanced Menu
  -> 6 Cluster Management
  -> 1 Add Host(s)
  -> Select database                    (select with the space bar)
  -> Select host(s) to add to database  (select the new nodes with the space bar)
  -> Are you sure you want to add ['192.168.56.124', '192.168.56.125', '192.168.56.126'] to the database?
  -> Failed to add nodes to database
     ROLLBACK 2382:  Cannot create another node. The current license permits 3 node(s) and the database catalog already contains 3 node(s)
```

This fails because the Vertica Community Edition license allows at most 3 nodes.
If you have purchased an official HP Vertica license (permanent or temporary), import the license first and then add the new cluster nodes to the database.
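The active license can be inspected, and a new one installed, from vsql. A sketch (the license file path here is purely illustrative):

```shell
# Show the current license terms, including the permitted node count.
vsql -U dbadmin -c "SELECT DISPLAY_LICENSE();"

# Install the purchased license file, then re-check.
vsql -U dbadmin -c "SELECT INSTALL_LICENSE('/tmp/vlicense.dat');"
vsql -U dbadmin -c "SELECT DISPLAY_LICENSE();"
```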
With a valid license the tool proceeds past this point, and data rebalancing starts. Once the rebalance finishes, it reports:

```
Data Rebalance completed successfully.
Press <Enter> to return to the Administration Tools menu.
```

At this point the Vertica cluster expansion is complete.
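As a final sanity check that all six nodes joined and are up (a sketch; the vsql connection options depend on your environment):

```shell
# Expect six rows, one per node, all reporting UP.
vsql -U dbadmin -t -A -c \
  "SELECT node_name, node_state FROM nodes ORDER BY node_name;"
```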
Reference
- Understanding Cluster Rebalancing in HP Vertica