Greenplum 6 Installation Guide (CentOS 7.x)
I. Basic Concepts
Greenplum is a relational database aimed at data-warehouse workloads. Thanks to its architecture, it has clear advantages in data storage, high concurrency, high availability, linear scalability, response time, ease of use, and cost-effectiveness. Greenplum is a distributed database built on PostgreSQL with a shared-nothing architecture: each node controls its own host, operating system, memory, and storage, and nothing is shared between nodes.
In essence, Greenplum is a relational database cluster: a logical database composed of several independent database instances. Unlike Oracle RAC, this cluster uses an MPP (Massively Parallel Processing) architecture. Unlike single-node relational databases such as MySQL or Oracle, Greenplum is best understood as a distributed relational database.
For more information about Greenplum, visit https://greenplum.org/
II. Installation Preparation
1. Download the offline installation package
https://github.com/greenplum-db/gpdb/releases/tag/6.1.0
2. Upload it to the server, e.g. under /home/softs (any directory you choose)
3. Disable the firewall (all hosts)
iptables (CentOS 6.x):
    service iptables stop       # stop now
    chkconfig iptables off      # disable permanently
firewalld (CentOS 7.x):
    systemctl stop firewalld     # stop now
    systemctl disable firewalld  # disable permanently
4. Disable SELinux (all hosts)
[root@mdw ~]# vi /etc/selinux/config
Make sure the file contains SELINUX=disabled.
5. Configure /etc/hosts (all hosts)
This prepares for communication between the GP nodes later on. Set each host's hostname. A common naming convention is project_gp_role:
    Master:         dis_gp_mdw
    Standby Master: dis_gp_smdw
    Segment hosts:  dis_gp_sdw1, dis_gp_sdw2, and so on
If the standby master is co-located on a segment host, name it e.g. dis_gp_sdw3_smdw.
[root@mdw ~]# vi /etc/hosts
Add the IP and hostname of every machine, and make sure /etc/hosts on all machines contains entries like:
    192.168.xxx.xxx gp-mdw
    192.168.xxx.xxx gp-sdw1
    192.168.xxx.xxx gp-sdw2
    192.168.xxx.xxx gp-sdw3-mdw2
6. Change the hostname
CentOS 7.x: vi /etc/hostname
CentOS 6.x: vi /etc/sysconfig/network
Reboot the machine after the change.
7. Configure sysctl.conf (all hosts)
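Two of the kernel settings in this step (kernel.shmall and kernel.shmmax) depend on the host's physical memory, as the inline comments below show. As a small sketch using only standard getconf/expr (the resulting values differ per machine), the sysctl-ready lines can be printed like this:

```shell
#!/bin/sh
# Compute kernel.shmall (half the physical pages) and kernel.shmmax
# (half the physical memory in bytes), per the formulas in this step.
PAGES=$(getconf _PHYS_PAGES)
PAGE_SIZE=$(getconf PAGE_SIZE)
SHMALL=$(expr "$PAGES" / 2)
SHMMAX=$(expr "$PAGES" / 2 \* "$PAGE_SIZE")
echo "kernel.shmall = $SHMALL"
echo "kernel.shmmax = $SHMMAX"
```

Paste the two printed lines into /etc/sysctl.conf in place of the example values below.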
vi /etc/sysctl.conf

kernel.shmall = 197951838            # echo $(expr $(getconf _PHYS_PAGES) / 2)
kernel.shmmax = 810810728448         # echo $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 75             # vm.overcommit_ratio = (RAM - 0.026 * gp_vmem_rq) / RAM
                                     # gp_vmem_rq = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
net.ipv4.ip_local_port_range = 10000 65535
kernel.sem = 500 2048000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100

# For hosts with more than 64GB of RAM, add these four lines:
vm.dirty_background_ratio = 0
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736   # 1.5GB
vm.dirty_bytes = 4294967296              # 4GB
# For hosts with less than 64GB, omit dirty_background_bytes and dirty_bytes and add these two lines instead:
vm.dirty_background_ratio = 3
vm.dirty_ratio = 10

Notes on selected parameters:
- vm.min_free_kbytes is usually only set on systems with more than 64GB of RAM and is rarely configured otherwise. It ensures the PF_MEMALLOC reserves used by network and storage drivers can be satisfied, which matters most on machines with a lot of memory; the default is often too low. The recommended value is 3% of physical memory, which can be computed and appended with:
  awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo >> /etc/sysctl.conf
  Do not set vm.min_free_kbytes above 5% of system memory, or you may trigger out-of-memory conditions.
- file-max: the maximum number of file handles a process can hold open at once; this directly bounds the maximum number of concurrent connections.
- tcp_tw_reuse: when set to 1, sockets in TIME-WAIT state may be reused for new TCP connections. This is useful on servers, which always accumulate many TIME-WAIT connections.
- tcp_keepalive_time: how often TCP sends keepalive probes when keepalive is enabled. The default is 7200 seconds, i.e. the kernel probes a connection only after it has been idle for two hours; lowering it cleans up dead connections faster.
- tcp_fin_timeout: the maximum time a socket stays in FIN-WAIT-2 state after the server closes the connection.
- tcp_max_tw_buckets: the maximum number of TIME_WAIT sockets the system allows; beyond this, TIME_WAIT sockets are destroyed immediately and a warning is printed. The default is 180000; too many TIME_WAIT sockets slow a web server down.
- tcp_max_syn_backlog: the maximum queue length for SYN requests during the TCP three-way handshake (default 1024). Raising it keeps Linux from dropping client connection attempts when the server is too busy to accept new connections promptly.
- ip_local_port_range: the range of local ports used for UDP and TCP connections.
- net.ipv4.tcp_rmem: the minimum, default, and maximum size of the TCP receive buffer (the TCP receive sliding window).
- net.ipv4.tcp_wmem: the minimum, default, and maximum size of the TCP send buffer (the TCP send sliding window).
- netdev_max_backlog: the maximum length of the queue that holds packets when the NIC receives them faster than the kernel can process them.
- rmem_default / wmem_default: the default kernel socket receive / send buffer sizes.
- rmem_max / wmem_max: the maximum kernel socket receive / send buffer sizes.
8. Configure resource limits (all hosts)
Add the following parameters to /etc/security/limits.conf:

vi /etc/security/limits.conf
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072

The asterisk applies the limit to all users; nproc is the maximum number of processes and nofile the maximum number of open files.
RHEL / CentOS 6.x: also set nproc in /etc/security/limits.d/90-nproc.conf:
[root@mdw ~]# vi /etc/security/limits.d/90-nproc.conf
Make sure it contains: * soft nproc 131072
RHEL / CentOS 7.x: also set nproc in /etc/security/limits.d/20-nproc.conf:
[root@mdw ~]# vi /etc/security/limits.d/20-nproc.conf
Make sure it contains: * soft nproc 131072
ulimit -u shows the maximum number of processes per user (max user processes); verify that it returns 131072.
9. Check the locale
[root@mdw greenplum-db]# echo $LANG
en_US.UTF-8
If it is zh_CN.UTF-8, change it:
CentOS 6.x: /etc/sysconfig/i18n
CentOS 7.x: /etc/locale.conf
10. SSH connection threshold
Greenplum management utilities such as gpexpand, gpinitsystem, and gpaddmirrors use SSH connections to perform their tasks. In larger Greenplum clusters, the number of SSH connections these programs open can exceed a host's limit on unauthenticated connections. When that happens, you get the error: ssh_exchange_identification: Connection closed by remote host. To avoid this, raise the MaxStartups and MaxSessions parameters in /etc/ssh/sshd_config (or /etc/sshd_config):
vi /etc/ssh/sshd_config
MaxStartups 300:30:1000
Restart sshd for the change to take effect:
service sshd restart
11. Synchronize cluster clocks with NTP (this step is for reference; NTP is already set up in this environment)
To keep time consistent across the cluster, first edit /etc/ntp.conf on the master and point it at your data center's NTP server. If there is none, set the master's clock to the correct time, then edit /etc/ntp.conf on the other nodes so they follow the master.
[root@mdw ~]# vi /etc/ntp.conf
On the master: replace the existing server 1/2/3/4 lines with server xxx (ask your IT team for the company NTP server IP; if there is none, use server 1.cn.pool.ntp.org).
On the segments:
server mdw prefer   # prefer the master node
server smdw         # then the standby; if there is no standby, use the data center NTP server
[root@mdw ~]# service ntpd restart   # restart ntpd after the change
12. Create the gpadmin user (all hosts)
Create the gpadmin user on every node; it is used to manage and run the GP cluster.
[root@mdw ~]# groupadd gpadmin
[root@mdw ~]# useradd gpadmin -g gpadmin -s /bin/bash
[root@mdw ~]# passwd gpadmin
Password: gpadmin
III. Cluster Installation and Deployment
1. Install dependencies (all hosts, as root)
[root@mdw ~]# yum install -y zip unzip openssh-clients ed ntp net-tools perl perl-devel perl-ExtUtils* mlocate lrzsz parted apr apr-util bzip2 krb5-devel libevent libyaml rsync
2. Run the installer (as root)
Run the installer; it installs under /usr/local/ by default.
[root@mdw ~]# rpm -ivh greenplum-db-6.12.1-rhel7-x86_64.rpm
After installation you will see greenplum-db-6.12.1 and its symlink greenplum-db under /usr/local. To avoid permission problems, it is recommended to move the installation under /home/gpadmin:
1. Go to the parent directory of the installation:
   cd /usr/local
2. Move the installation directory to /home/gpadmin:
   mv greenplum-db-6.12.1 /home/gpadmin
3. Remove the old symlink:
   /bin/rm -r greenplum-db
4. Create a new symlink under /home/gpadmin:
   ln -s /home/gpadmin/greenplum-db-6.12.1 /home/gpadmin/greenplum-db
5. Edit greenplum_path.sh (important; may not be necessary on greenplum-db-6.12.1):
   vi /home/gpadmin/greenplum-db/greenplum_path.sh
   Change GPHOME=/usr/local/greenplum-db-6.12.1 to GPHOME=/home/gpadmin/greenplum-db
6. Give ownership of the files to gpadmin:
   cd /home
   chown -R gpadmin:gpadmin /home/gpadmin
3. Set up cluster-wide passwordless SSH (as root)
Generate a key pair. Starting with GP 6.x, gpssh-exkeys no longer generates keys automatically, so generate them manually:
cd /home/gpadmin/greenplum-db
[root@mdw greenplum-db]# ssh-keygen -t rsa
Press Enter at every prompt to accept the defaults.
4. Copy the local public key into the authorized_keys file on each node
[root@mdw greenplum-db]# ssh-copy-id gp-sdw1
[root@mdw greenplum-db]# ssh-copy-id gp-sdw2
[root@mdw greenplum-db]# ssh-copy-id gp-sdw3-mdw2
5. Use gpssh-exkeys to establish n-to-n passwordless login
vi all_host
Add every hostname to the file:
gp-mdw
gp-sdw1
gp-sdw2
gp-sdw3-mdw2
[root@mdw greenplum-db]# source /home/gpadmin/greenplum-db/greenplum_path.sh
[root@mdw greenplum-db]# gpssh-exkeys -f all_host
6. Sync the master configuration to the other hosts
Set up passwordless login for the gpadmin user:
[root@mdw greenplum-db-6.2.1]# su - gpadmin
[gpadmin@mdw ~]$ source /home/gpadmin/greenplum-db/greenplum_path.sh
[gpadmin@mdw ~]$ ssh-keygen -t rsa
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw1
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw2
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw3-mdw2
[gpadmin@mdw greenplum-db]$ mkdir gpconfigs
[gpadmin@mdw greenplum-db]$ cd gpconfigs
[gpadmin@mdw greenplum-db]$ vi all_hosts
Add every hostname:
gp-mdw
gp-sdw1
gp-sdw2
gp-sdw3-mdw2
[gpadmin@mdw ~]$ gpssh-exkeys -f /home/gpadmin/gpconfigs/all_hosts
[gpadmin@mdw greenplum-db]$ vi /home/gpadmin/gpconfigs/seg_hosts
Add the hostname of every data (segment) node:
gp-sdw1
gp-sdw2
gp-sdw3-mdw2
7. Set the Greenplum environment variables for the gpadmin user (as gpadmin)
Add the GP installation directory and environment settings to the user's environment:
vi .bashrc
source /home/gpadmin/greenplum-db/greenplum_path.sh
8. Copy the system settings to the other nodes in bulk (skip if every machine was already configured earlier)
Example:
[gpadmin@mdw gpconfigs]$ exit
[root@mdw ~]# source /home/gpadmin/greenplum-db/greenplum_path.sh
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/hosts root@=:/etc/hosts
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/security/limits.conf root@=:/etc/security/limits.conf
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/sysctl.conf root@=:/etc/sysctl.conf
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/security/limits.d/20-nproc.conf root@=:/etc/security/limits.d/20-nproc.conf
[root@mdw ~]# gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'sysctl -p'
[root@mdw ~]# gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'reboot'
9. Install on the cluster nodes
Emulating the gpseginstall script (as gpadmin). GP 6.x no longer ships the gpseginstall command; the steps below reproduce its main workflow to deploy the segments. Run as the gpadmin user:
[root@gp-mdw gpadmin]# su - gpadmin
[gpadmin@gp-mdw ~]$ cd /home/gpadmin
[gpadmin@gp-mdw ~]$ tar -cf gp6.tar greenplum-db-6.12.1/
[gpadmin@gp-mdw ~]$ vi /home/gpadmin/gpconfigs/gpseginstall_hosts
Add:
gp-sdw1
gp-sdw2
gp-sdw3-smdw
10. Distribute the archive to the segments
[gpadmin@gp-mdw ~]$ gpscp -f /home/gpadmin/gpconfigs/gpseginstall_hosts gp6.tar gpadmin@=:/home/gpadmin
11. Use gpssh to connect to each segment and run commands
[gpadmin@mdw gpconfigs]$ gpssh -f /home/gpadmin/gpconfigs/gpseginstall_hosts
=> tar -xf gp6.tar
=> ln -s greenplum-db-6.12.1 greenplum-db
=> exit
12. Distribute the environment file to the other nodes
[gpadmin@mdw gpconfigs]$ exit
[root@mdw greenplum-db-6.2.1]# su - gpadmin
[gpadmin@mdw ~]$ cd gpconfigs
[gpadmin@mdw gpconfigs]$ vi seg_hosts
Add the hostname of every segment:
gp-sdw1
gp-sdw2
gp-sdw3-smdw
[gpadmin@mdw gpconfigs]$ gpscp -f /home/gpadmin/gpconfigs/seg_hosts /home/gpadmin/.bashrc gpadmin@=:/home/gpadmin/.bashrc
13. Create the cluster data directories (as root)
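This step creates the layout shown in the commands that follow: /data/master on the master (and standby), plus two primary and two mirror directories on each segment host. As a local sketch, DATA_ROOT stands in for /data so the layout can be dry-run outside the cluster; on the real hosts the gpssh commands below apply it against /data as root:

```shell
#!/bin/sh
# Sketch of the Greenplum data-directory layout created in this step.
# DATA_ROOT defaults to /data; point it at a scratch directory to dry-run.
DATA_ROOT="${DATA_ROOT:-/data}"
mkdir -p "$DATA_ROOT/master"        # master/standby catalog directory
for d in p1 p2 m1 m2; do            # two primaries, two mirrors per segment host
    mkdir -p "$DATA_ROOT/$d"
done
ls "$DATA_ROOT"
```

On the cluster, ownership must then be handed to gpadmin (chown -R gpadmin:gpadmin /data), as the commands below do.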
1. Create the master data directory:
   mkdir -p /data/master
   chown -R gpadmin:gpadmin /data
   source /home/gpadmin/greenplum-db/greenplum_path.sh
   If there is a standby node, also run the following two commands (substitute your standby's hostname for gp-sdw3-mdw2):
   gpssh -h gp-sdw3-mdw2 -e 'mkdir -p /data/master'
   gpssh -h gp-sdw3-mdw2 -e 'chown -R gpadmin:gpadmin /data'
2. Create the segment data directories:
   source /home/gpadmin/greenplum-db/greenplum_path.sh
   gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/p1'
   gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/p2'
   gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/m1'
   gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/m2'
   gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'chown -R gpadmin:gpadmin /data'
IV. Cluster Initialization
1. Write the initialization configuration file (as gpadmin)
Copy the configuration template:
su - gpadmin
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config
2. Adjust the parameters as needed
vi /home/gpadmin/gpconfigs/gpinitsystem_config
Note: to specify PORT_BASE, review the port range set by the net.ipv4.ip_local_port_range parameter in /etc/sysctl.conf.
The main parameters to change:
# primary data directories
declare -a DATA_DIRECTORY=(/data/p1 /data/p2)
# hostname of the master node
MASTER_HOSTNAME=gp-mdw
# master data directory
MASTER_DIRECTORY=/data/master
# uncomment to enable mirrors: mirror port base
MIRROR_PORT_BASE=7000
# uncomment to enable mirrors: mirror data directories
declare -a MIRROR_DATA_DIRECTORY=(/data/m1 /data/m2)
3. Initialize the cluster (as gpadmin)
Run the script:
gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config --locale=C -h /home/gpadmin/gpconfigs/gpseginstall_hosts --mirror-mode=spread
Note: spread distributes each host's mirrors across different hosts, and is only allowed when the number of hosts is greater than the number of segment instances per host. If --mirror-mode is not given, the default group policy is used, which places all of one host's mirrors on a single other host; with more than one segment instance per host, spread means a host failure does not push all of its mirrors onto one machine and turn that machine into a performance bottleneck.
Log from a successful installation:
. . . .
server shutting down
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[INFO]:-Terminating processes for segment /data/master/gpseg-1
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[ERROR]:-Failed to kill processes for segment /data/master/gpseg-1: ([Errno 3] No such process)
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data/master/gpseg-1
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.1.0 build commit:6788ca8c13b2bd6e8976ccffea07313cbab30560'
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Setting new master era
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Master Started...
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Shutting down master
20201220:12:07:02:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
..
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Process results...
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-   Successful segment starts                                            = 6
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Successfully started 6 of 6 segment instances
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting Master instance gp-mdw directory /data/master/gpseg-1
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Command pg_ctl reports Master gp-mdw instance active
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-No standby master configured.  skipping...
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Database successfully started
20201220:12:07:05:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20201220:12:07:06:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20201220:12:07:06:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
......
20201220:12:07:07:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
.................................................................
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Parallel process exit status
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as completed           = 6
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as killed              = 0
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as failed              = 0
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Log file scan check passed
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To complete the environment configuration, please
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   or, use -d /data/master/gpseg-1 option for the Greenplum scripts
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   Example gpstate -d /data/master/gpseg-1
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20201220.log
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Review options for gpinitstandby
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-The Master /data/master/gpseg-1/pg_hba.conf post gpinitsystem
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-located in the /home/gpadmin/greenplum-db/docs directory
20201220:12:08:15:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
4. Rolling back a failed installation
If the installation fails partway through, the log tells you to roll back with bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_*; run that script. For example:
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[FATAL]:-Unknown host gpzq-sh-mb: ping: unknown host gpzq-sh-mb unknown host Script Exiting!
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[WARN]:-Run command bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20191218_203938 to remove these changes
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function BACKOUT_COMMAND
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[INFO]:-End Function BACKOUT_COMMAND
[gpadmin@mdw gpAdminLogs]$ ls
backout_gpinitsystem_gpadmin_20191218_203938  gpinitsystem_20191218.log
[gpadmin@mdw gpAdminLogs]$ bash backout_gpinitsystem_gpadmin_20191218_203938
Stopping Master instance
waiting for server to shut down.... done
server stopped
Removing Master log file
Removing Master lock files
Removing Master data directory files
If the rollback still leaves artifacts behind, run the following and then reinstall:
pg_ctl -D /data/master/gpseg-1 stop
rm -f /tmp/.s.PGSQL.5432 /tmp/.s.PGSQL.5432.lock
On the master node:
rm -rf /data/master/gpseg*
On all data nodes:
rm -rf /data/p1/gpseg*
rm -rf /data/p2/gpseg*
5. Set environment variables after a successful installation (as gpadmin)
Edit the gpadmin user's environment and add (important):
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
The following are also commonly added (optional):
export PGPORT=5432       # adjust to your environment
export PGUSER=gpadmin    # adjust to your environment
export PGDATABASE=gpdw   # adjust to your environment
Since source /home/gpadmin/greenplum-db/greenplum_path.sh was already added to .bashrc earlier, the change here is simply:
vi .bashrc
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
6. Post-installation configuration
Log in to GP with psql and set a password (as gpadmin):
psql -h hostname -p port -d database -U user -W
  -h : hostname of the master or a segment
  -p : port of the master or segment
  -d : database name
  -W : prompt for the password
These parameters can also be set through the environment variables above; on Linux the gpadmin user can log in without a password.
psql -h 127.0.0.1 -p 5432 -d database -U gpadmin
Example of logging in and setting the gpadmin password:
psql -d postgres
alter user gpadmin encrypted password 'gpadmin';
or:
su gpadmin
psql -p 5432
alter role gpadmin with password '123456';
Quit: \q
List databases: \l
                              List of databases
   Name    |  Owner  | Encoding | Collate | Ctype |  Access privileges
-----------+---------+----------+---------+-------+---------------------
 postgres  | gpadmin | UTF8     | C       | C     |
 template0 | gpadmin | UTF8     | C       | C     | =c/gpadmin         +
           |         |          |         |       | gpadmin=CTc/gpadmin
 template1 | gpadmin | UTF8     | C       | C     | =c/gpadmin         +
           |         |          |         |       | gpadmin=CTc/gpadmin
(3 rows)
7. Client login to GP
Overview
Client authentication is controlled by a configuration file, normally named pg_hba.conf, stored in the database cluster's data directory. HBA stands for "host-based authentication". When initdb initializes the data directory, it installs a default pg_hba.conf; the authentication configuration file can also be kept elsewhere.
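As an illustration (the subnet and method below are placeholders, not values from this installation), a pg_hba.conf entry allowing a client network to connect to any database as gpadmin with password authentication looks like:

```
# TYPE  DATABASE  USER     ADDRESS          METHOD
host    all       gpadmin  192.168.0.0/24   md5
```

After editing pg_hba.conf, reload the configuration without a restart using gpstop -u.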
8. Initialize the standby node
gpinitstandby -s gp-sdw3-smdw
[gpadmin@gp-mdw ~]$ gpinitstandby -s gp-sdw3-smdw
20210311:19:25:38:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20210311:19:25:38:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Checking for data directory /data/master/gpseg-1 on gp-sdw3-smdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:------------------------------------------------------
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:------------------------------------------------------
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master hostname               = gp-mdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master data directory         = /data/master/gpseg-1
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = gp-sdw3-smdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /data/master/gpseg-1
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
Do you want to continue with standby master initialization? Yy|Nn (default=N):
> y
20210311:19:25:42:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-The packages on gp-sdw3-smdw are consistent.
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20210311:19:25:45:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20210311:19:25:49:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Starting standby master
20210311:19:25:49:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Checking if standby master is running on host: gp-sdw3-smdw  in directory: /data/master/gpseg-1
20210311:19:25:53:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20210311:19:25:54:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20210311:19:25:54:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Successfully created standby master on gp-sdw3-smdw
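With the standby in place, the overall cluster state can be checked with the standard gpstate utility (shown as a transcript; output omitted here and will vary by cluster):

```
[gpadmin@gp-mdw ~]$ gpstate -s    # detailed status of the master and every segment
[gpadmin@gp-mdw ~]$ gpstate -f    # standby master replication details
```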