Building a 3-node Postgres-XL cluster (including GTM standby) on RHEL 7 KVM virtual machines
This Postgres-XL experiment was run on my own laptop under RHEL 7.2 x86_64, using KVM. Six virtual machines were used in total: one openfiler 2.99 box publishing shared storage, one GTM master, one GTM slave, and three nodes each running gtm_proxy/coordinator/datanode. Apart from the openfiler box, the other five VMs were initialized from a minimal RHEL 7.2 x86_64 install, and each has two NICs: one on 192.168.122.* for service traffic and one on 10.10.10.* for reading the shared storage published by openfiler. The Postgres-XL service plan is as follows.
| Node | Role | IP address | Port | Data directory | Pooler port |
| --- | --- | --- | --- | --- | --- |
| gtm_mast | gtm master | 192.168.122.179 | 20001 | /pgdata/gtm/data | |
| gtm_slav | gtm slave | 192.168.122.189 | 20001 | /pgdata/gtm/data | |
| gtm_pxy01 | gtm proxy | 192.168.122.171 | 20001 | /pgdata/gtm_pxy01/data | |
| gtm_pxy02 | gtm proxy | 192.168.122.172 | 20001 | /pgdata/gtm_pxy02/data | |
| gtm_pxy03 | gtm proxy | 192.168.122.173 | 20001 | /pgdata/gtm_pxy03/data | |
| coord01 | coordinator | 192.168.122.171 | 15432 | /pgdata/coord01/data | 40101 |
| coord02 | coordinator | 192.168.122.172 | 15432 | /pgdata/coord02/data | 40102 |
| coord03 | coordinator | 192.168.122.173 | 15432 | /pgdata/coord03/data | 40103 |
| datan01 | datanode | 192.168.122.181 | 25431 | /pgdata/datan01/data | 40401 |
| datan02 | datanode | 192.168.122.182 | 25432 | /pgdata/datan02/data | 40402 |
| datan03 | datanode | 192.168.122.183 | 25433 | /pgdata/datan03/data | 40403 |
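As a quick sanity check on the plan above, the service ip:port pairs can be scanned for duplicates before installing anything. This little script is an illustration, not part of the original procedure (pooler ports are omitted for brevity):

```shell
# Every service ip:port pair from the plan; duplicates would mean two
# services trying to bind the same address.
plan="
192.168.122.179:20001
192.168.122.189:20001
192.168.122.171:20001
192.168.122.172:20001
192.168.122.173:20001
192.168.122.171:15432
192.168.122.172:15432
192.168.122.173:15432
192.168.122.181:25431
192.168.122.182:25432
192.168.122.183:25433
"
# uniq -d prints only repeated lines; sed drops the blank lines from $plan
dups=$(echo "$plan" | sort | uniq -d | sed '/^$/d')
if [ -z "$dups" ]; then
    echo "port plan OK: no ip:port conflicts"
else
    echo "conflicts found:"; echo "$dups"
fi
```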
I. Virtual machine OS configuration
1. Hostname configuration
Set the hostname on each VM:

```
hostnamectl set-hostname rhel7pg171
```

Then turn /etc/hosts into a consistently formatted file (three columns if you include a domain name, two columns otherwise).
From /etc/hosts you can see that each VM's hostname has the form rhel7pgxxx.
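As a sketch, an /etc/hosts built from the plan above might look like the following. Only rhel7pg171 is stated in the article; the other rhel7pgXXX names follow the last-octet pattern and are an assumption:

```
192.168.122.179  rhel7pg179   # gtm_mast
192.168.122.189  rhel7pg189   # gtm_slav
192.168.122.171  rhel7pg171   # gtm_pxy01 / coord01
192.168.122.172  rhel7pg172   # gtm_pxy02 / coord02
192.168.122.173  rhel7pg173   # gtm_pxy03 / coord03
192.168.122.181  rhel7pg181   # datan01
192.168.122.182  rhel7pg182   # datan02
192.168.122.183  rhel7pg183   # datan03
```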
2. Security settings
Disable SELinux and the firewall on every VM.
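The in-place sed edit of /etc/selinux/config can be rehearsed on a scratch copy first. A minimal sketch (the scratch path comes from mktemp, and the two-line file below is a stand-in for the real config, not its full contents):

```shell
# Rehearse the SELinux edit on a scratch copy before touching the real file
scratch=$(mktemp -d)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$scratch/config"
# -i.bak edits in place and keeps the original as config.bak
sed -i.bak "s/SELINUX=enforcing/SELINUX=disabled/g" "$scratch/config"
grep '^SELINUX=' "$scratch/config"   # prints SELINUX=disabled
```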
```
setenforce 0
sed -i.bak "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
systemctl disable firewalld.service
systemctl stop firewalld.service
iptables --flush
```

3. Local yum repository configuration
Mount the OS ISO from the local CD-ROM onto a local directory:

```
mkdir -p /mnt/iso
mount /dev/cdrom /mnt/iso
# Add it to fstab so it is mounted automatically on the next boot
echo "/dev/cdrom /mnt/iso iso9660 defaults 0 0" >> /etc/fstab
```

Create the local yum repository definition:

```
vi /etc/yum.repos.d/base.repo
cat /etc/yum.repos.d/base.repo
[rhel7]
name=rhel7
baseurl=file:///mnt/iso
gpgcheck=0

[rhel7-HA]
name=rhel7-HA
baseurl=file:///mnt/iso/addons/HighAvailability
gpgcheck=0

[rhel7-RS]
name=rhel7-RS
baseurl=file:///mnt/iso/addons/ResilientStorage
```

Refresh the yum metadata:

```
yum clean all
yum list
yum group list
```

4. Time synchronization configuration
Install the chrony package:

```
yum install chrony.x86_64 -y
```

Edit the configuration file: comment out the default servers and add a known-good or self-hosted time server. In this experiment the openfiler VM at 192.168.122.100 publishes an NTP time source:

```
vi /etc/chrony.conf
# server 0.rhel.pool.ntp.org iburst
# server 1.rhel.pool.ntp.org iburst
# server 2.rhel.pool.ntp.org iburst
# server 3.rhel.pool.ntp.org iburst
server 192.168.122.100 iburst
```

Restart the time synchronization service:

```
systemctl restart chronyd.service
```

Check its status:

```
systemctl status chronyd.service
```

Enable it at boot:

```
systemctl enable chronyd.service
```

List the time sources:

```
chronyc sources -v
```

Check source statistics:

```
chronyc sourcestats -v
```

5. Reboot

Reboot all the VMs so the hostname and SELinux changes take effect:

```
init 6
```

II. Postgres-XL software installation
1. Installing dependencies

General build dependencies:

```
yum install -y make mpfr libmpc cpp kernel-headers glibc-headers glibc-devel \
    libgomp libstdc++-devel libquadmath libgfortran libgnat libgnat-devel \
    libobjc gcc gcc-c++ libquadmath-devel gcc-gfortran gcc-gnat gcc-objc \
    gcc-objc++ ncurses-devel readline readline-devel zlib-devel m4 flex \
    bison mailcap
```

Perl support:

```
yum install -y perl \
    perl-Carp \
    perl-constant \
    perl-Encode \
    perl-Exporter \
    perl-File-Path \
    perl-File-Temp \
    perl-Filter \
    perl-Getopt-Long \
    perl-HTTP-Tiny \
    perl-libs \
    perl-macros \
    perl-parent \
    perl-PathTools \
    perl-Pod-Escapes \
    perl-podlators \
    perl-Pod-Perldoc \
    perl-Pod-Simple \
    perl-Pod-Usage \
    perl-Scalar-List-Utils \
    perl-Socket \
    perl-Storable \
    perl-Text-ParseWords \
    perl-threads \
    perl-threads-shared \
    perl-Time-HiRes \
    perl-Time-Local
```

2. Installing the Postgres-XL core

```
gunzip postgres-xl-9.5r1.4.tar.gz
tar -xvf postgres-xl-9.5r1.4.tar
cd postgres-xl-9.5r1.4
./configure
gmake
gmake install
```

3. Installing the pgxc_ctl utility
If we want the configuration-file backup option to work, the pgxc_ctl source needs a small patch. In postgres-xl-9.5r1.4/contrib/pgxc_ctl/do_command.c, inside `static void init_all(void)`, a call to `doConfigBackup();` must be inserted as the second line of the function body, i.e. before the `init_gtm_master(true);` call on line 524. Save, then rebuild with make && make install. Without this patch, the `configBackup=y` feature does not work when running `pgxc_ctl init all`.
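Instead of editing in vi, the one-line insertion can be scripted with GNU sed. A sketch against a scratch copy: the three-line snippet below is hypothetical and stands in for the real do_command.c; only the `init_gtm_master(true);` anchor and the inserted `doConfigBackup();` call come from the description above:

```shell
# Insert doConfigBackup(); immediately before the init_gtm_master(true); call
scratch=$(mktemp)
cat > "$scratch" <<'EOF'
static void init_all(void)
{
    init_gtm_master(true);
}
EOF
# GNU sed: "i text" inserts the text before every matching line
sed -i '/init_gtm_master(true);/i doConfigBackup();' "$scratch"
grep -n 'doConfigBackup\|init_gtm_master' "$scratch"
```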
```
cd postgres-xl-9.5r1.4/contrib/pgxc_ctl
vi do_command.c
make
make install
```

4. User environment configuration
The environment variables must go into .bashrc here, because pgxc_ctl runs commands on the remote servers directly over passwordless ssh. In that mode the shell does not read .bash_profile, only .bashrc, so if PATH and the other variables are not in .bashrc, the later `init all` cluster initialization fails with "command not found" errors.
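One pitfall when appending these exports: inside double quotes the shell expands `$PGHOME` at the moment of the `echo` (when it is typically still empty), while single quotes write the literal text so it expands at login instead. A small scratch demonstration (`PGHOME_DEMO` and the temp file are hypothetical names for this demo only):

```shell
demo=$(mktemp)
PGHOME_DEMO=""   # simulates PGHOME being unset while running the setup as root
echo "export LD_LIBRARY_PATH=$PGHOME_DEMO/lib" >> "$demo"  # expands now: broken path
echo 'export LD_LIBRARY_PATH=$PGHOME/lib' >> "$demo"       # literal: expands at login
cat "$demo"
# prints:
# export LD_LIBRARY_PATH=/lib
# export LD_LIBRARY_PATH=$PGHOME/lib
```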
```
/usr/sbin/groupadd -g 2001 postgres
/usr/sbin/useradd -u 2001 -g postgres postgres
echo "postgres_passwd" | passwd --stdin postgres
echo "export PGHOME=/usr/local/pgsql" >> /home/postgres/.bashrc
echo 'export LD_LIBRARY_PATH=$PGHOME/lib' >> /home/postgres/.bashrc
echo 'export PG_CONFIG=$PGHOME/bin/pg_config' >> /home/postgres/.bashrc
echo 'export pg_config=$PGHOME/bin/pg_config' >> /home/postgres/.bashrc
echo 'export PATH=$PATH:$PGHOME/bin' >> /home/postgres/.bashrc
```

5. Linking the library files
```
source /home/postgres/.bashrc
echo "$PGHOME/lib" >> /etc/ld.so.conf
/sbin/ldconfig
cat /etc/ld.so.conf
```

III. Postgres-XL initialization
1. Passwordless ssh configuration
Set up mutual ssh trust for the postgres user on every node.
There are many ways to configure ssh trust; see for example "Configuring SSH mutual trust":
http://blog.163.com/cao_jfeng...
I used the Oracle script.
Test that ssh works between all the nodes.
2. Creating the PGDATA directories
On the three datanode servers:

```
mkdir -p /pgdata/datan01
mkdir -p /pgdata/datan02
mkdir -p /pgdata/datan03
mkdir -p /pgdata/coord01
mkdir -p /pgdata/coord02
mkdir -p /pgdata/coord03
mkdir -p /pgdata/gtm_pxy01
mkdir -p /pgdata/gtm_pxy02
mkdir -p /pgdata/gtm_pxy03
```

On the GTM master and GTM slave:

```
mkdir -p /pgdata/gtm
```

On all nodes:

```
chown -R postgres:postgres /pgdata
```

3. Special preparation for the datanodes
Use openfiler to publish three 3 GB shared disks to datan01, datan02 and datan03.
Attach them on datan01, datan02 and datan03.
Then, on one of the datanodes, partition each shared disk (a single partition is enough) and format it:

```
fdisk /dev/sda
fdisk /dev/sdb
fdisk /dev/sdc
partprobe /dev/sda
partprobe /dev/sdb
partprobe /dev/sdc
mkfs.xfs /dev/sda1
mkfs.xfs /dev/sdb1
mkfs.xfs /dev/sdc1
# Note: tune2fs only applies to ext2/3/4 filesystems; on these xfs
# partitions the following three commands fail and can simply be skipped.
tune2fs -c 0 -i 0 /dev/sda1
tune2fs -c 0 -i 0 /dev/sdb1
tune2fs -c 0 -i 0 /dev/sdc1
```

Reboot all the datanodes so they re-detect the disks.
Mount test:

```
mount /dev/sda1 /pgdata/datan01/
mount /dev/sdb1 /pgdata/datan02/
mount /dev/sdc1 /pgdata/datan03/
umount /dev/sda1
umount /dev/sdb1
umount /dev/sdc1
```

On each datanode, add a temporary IP alias and mount its filesystem (datan03 is shown here):

```
cd /etc/sysconfig/network-scripts/
cp -rp ifcfg-eth0 ifcfg-eth0:1
vi ifcfg-eth0:1
systemctl restart network
mount /dev/sdc1 /pgdata/datan03/
```
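As a sketch of what the edited alias file ifcfg-eth0:1 might contain on datan03: the IPADDR below is datan03's service IP from the plan, while the other values are assumptions following the usual RHEL ifcfg layout, not taken from the article:

```
DEVICE=eth0:1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.122.183
NETMASK=255.255.255.0
```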
On all nodes:

```
chown -R postgres:postgres /pgdata
```

4. Writing the pgxc_ctl configuration file
Log in to gtm_mast:

```
su - postgres
pgxc_ctl
PGXC prepare
PGXC q
cd pgxc_ctl
vi pgxc_ctl.conf
```

The finished configuration file looks like this:
```
#!/usr/bin/env bash
#
# Postgres-XC Configuration file for pgxc_ctl utility.
#
# Configuration file can be specified as -c option from pgxc_ctl command. Default is
# $PGXC_CTL_HOME/pgxc_ctl.org.
#
# This is bash script so you can make any addition for your convenience to configure
# your Postgres-XC cluster.
#
# Please understand that pgxc_ctl provides only a subset of configuration which pgxc_ctl
# provide. Here's several several assumptions/restrictions pgxc_ctl depends on.
#
# 1) All the resources of pgxc nodes has to be owned by the same user. Same user means
#    user with the same user name. User ID may be different from server to server.
#    This must be specified as a variable $pgxcOwner.
#
# 2) All the servers must be reacheable via ssh without password. It is highly recommended
#    to setup key-based authentication among all the servers.
#
# 3) All the databases in coordinator/datanode has at least one same superuser. Pgxc_ctl
#    uses this user to connect to coordinators and datanodes. Again, no password should
#    be used to connect. You have many options to do this, pg_hba.conf, pg_ident.conf and
#    others. Pgxc_ctl provides a way to configure pg_hba.conf but not pg_ident.conf. This
#    will be implemented in the later releases.
#
# 4) Gtm master and slave can have different port to listen, while coordinator and datanode
#    slave should be assigned the same port number as master.
#
# 5) Port nuber of a coordinator slave must be the same as its master.
#
# 6) Master and slave are connected using synchronous replication. Asynchronous replication
#    have slight (almost none) chance to bring total cluster into inconsistent state.
#    This chance is very low and may be negligible. Support of asynchronous replication
#    may be supported in the later release.
#
# 7) Each coordinator and datanode can have only one slave each. Cascaded replication and
#    multiple slave are not supported in the current pgxc_ctl.
#
# 8) Killing nodes may end up with IPC resource leak, such as semafor and shared memory.
#    Only listening port (socket) will be cleaned with clean command.
#
# 9) Backup and restore are not supported in pgxc_ctl at present. This is a big task and
#    may need considerable resource.
#
#========================================================================================
#
# pgxcInstallDir variable is needed if you invoke "deploy" command from pgxc_ctl utility.
# If don't you don't need this variable.
pgxcInstallDir=/usr/local/pgsql

#---- OVERALL -----------------------------------------------------------------------------
#
pgxcOwner=postgres      # owner of the Postgres-XC databaseo cluster. Here, we use this
                        # both as linus user and database user. This must be
                        # the super user of each coordinator and datanode.
pgxcUser=$pgxcOwner     # OS user of Postgres-XC owner

tmpDir=/tmp             # temporary dir used in XC servers
localTmpDir=$tmpDir     # temporary dir used here locally

configBackup=y          # If you want config file backup, specify y to this value.
configBackupHost=192.168.122.189        # host to backup config file
configBackupDir=/home/postgres/pgxc_ctl # Backup directory
configBackupFile=pgxc_ctl.conf          # Backup file name --> Need to synchronize when original changed.

#---- GTM ------------------------------------------------------------------------------------

# GTM is mandatory. You must have at least (and only) one GTM master in your Postgres-XC cluster.
# If GTM crashes and you need to reconfigure it, you can do it by pgxc_update_gtm command to update
# GTM master with others. Of course, we provide pgxc_remove_gtm command to remove it. This command
# will not stop the current GTM. It is up to the operator.

#---- GTM Master -----------------------------------------------

#---- Overall ----
gtmName=gtm_mast
gtmMasterServer=192.168.122.179
gtmMasterPort=20001
gtmMasterDir=/pgdata/gtm/data

#---- Configuration ---
gtmExtraConfig=none                 # Will be added gtm.conf for both Master and Slave (done at initilization only)
gtmMasterSpecificExtraConfig=none   # Will be added to Master's gtm.conf (done at initialization only)

#---- GTM Slave -----------------------------------------------

# Because GTM is a key component to maintain database consistency, you may want to configure GTM slave
# for backup.

#---- Overall ------
gtmSlave=y                      # Specify y if you configure GTM Slave. Otherwise, GTM slave will not be configured and
                                # all the following variables will be reset.
gtmSlaveName=gtm_slav
gtmSlaveServer=192.168.122.189  # value none means GTM slave is not available. Give none if you don't configure GTM Slave.
gtmSlavePort=20001              # Not used if you don't configure GTM slave.
gtmSlaveDir=/pgdata/gtm/data    # Not used if you don't configure GTM slave.
# Please note that when you have GTM failover, then there will be no slave available until you configure the slave
# again. (pgxc_add_gtm_slave function will handle it)

#---- Configuration ----
gtmSlaveSpecificExtraConfig=none    # Will be added to Slave's gtm.conf (done at initialization only)

#---- GTM Proxy -------------------------------------------------------------------------------------------------------
# GTM proxy will be selected based upon which server each component runs on.
# When fails over to the slave, the slave inherits its master's gtm proxy. It should be
# reconfigured based upon the new location.
#
# To do so, slave should be restarted. So pg_ctl promote -> (edit postgresql.conf and recovery.conf) -> pg_ctl restart
#
# You don't have to configure GTM Proxy if you dont' configure GTM slave or you are happy if every component connects
# to GTM Master directly. If you configure GTL slave, you must configure GTM proxy too.

#---- Shortcuts ------
gtmProxyDir=/pgdata/gtm_pxy

#---- Overall -------
gtmProxy=y              # Specify y if you conifugre at least one GTM proxy. You may not configure gtm proxies
                        # only when you dont' configure GTM slaves.
                        # If you specify this value not to y, the following parameters will be set to default empty values.
                        # If we find there're no valid Proxy server names (means, every servers are specified
                        # as none), then gtmProxy value will be set to "n" and all the entries will be set to
                        # empty values.
gtmProxyNames=(gtm_pxy01 gtm_pxy02 gtm_pxy03)       # No used if it is not configured
gtmProxyServers=(192.168.122.171 192.168.122.172 192.168.122.173)   # Specify none if you dont' configure it.
gtmProxyPorts=(20001 20001 20001)                   # Not used if it is not configured.
gtmProxyDirs=($gtmProxyDir'01/data' $gtmProxyDir'02/data' $gtmProxyDir'03/data')    # Not used if it is not configured.

#---- Configuration ----
gtmPxyExtraConfig=none      # Extra configuration parameter for gtm_proxy. Coordinator section has an example.
gtmPxySpecificExtraConfig=(none none none)

#---- Coordinators ----------------------------------------------------------------------------------------------------

#---- shortcuts ----------
coordMasterDir=/pgdata/coord
##coordSlaveDir=$HOME/pgxc/nodes/coord_slave
##coordArchLogDir=$HOME/pgxc/nodes/coord_archlog

#---- Overall ------------
coordNames=(coord01 coord02 coord03)    # Master and slave use the same name
coordPorts=(15432 15432 15432)          # Master ports
poolerPorts=(40101 40102 40103)         # Master pooler ports
coordPgHbaEntries=(0.0.0.0/0)           # Assumes that all the coordinator (master/slave) accepts
                                        # the same connection
                                        # This entry allows only $pgxcOwner to connect.
                                        # If you'd like to setup another connection, you should
                                        # supply these entries through files specified below.
# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust". If you don't want
# such setups, specify the value () to this variable and suplly what you want using coordExtraPgHba
# and/or coordSpecificExtraPgHba variables.
#coordPgHbaEntries=(::1/128)    # Same as above but for IPv6 addresses

#---- Master -------------
coordMasterServers=(192.168.122.171 192.168.122.172 192.168.122.173)    # none means this master is not available
coordMasterDirs=($coordMasterDir'01/data' $coordMasterDir'02/data' $coordMasterDir'03/data')
coordMaxWALsernder=0    # max_wal_senders: needed to configure slave. If zero value is specified,
                        # it is expected to supply this parameter explicitly by external files
                        # specified in the following. If you don't configure slaves, leave this value to zero.
coordMaxWALSenders=(0 0 0)      # max_wal_senders configuration for each coordinator.

#---- Slave -------------
coordSlave=n            # Specify y if you configure at least one coordiantor slave. Otherwise, the following
                        # configuration parameters will be set to empty values.
                        # If no effective server names are found (that is, every servers are specified as none),
                        # then coordSlave value will be set to n and all the following values will be set to
                        # empty values.
##coordSlaveSync=y      # Specify to connect with synchronized mode.
##coordSlaveServers=(node07 node08 node09 node06)           # none means this slave is not available
##coordSlavePorts=(20004 20005 20004 20005)                 # Master ports
##coordSlavePoolerPorts=(20010 20011 20010 20011)           # Master pooler ports
##coordSlaveDirs=($coordSlaveDir $coordSlaveDir $coordSlaveDir $coordSlaveDir)
##coordArchLogDirs=($coordArchLogDir $coordArchLogDir $coordArchLogDir $coordArchLogDir)

#---- Configuration files---
# Need these when you'd like setup specific non-default configuration
# These files will go to corresponding files for the master.
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries
# Or you may supply these files manually.
coordExtraConfig=coordExtraConfig   # Extra configuration file for coordinators.
                                    # This file will be added to all the coordinators'
                                    # postgresql.conf
# Pleae note that the following sets up minimum parameters which you may want to change.
# You can put your postgresql.conf lines here.
cat > $coordExtraConfig <<EOF
#================================================
# Added to all the coordinator postgresql.conf
# Original: $coordExtraConfig
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
EOF

# Additional Configuration file for specific coordinator master.
# You can define each setting by similar means as above.
coordSpecificExtraConfig=(none none none)
coordExtraPgHba=none    # Extra entry for pg_hba.conf. This file will be added to all the coordinators' pg_hba.conf
coordSpecificExtraPgHba=(none none none)

#----- Additional Slaves -----
#
# Please note that this section is just a suggestion how we extend the configuration for
# multiple and cascaded replication. They're not used in the current version.
#
##coordAdditionalSlaves=n          # Additional slave can be specified as follows: where you
##coordAdditionalSlaveSet=(cad1)   # Each specifies set of slaves. This case, two set of slaves are
                                   # configured
##cad1_Sync=n           # All the slaves at "cad1" are connected with asynchronous mode.
                        # If not, specify "y"
                        # The following lines specifies detailed configuration for each
                        # slave tag, cad1. You can define cad2 similarly.
##cad1_Servers=(node08 node09 node06 node07)    # Hosts
##cad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
##cad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
##cad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
##cad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)

#---- Datanodes -------------------------------------------------------------------------------------------------------

#---- Shortcuts --------------
datanodeMasterDir=/pgdata/datan
##datanodeSlaveDir=$HOME/pgxc/nodes/dn_slave
##datanodeArchLogDir=$HOME/pgxc/nodes/datanode_archlog

#---- Overall ---------------
primaryDatanode=none            # Primary Node.
# At present, xc has a priblem to issue ALTER NODE against the primay node. Until it is fixed, the test will be done
# without this feature.
##primaryDatanode=datanode1     # Primary Node.
datanodeNames=(datan01 datan02 datan03)
datanodePorts=(25431 25432 25433)           # Master ports
datanodePoolerPorts=(40401 40402 40403)     # Master pooler ports
datanodePgHbaEntries=(0.0.0.0/0)    # Assumes that all the coordinator (master/slave) accepts
                                    # the same connection
                                    # This list sets up pg_hba.conf for $pgxcOwner user.
                                    # If you'd like to setup other entries, supply them
                                    # through extra configuration files specified below.
# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust". If you don't want
# such setups, specify the value () to this variable and suplly what you want using datanodeExtraPgHba
# and/or datanodeSpecificExtraPgHba variables.
#datanodePgHbaEntries=(::1/128) # Same as above but for IPv6 addresses

#---- Master ----------------
datanodeMasterServers=(192.168.122.181 192.168.122.182 192.168.122.183) # none means this master is not available.
                        # This means that there should be the master but is down.
                        # The cluster is not operational until the master is
                        # recovered and ready to run.
datanodeMasterDirs=($datanodeMasterDir'01/data' $datanodeMasterDir'02/data' $datanodeMasterDir'03/data')
datanodeMaxWalSender=0      # max_wal_senders: needed to configure slave. If zero value is
                            # specified, it is expected this parameter is explicitly supplied
                            # by external configuration files.
                            # If you don't configure slaves, leave this value zero.
datanodeMaxWALSenders=(0 0 0)   # max_wal_senders configuration for each datanode

#---- Slave -----------------
datanodeSlave=n         # Specify y if you configure at least one coordiantor slave. Otherwise, the following
                        # configuration parameters will be set to empty values.
                        # If no effective server names are found (that is, every servers are specified as none),
                        # then datanodeSlave value will be set to n and all the following values will be set to
                        # empty values.
##datanodeSlaveServers=(node07 node08 node09 node06)        # value none means this slave is not available
##datanodeSlavePorts=(20008 20009 20008 20009)              # value none means this slave is not available
##datanodeSlavePoolerPorts=(20012 20013 20012 20013)        # value none means this slave is not available
##datanodeSlaveSync=y       # If datanode slave is connected in synchronized mode
##datanodeSlaveDirs=($datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir)
##datanodeArchLogDirs=( $datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir )

# ---- Configuration files ---
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries here.
# These files will go to corresponding files for the master.
# Or you may supply these files manually.
datanodeExtraConfig=none    # Extra configuration file for datanodes. This file will be added to all the
                            # datanodes' postgresql.conf
datanodeSpecificExtraConfig=(none none none)
datanodeExtraPgHba=none     # Extra entry for pg_hba.conf. This file will be added to all the datanodes' postgresql.conf
datanodeSpecificExtraPgHba=(none none none)

#----- Additional Slaves -----
datanodeAdditionalSlaves=n  # Additional slave can be specified as follows: where you
# datanodeAdditionalSlaveSet=(dad1 dad2)    # Each specifies set of slaves. This case, two set of slaves are
                        # configured
# dad1_Sync=n           # All the slaves at "cad1" are connected with asynchronous mode.
                        # If not, specify "y"
                        # The following lines specifies detailed configuration for each
                        # slave tag, cad1. You can define cad2 similarly.
# dad1_Servers=(node08 node09 node06 node07) # Hosts
# dad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
# dad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
# dad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
# dad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)

#---- WAL archives -------------------------------------------------------------------------------------------------
walArchive=n    # If you'd like to configure WAL archive, edit this section.
                # Pgxc_ctl assumes that if you configure WAL archive, you configure it
                # for all the coordinators and datanodes.
                # Default is "no". Please specify "y" here to turn it on.
#
#       End of Configuration Section
#
#==========================================================================================================================

#========================================================================================================================
# The following is for extension. Just demonstrate how to write such extension. There's no code
# which takes care of them so please ignore the following lines. They are simply ignored by pgxc_ctl.
# No side effects.
#=============<< Beginning of future extension demonistration >> ========================================================
# You can setup more than one backup set for various purposes, such as disaster recovery.
##walArchiveSet=(war1 war2)
##war1_source=(master)      # you can specify master, slave or ano other additional slaves as a source of WAL archive.
                            # Default is the master
##wal1_source=(slave)
##wal1_source=(additiona_coordinator_slave_set additional_datanode_slave_set)
##war1_host=node10          # All the nodes are backed up at the same host for a given archive set
##war1_backupdir=$HOME/pgxc/backup_war1
##wal2_source=(master)
##war2_host=node11
##war2_backupdir=$HOME/pgxc/backup_war2
#=============<< End of future extension demonistration >> ========================================================
```

5. Initializing the cluster with pgxc_ctl
Log in to gtm_mast.
As the postgres user, initialize the cluster with the `pgxc_ctl init all` command.
Once initialization completes, you can check the status of every service with `pgxc_ctl monitor all`:

```
# pgxc_ctl monitor all
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************
Current directory: /home/postgres/pgxc_ctl
Running: gtm master
Running: gtm slave
Running: gtm proxy gtm_pxy01
Running: gtm proxy gtm_pxy02
Running: gtm proxy gtm_pxy03
Running: coordinator master coord01
Running: coordinator master coord02
Running: coordinator master coord03
Running: datanode master datan01
Running: datanode master datan02
Running: datanode master datan03
```

6. Changing the datanodes' GTM address
So that a datanode can still register with a gtm proxy after being moved to another server later, set the gtm address in each datanode's configuration file to the datanode's own service IP. Only datan01 is shown here:

```
# On gtm_mast: stop the datan01 datanode service
pgxc_ctl stop datanode datan01
# On datan01: edit the configuration file and change the gtm proxy address
su - postgres
cd /pgdata/datan01/data/
vi postgresql.conf
tail -n 3 postgresql.conf
# The result after editing; 192.168.122.181 is datan01's service IP, as seen in /etc/hosts
gtm_host = '192.168.122.181'
gtm_port = 20001
# End of Addition
# On gtm_mast: start datan01 again
pgxc_ctl start datanode datan01
```

Summary