PostgreSQL high availability with pgpool
     
Add Date: 2018-11-21
         
         
         
This article uses pgpool-II to provide high availability for PostgreSQL, with streaming-replication-based automatic failover between two nodes.

1. Single pgpool node

a. Environment:

pgpool: 192.168.238.129
data1: 192.168.238.130
data2: 192.168.238.131

b. Topology diagram (the original illustration is not reproduced here)

c. Configure SSH trust between the nodes:

ssh-copy-id ha@node1
ssh-copy-id ha@node2
 
d. Configure the database nodes (streaming replication between node1 and node2):
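The article does not show the streaming-replication setup itself. A minimal sketch follows, assuming PostgreSQL 9.4, node1 as the initial primary, the data directory /home/ha/pgdb/data used elsewhere in this article, and a replication user named rep (the wal sender output further down shows such a user); names, passwords and paths are assumptions to adapt to your environment.

# On node1 (primary): create a replication role, enable WAL shipping, allow replication connections
/home/ha/pgdb/bin/psql -d postgres -c "CREATE ROLE rep REPLICATION LOGIN PASSWORD 'rep';"
cat >> /home/ha/pgdb/data/postgresql.conf <<'EOF'
wal_level = hot_standby
max_wal_senders = 5
hot_standby = on
EOF
echo "host replication rep 192.168.238.0/24 md5" >> /home/ha/pgdb/data/pg_hba.conf
/home/ha/pgdb/bin/pg_ctl -D /home/ha/pgdb/data restart

# On node2 (standby): clone the primary and point recovery.conf at it
/home/ha/pgdb/bin/pg_basebackup -h node1 -U rep -D /home/ha/pgdb/data -X stream -P
cat > /home/ha/pgdb/data/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=node1 port=5432 user=rep'
EOF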

e. pgpool configuration:

listen_addresses = '*'
backend_hostname0 = 'node1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/home/ha/pgdb/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'

backend_hostname1 = 'node2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/home/ha/pgdb/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'

enable_pool_hba = on
pool_passwd = 'pool_passwd'

pid_file_name = '/home/ha/pgpool/pgpool.pid'
logdir = '/home/ha/pgpool/log'

health_check_period = 1
health_check_user = 'ha'
health_check_password = 'ha'

failover_command = '/home/ha/pgdb/fail.sh %d %H'

recovery_user = 'ha'
recovery_password = 'ha'
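Because enable_pool_hba = on, pgpool authenticates clients itself using pool_hba.conf (same format as pg_hba.conf) together with pool_passwd. A minimal sketch, with the subnet as an assumption:

# pool_hba.conf on the pgpool node
# TYPE  DATABASE  USER  CIDR-ADDRESS       METHOD
host    all       all   192.168.238.0/24   md5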
 
f. fail.sh:

#!/bin/bash
# Failover command for streaming replication.
# This script assumes that DB node 0 is the primary and node 1 is the standby.
#
# If the standby goes down, do nothing. If the primary goes down, promote
# the standby so that it takes over as the new primary.
#
# Arguments: $1: failed node id, $2: new master hostname
# (passed by pgpool via failover_command = '... %d %H')

failed_node=$1
new_master=$2
trigger_command="/home/ha/pgdb/bin/pg_ctl -D /home/ha/pgdb/data promote -m fast"

# Do nothing if the standby goes down.
if [ $failed_node = 1 ]; then
        exit 0
fi

# Promote the standby on the new master host.
/usr/bin/ssh -T $new_master $trigger_command

exit 0
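pgpool runs fail.sh as the ha user over SSH, so the script has to be executable and the ha user must be able to reach the backends without a password (step c). A quick manual check, with hostnames taken from the configuration above (only do this on a test system, since the second call really promotes the standby):

chmod +x /home/ha/pgdb/fail.sh
/home/ha/pgdb/fail.sh 1 node1    # standby (node id 1) failed: the script exits without doing anything
/home/ha/pgdb/fail.sh 0 node2    # primary (node id 0) failed: promotes the standby on node2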
 

g. Create pool_passwd:

pg_md5 -m -p -u postgres pool_passwd
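pg_md5 with -m writes an md5 entry for the given user into the pool_passwd file next to pgpool.conf; the password entered must match the one stored in PostgreSQL. A quick sanity check (the path is an assumption and depends on where pgpool was installed):

cat /usr/local/etc/pool_passwd
# expected: one line of the form
# postgres:md5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx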
 
PS: Before PostgreSQL 9.1 a trigger_file had to be used; the promote -m fast approach is recommended here because:
"pg_ctl promote -m fast will skip the checkpoint at end of recovery so that we can achieve very fast failover when the apply delay is low. Write new WAL record XLOG_END_OF_RECOVERY to allow us to switch timeline correctly for downstream log readers. If we skip synchronous end of recovery checkpoint we request a normal spread checkpoint so that the window of re-recovery is low. Simon Riggs and Kyotaro Horiguchi, with input from Fujii Masao. Review by Heikki Linnakangas"

h. Test

On the pgpool node:

[ha@node0 pgdb]$ pgpool -n -d > /tmp/pgpool.log 2>&1 &
[1] 22928
[ha@node0 pgdb]$ psql -h 192.168.238.129 -p 9999 -d postgres -U ha
Password for user ha:
psql (9.4.5)
Type "help" for help.

postgres=# insert into test values (8);
INSERT 0 1
postgres=# select * from test;
 id
----
  1
  2
  3
  4
  6
  8
(6 rows)
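Before killing the primary it is worth confirming that the inserted row actually reached the standby; a quick check (a sketch), connecting directly to node2 (192.168.238.131) instead of going through pgpool:

[ha@node0 pgdb]$ psql -h 192.168.238.131 -p 5432 -d postgres -U ha -c "select * from test;"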
 
On node1:

[ha@localhost pgdb]$ ps -ef | grep post
root      2124     1  0 Dec26 ?      00:00:00 /usr/libexec/postfix/master
postfix   2147  2124  0 Dec26 ?      00:00:00 qmgr -l -t fifo -u
postfix  13295  2124  0 06:01 ?      00:00:00 pickup -l -t fifo -u
ha       13395     1  0 06:06 pts/3  00:00:00 /home/ha/pgdb/bin/postgres
ha       13397 13395  0 06:06 ?      00:00:00 postgres: checkpointer process
ha       13398 13395  0 06:06 ?      00:00:00 postgres: writer process
ha       13399 13395  0 06:06 ?      00:00:00 postgres: wal writer process
ha       13400 13395  0 06:06 ?      00:00:00 postgres: autovacuum launcher process
ha       13401 13395  0 06:06 ?      00:00:00 postgres: stats collector process
ha       13404 13395  0 06:07 ?      00:00:00 postgres: wal sender process rep 192.168.238.131(59415) streaming 0/21000060
ha       13418  4087  0 06:07 pts/3  00:00:00 grep post
[ha@localhost pgdb]$ kill -9 13395
 
Back on the pgpool node:

postgres=# insert into test values (8);
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# insert into test values (8);
INSERT 0 1
postgres=# insert into test values (8);
INSERT 0 1
postgres=# select * from test;
 id
----
  1
  2
  3
  4
  6
  8
  8
  8
(8 rows)
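After the failover you can also ask pgpool which backend it now treats as primary with the SHOW pool_nodes pseudo-query through the pgpool port (the exact columns vary between pgpool-II versions); node 0 should be reported as down and node 1 as the new primary:

[ha@node0 pgdb]$ psql -h 192.168.238.129 -p 9999 -d postgres -U ha -c "show pool_nodes;"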

2. Two pgpool nodes

a. Environment


pgpool1: 192.168.238.129
pgpool2: 192.168.238.131
node1: 192.168.238.130
node2: 192.168.238.131

b. Topology diagram (the original illustration is not reproduced here)

c. Configure SSH trust, same as above.
d. Configure the database nodes, same as above.
e. pgpool configuration

f. Configure pgpool on node1 (primary):

listen_addresses = '*'
backend_hostname0 = 'node1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/home/ha/pgdb/data/'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'node2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/home/ha/pgdb/data/'
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = on              # use pool_hba.conf for client authentication
pool_passwd = 'pool_passwd'
pid_file_name = '/home/ha/pgpool/pgpool.pid'
logdir = '/tmp/log'
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 2
sr_check_user = 'ha'
sr_check_password = 'ha'
health_check_period = 1
health_check_timeout = 20
health_check_user = 'ha'
health_check_password = 'ha'
failover_command = '/home/ha/pgpool/fail.sh %d %H'
recovery_user = 'ha'
recovery_password = 'ha'
use_watchdog = on
wd_hostname = 'node1'             # local host
delegate_IP = '192.168.238.151'
# check the NIC name with ifconfig and adjust eth1 below if needed
if_up_cmd = 'ifconfig eth1:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig eth1:0 down'
heartbeat_destination0 = 'node2'  # peer
heartbeat_device0 = 'eth0'
other_pgpool_hostname0 = 'node2'  # peer
other_pgpool_port0 = 9999
other_wd_port0 = 9000
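Note that if_up_cmd and if_down_cmd bring a network interface alias up and down, which an unprivileged ha user normally cannot do. The article does not address this; one common approach (an assumption, not part of the original setup) is to grant the command via sudo and prefix if_up_cmd / if_down_cmd accordingly:

# /etc/sudoers.d/pgpool (edit with visudo), hypothetical entry
ha ALL=(root) NOPASSWD: /sbin/ifconfig
# and then in pgpool.conf:
# if_up_cmd = 'sudo /sbin/ifconfig eth1:0 inet $_IP_$ netmask 255.255.255.0'
# if_down_cmd = 'sudo /sbin/ifconfig eth1:0 down'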

g. Configure pgpool on node2 (standby):
 
listen_addresses = '*'
backend_hostname0 = 'node1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/home/ha/pgdb/data/'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'node2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/home/ha/pgdb/data/'
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = on              # use pool_hba.conf for client authentication
pool_passwd = 'pool_passwd'
pid_file_name = '/home/ha/pgpool/pgpool.pid'
logdir = '/tmp/log'
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 2
sr_check_user = 'ha'
sr_check_password = 'ha'
health_check_period = 1
health_check_timeout = 20
health_check_user = 'ha'
health_check_password = 'ha'
failover_command = '/home/ha/pgpool/fail.sh %d %H'
recovery_user = 'ha'
recovery_password = 'ha'
use_watchdog = on
wd_hostname = 'node2'             # local host
delegate_IP = '192.168.238.151'
# check the NIC name with ifconfig and adjust eth1 below if needed
if_up_cmd = 'ifconfig eth1:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig eth1:0 down'
heartbeat_destination0 = 'node1'  # peer
heartbeat_device0 = 'eth1'
other_pgpool_hostname0 = 'node1'  # peer
other_pgpool_port0 = 9999
other_wd_port0 = 9000
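Once pgpool has been started on both nodes, the watchdog elects an active node and runs if_up_cmd there, so the delegate IP 192.168.238.151 should be visible as the eth1:0 alias on exactly one of the two nodes. A quick check (interface name as configured above):

ifconfig eth1:0              # on the active pgpool node, shows 192.168.238.151
ping -c 2 192.168.238.151    # the virtual IP should answer from anywhere on the subnet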

h. fail.sh:
 
#!/bin/bash
# Failover command for streaming replication.
# This script assumes that DB node 0 is the primary and node 1 is the standby.
#
# If the standby goes down, do nothing. If the primary goes down, promote
# the standby so that it takes over as the new primary.
#
# Arguments: $1: failed node id, $2: new master hostname
# (passed by pgpool via failover_command = '... %d %H')

failed_node=$1
new_master=$2
trigger_command="/home/ha/pgdb/bin/pg_ctl -D /home/ha/pgdb/data promote -m fast"

# Do nothing if the standby goes down.
if [ $failed_node = 1 ]; then
        exit 0
fi

# Promote the standby on the new master host.
/usr/bin/ssh -T $new_master $trigger_command

exit 0
 
i. Create pool_passwd, same as above:

pg_md5 -m -p -u postgres pool_passwd

j. Test
 
# Start the databases and both pgpool instances, then connect through the delegate IP
[ha@node0 pgdb]$ psql -h 192.168.238.151 -p 9999 -d postgres -U ha
Password for user ha:
psql (9.4.5)
Type "help" for help.

postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
postgres=#
-- kill the database process on node1
postgres=# insert into test values (9);
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
-- kill the pgpool process on node1
postgres=# insert into test values (9);
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# insert into test values (9);
INSERT 0 1
postgres=# insert into test values (9);
INSERT 0 1
postgres=#
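The connection through 192.168.238.151 keeps working after the node1 pgpool process is killed because the watchdog on node2 takes over the delegate IP. A sketch of how to confirm this on node2:

[ha@node2 ~]$ ifconfig eth1:0        # the 192.168.238.151 alias should now be up here
[ha@node2 ~]$ ps -ef | grep pgpool   # pgpool is still running on node2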
     
         
         
         