Hi Ashok,
It is hard to identify the exact issue without looking at the environment, but here are a few checks you can do:
1. Instead of rsync, use pg_basebackup with -R, which writes recovery.conf including primary_slot_name (when a slot is passed with -S); see the sketch after this list.
2. Make sure listen_addresses is set properly and that the connection between the primary and standby nodes is OK.
3. Run some transactions, execute the query below, and watch for changes in GB behind:
postgres=# SELECT redo_lsn, slot_name, restart_lsn, round((redo_lsn - restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind FROM pg_control_checkpoint(), pg_replication_slots;
4. If the standby is able to consume WAL, this number will start decreasing.
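A minimal sketch of point 1, run on the standby against an empty data directory; the slot name comes from the steps below, while the primary host and the replication user are placeholders, not from the original thread:

[postgres@replica ~]$ pg_basebackup -h <primary_host> -U replicator -D $PGDATA \
    -R -S amos_prod_replica1 -X stream -P

With -R, pg_basebackup writes recovery.conf containing standby_mode, primary_conninfo and, because of -S, primary_slot_name, so the slot should show as active once the standby starts.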
If the problem still persists, check the logs on both servers and see what they suggest.
Have a great day.
Sent from my iPhone
On 24-Apr-2020, at 9:30 AM, Ashok kumar Mani <amani@xxxxxxxxxxxxx> wrote:
Hi Pathria,
Step 1:- Enable the below parameters in postgresql.conf on the primary for hot standby
wal_level = replica # default in PostgreSQL 10; 'hot_standby' is a legacy alias
wal_log_hints = on
synchronous_commit = on
archive_mode = on
archive_timeout = 30 # seconds
archive_command = 'scp %p 172.30.8.199:/data/ARCHIVELOG/%f'
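To confirm archiving actually works with these settings, you can force a segment switch and check pg_stat_archiver; a quick check, not from the original steps:

postgres=# SELECT pg_switch_wal();
postgres=# SELECT archived_count, last_archived_wal, last_failed_wal FROM pg_stat_archiver;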
Step 2:- Set the replication and logging parameters
#Replication
max_wal_senders = 10
wal_keep_segments = 3200 # in logfile segments, 16MB each; 0 disables
hot_standby = on
hot_standby_feedback = on # send info from standby to prevent query conflicts
#Log file configurations
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 0
log_rotation_size = 100MB
log_timezone = 'GMT'
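wal_level, archive_mode and max_wal_senders only take effect after a restart of the primary; a sketch to apply and verify, assuming pg_ctl is on the PATH:

[postgres@de284194 ~]$ pg_ctl -D $PGDATA restart
[postgres@de284194 ~]$ psql -c "SELECT name, setting FROM pg_settings WHERE name IN ('wal_level','archive_mode','max_wal_senders','hot_standby');"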
Step 3:- Back up the master data directory and transfer it to the replica server using rsync
[postgres@de284194 ~]$ psql -c "SELECT pg_start_backup('label', true)"
pg_start_backup
-----------------
10/DA000028
(1 row)
[postgres@de284194 pg_data]$ rsync -cva --inplace --exclude=*pg_xlog* \
> ${PGDATA}/ 172.29.7.194:$PGDATA
(Note: in PostgreSQL 10 the WAL directory is pg_wal, not pg_xlog, so this exclude pattern matches nothing, as the pg_wal files in the output below show.)
sending incremental file list
./
pg_wal/archive_status/0000000100000010000000CC.done
pg_wal/archive_status/0000000100000010000000CD.00000028.backup.done
pg_wal/archive_status/0000000100000010000000CD.done
pg_wal/archive_status/0000000100000010000000D5.done
pg_xact/
pg_xact/0000
sent 1,053,663,924 bytes received 21,480 bytes 15,160,941.06 bytes/sec
total size is 1,053,341,161 speedup is 1.00
[postgres@de284194 pg_data]$
Step 4:- Stop the backup
[postgres@de284194 pg_data]$ psql -c "select pg_stop_backup(), current_timestamp"
NOTICE: pg_stop_backup complete, all required WAL segments have been archived
Step 5:- Create a physical replication slot and recovery.conf
postgres=# SELECT pg_create_physical_replication_slot('amos_prod_replica1');
pg_create_physical_replication_slot
-------------------------------------
(amos_prod_replica1,)
(1 row)
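The recovery.conf itself is not shown above; a minimal sketch for the replica's $PGDATA, where the primary host, port and user are placeholders (only the slot name comes from the thread):

standby_mode = 'on'
primary_conninfo = 'host=<primary_host> port=5432 user=replicator'
primary_slot_name = 'amos_prod_replica1'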
Step 6:- Start the database server on the replica
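For example, assuming a pg_ctl setup; the queries on the primary should then show the slot as active and the standby connected:

[postgres@replica ~]$ pg_ctl -D $PGDATA start
postgres=# SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
postgres=# SELECT client_addr, state, sent_lsn, replay_lsn FROM pg_stat_replication;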
Regards,
Ashok
From: Prince Pathria <prince.pathria@xxxxxxxxxxx>
Sent: Thursday, April 23, 2020 1:00 PM
To: Ashok kumar Mani <amani@xxxxxxxxxxxxx>
Cc: pgsql-admin@xxxxxxxxxxxxxx; pgsql-admin@xxxxxxxxxxxxxxxxxxxx
Subject: Re: Help needed for replication issue
Hey Ashok,
Please share the steps you've followed till now so we can help further.
Thanks!
Happy to help :)
Prince Pathria
Systems Engineer | Certified Kubernetes Administrator | AWS Solutions Architect
Evive
+91 9478670472
goevive.com
On Thu, Apr 23, 2020 at 11:44 AM Ashok kumar Mani <amani@xxxxxxxxxxxxx> wrote:
Dear All,
I have created a PostgreSQL 10.6 hot standby, but the physical replication slot I created stays inactive.
I recreated the standby, but I still cannot make the slot active, and I cannot see the LSN values on either side.
Regards,
Ashok
Go for pg_basebackup instead of rsync and you'll be happy with the results.
Happy to help :)
Prince Pathria
Systems Engineer | Certified Kubernetes Administrator | AWS Solutions Architect
Evive
+91 9478670472
goevive.com
On Fri, Apr 24, 2020 at 7:12 PM soumitra bhandary <soumitra.bhandary@xxxxxxxxxxx> wrote: