On 2/22/20 11:03 AM, Edson Richter wrote:
------------------------------------------------------------------------
Streaming replication, initiated via pg_basebackup.
Settings on master server:
# - Sending Server(s) -
# Set these on the master and on any standby that will send replication data.
max_wal_senders = 2              # max number of walsender processes (change requires restart)
wal_keep_segments = 25           # in logfile segments, 16MB each; 0 disables
#wal_sender_timeout = 60s        # in milliseconds; 0 disables
max_replication_slots = 2        # max number of replication slots (change requires restart)
#track_commit_timestamp = off    # collect timestamp of transaction commit (change requires restart)

# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = ''  # standby servers that provide sync rep; number of sync standbys and comma-separated list of application_name from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0    # number of xacts by which cleanup is delayed
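For what it's worth, whether those senders and slots are actually in use can be checked on the master; a minimal sketch using the 9.x view columns (sent_location and friends were renamed in PostgreSQL 10):

    -- on the master: active WAL sender processes
    SELECT pid, application_name, state, sent_location
    FROM pg_stat_replication;

    -- on the master: replication slots and how much WAL they pin
    SELECT slot_name, active, restart_lsn
    FROM pg_replication_slots;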
Settings on slave server:
# - Standby Servers -
# These settings are ignored on a master server.
hot_standby = on                      # "on" allows queries during recovery (change requires restart)
max_standby_archive_delay = -1        # max delay before canceling queries when reading WAL from archive; -1 allows indefinite delay
max_standby_streaming_delay = -1      # max delay before canceling queries when reading streaming WAL; -1 allows indefinite delay
wal_receiver_status_interval = 10s    # send replies at least this often; 0 disables
hot_standby_feedback = on             # send info from standby to prevent query conflicts
wal_receiver_timeout = 0              # time that receiver waits for communication from master in milliseconds; 0 disables
wal_retrieve_retry_interval = 5s      # time to wait before retrying to retrieve WAL after a failed attempt
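For reference, the effect of those receiver settings can be watched from the standby itself; a sketch using the 9.x function names (these were renamed in PostgreSQL 10):

    -- on the standby: received vs. replayed WAL, and current replay delay
    SELECT pg_last_xlog_receive_location() AS received,
           pg_last_xlog_replay_location()  AS replayed,
           now() - pg_last_xact_replay_timestamp() AS replay_delay;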
What are the settings for:
archive_mode
archive_command
on the standby?
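Both are easy to read back from a psql session on the standby:

    -- on the standby
    SHOW archive_mode;
    SHOW archive_command;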
Are the files in pg_xlog on the standby mostly from well in the past?
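One way to check without leaving psql is to list the directory server-side; a sketch assuming a superuser session on a pre-10 server, where the directory is still named pg_xlog:

    -- newest segment files first; mostly old timestamps suggest stalled streaming
    SELECT name,
           (pg_stat_file('pg_xlog/' || name)).modification AS modified
    FROM pg_ls_dir('pg_xlog') AS t(name)
    ORDER BY modified DESC;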
Regards,
Edson
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx