Hello.

At Tue, 18 Aug 2020 09:27:34 -0700, Satyam Shekhar <satyamshekhar@xxxxxxxxx> wrote in
> Hello,
>
> I wish to use logical replication in Postgres to capture transactions as
> CDC and forward them to a custom sink.
>
> To understand the overhead of the logical replication workflow I created a
> toy subscriber using the V3PGReplicationStream that acknowledges LSNs after
> every 16k reads by calling setAppliedLsn, setFlushedLsn, and
> forceUpdateState. The toy subscriber is set up as a subscriber for a master
> Postgres instance that publishes changes using a Publication. I then run a
> write-heavy workload on this setup that generates transaction logs at
> approximately 235MBps. Postgres runs on a beefy machine with a 10+GBps
> network link between Postgres and the toy subscriber.
>
> My expectation with this setup was that the replication lag on the master
> would be minimal, as the subscriber acks the LSN almost immediately.
> However, I observe the replication lag increasing continuously for the
> duration of the test. Statistics in pg_replication_slots show that
> restart_lsn lags significantly behind confirmed_flush_lsn. A cursory
> reading on restart_lsn suggests that an increasing gap between restart_lsn
> and confirmed_flush_lsn means that Postgres needs to reclaim disk space and
> advance restart_lsn to catch up to confirmed_flush_lsn.
>
> With that context, I am looking for answers to two questions:
>
> 1. What work needs to happen in the database to advance restart_lsn to
>    confirmed_flush_lsn?
> 2. What is the recommendation on tuning the database to improve the
>    replication lag in such scenarios?

To be clear, the replication delay or lag here is pg_current_wal_lsn() -
confirmed_flush_lsn. restart_lsn has nothing to do with replication lag; it
is the minimum LSN the server thinks it needs in order to restart
replication on the slot.

For how long have you observed the gap increasing? If no long transactions
are running, restart_lsn is roughly the LSN that was current 15 to 30
seconds ago. That is, the gap between restart_lsn and confirmed_flush_lsn
would be at most the amount of WAL emitted in the last 30 seconds. In this
case that is estimated to be 235MB * 30 = about 7GB, or about 440 16MB
segments, even when the system is working perfectly. In any case, the
publisher would need to preserve up to about 68GB of WAL (with
checkpoint_timeout set to 5 minutes), so the 7GB required by restart_lsn
doesn't matter.

In short, I don't think you need to do anything about that "lag".

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
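
For reference, a minimal sketch of a query for watching both quantities on
the publisher side. It assumes PostgreSQL 10 or later (before version 10
the functions were named pg_current_xlog_location() and
pg_xlog_location_diff()); the column aliases are only illustrative names.

    -- lag_bytes:        the replication lag discussed above
    -- restart_gap_bytes: how far restart_lsn trails confirmed_flush_lsn
    SELECT slot_name,
           pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes,
           pg_wal_lsn_diff(confirmed_flush_lsn, restart_lsn)          AS restart_gap_bytes
      FROM pg_replication_slots
     WHERE slot_type = 'logical';

When no long transactions are open, the second number would be expected to
stay within roughly the last 15 to 30 seconds' worth of WAL, as described
above.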