When logical replication is set up, WAL generation on any published table contributes to replication lag. Since you are running a long-running transaction on the master, keep in mind that logical decoding keeps only a limited amount of change data per transaction in memory (4MB); once a transaction exceeds that, its changes are spilled to disk. This is when you will start seeing:
1. Replication lag spiking
2. Storage being consumed
3. restart_lsn not moving forward
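A minimal monitoring sketch for that last symptom: the catalog view pg_replication_slots exposes both restart_lsn and confirmed_flush_lsn, so you can watch how far restart_lsn trails in bytes. Only standard catalog columns and functions are used here; nothing is specific to your setup.

SELECT slot_name,
       restart_lsn,
       confirmed_flush_lsn,
       pg_wal_lsn_diff(confirmed_flush_lsn, restart_lsn) AS restart_lag_bytes,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS flush_lag_bytes
FROM pg_replication_slots
WHERE slot_type = 'logical';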
You can confirm whether the heavy write you describe is spilling to disk by setting log_min_messages to debug2 and checking the server log for messages indicating that changes are being spilled to disk.
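A sketch of flipping that setting without a restart (debug2 is very verbose, so revert it once you have your answer):

-- Raise log verbosity so any spill-to-disk messages appear in the server log,
-- then reload the configuration.
ALTER SYSTEM SET log_min_messages = 'debug2';
SELECT pg_reload_conf();

-- When done, restore the default and reload again.
ALTER SYSTEM RESET log_min_messages;
SELECT pg_reload_conf();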
To answer your questions:
1. As long as the write-heavy query is running on the database, you will not see restart_lsn moving.
2. You will have to use smaller transactions (see the batching sketch after this list).
3. When the query is completed, you will see restart_lsn moving forward again.
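As a sketch of point 2, a large backfill-style write can be broken into many small transactions by committing after each batch inside a procedure (PostgreSQL 11+). The table and column names here (events, processed, id) and the batch size are made up purely for illustration.

-- Hypothetical batching sketch: each loop iteration commits, so logical
-- decoding sees many small transactions instead of one huge one.
CREATE PROCEDURE backfill_in_batches()
LANGUAGE plpgsql
AS $$
DECLARE
    batch_rows integer;
BEGIN
    LOOP
        UPDATE events
           SET processed = true
         WHERE id IN (SELECT id
                        FROM events
                       WHERE NOT processed
                       LIMIT 10000);
        GET DIAGNOSTICS batch_rows = ROW_COUNT;
        COMMIT;  -- ends the current transaction; the next batch starts a new one
        EXIT WHEN batch_rows = 0;
    END LOOP;
END;
$$;

CALL backfill_in_batches();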
On Tue, Aug 18, 2020 at 11:27 AM Satyam Shekhar <satyamshekhar@xxxxxxxxx> wrote:
Hello,

I wish to use logical replication in Postgres to capture transactions as CDC and forward them to a custom sink.

To understand the overhead of the logical replication workflow I created a toy subscriber using the V3PGReplicationStream that acknowledges LSNs after every 16k reads by calling setAppliedLsn, setFlushedLsn, and forceUpdateState. The toy subscriber is set up as a subscriber for a master Postgres instance that publishes changes using a Publication. I then run a write-heavy workload on this setup that generates transaction logs at approximately 235MBps. Postgres runs on a beefy machine with a 10+GBps network link between Postgres and the toy subscriber.

My expectation with this setup was that the replication lag on the master would be minimal, as the subscriber acks the LSN almost immediately. However, I observe the replication lag increasing continuously for the duration of the test. Statistics in pg_replication_slots show that restart_lsn lags significantly behind confirmed_flush_lsn. Cursory reading on restart_lsn suggests that an increasing gap between restart_lsn and confirmed_flush_lsn means that Postgres needs to reclaim disk space and advance restart_lsn to catch up to confirmed_flush_lsn.

With that context, I am looking for answers to two questions:
1. What work needs to happen in the database to advance restart_lsn to confirmed_flush_lsn?
2. What is the recommendation on tuning the database to improve the replication lag in such scenarios?

Regards,
Satyam