Hi everyone,

I'm a Java developer, so please bear with me if my description isn't perfect; I hope the community can help me understand this issue better.

I have PostgreSQL 15 running in a Kubernetes pod using the Zalando Spilo image, managed by Patroni. The container has a memory limit of 16 GB, and the database is 40 GB on disk (data directory size).

When I reinitialize a replica with Patroni, it runs 'basebackup.sh', which in turn runs 'pg_basebackup':

  /usr/lib/postgresql/15/bin/pg_basebackup --pgdata=/home/postgres/pgdata/pgroot/data -X none --dbname='dbname=postgres user=standby host=<Leader IP> port=5432'

I noticed that the first 16 GB are copied quickly, but the process slows down significantly once the amount copied is roughly equal to the container's memory limit. Increasing the limit to 32 GB showed the same pattern: the first 32 GB were copied quickly, then the copy slowed down.

Running the command manually reproduces the issue:

  /usr/lib/postgresql/15/bin/pg_basebackup --pgdata=/home/postgres/pgdata/pgroot/dummy_dir -X none --dbname='dbname=postgres user=standby host=<Leader IP> port=5432'

Once the container's total memory usage, which includes the page cache (inactive file pages make up the lion's share of it), reaches the limit, pg_basebackup slows down. Other disk write operations, such as generating and copying large files with 'dd', are not affected by the memory limit and stay fast.

When pg_basebackup is slow and the container is at its memory limit, dropping the page cache with

  sync && echo 3 > /proc/sys/vm/drop_caches

makes pg_basebackup fast again.

Is pg_basebackup performance limited by the page cache? Do you need any additional information from me? Any suggestions?

Thanks for your help!

Regards,
AlexL
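
P.S. In case it helps to reproduce the observation, here is a rough sketch of how the container's cgroup memory counters can be watched while pg_basebackup runs. This assumes cgroup v2 is mounted at /sys/fs/cgroup inside the container; under cgroup v1 the equivalent files are memory/memory.usage_in_bytes, memory/memory.limit_in_bytes, and the total_inactive_file line in memory/memory.stat.

  # Print current usage, the configured limit, and the file-cache breakdown every 5 seconds
  while true; do
      echo "usage: $(cat /sys/fs/cgroup/memory.current)  limit: $(cat /sys/fs/cgroup/memory.max)"
      grep -E '^(active_file|inactive_file) ' /sys/fs/cgroup/memory.stat
      sleep 5
  done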