
RE: [External] Re: walsender RAM increases by 500 MB while data is 80 MB


 



Hi,


Thanks, Masahiko Sawada, for your reply.

We did not have many transactions running when the walsenders started consuming more RAM than expected; there were only 1-2 transactions, each inserting a row with about 40-70 MB of binary data into a table.
Lowering logical_decoding_work_mem to as little as 1MB did not help.
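For reference, a change like this is typically applied along these lines (a sketch; logical_decoding_work_mem can be changed cluster-wide without a restart):

```sql
-- Sketch: lower the logical decoding memory limit cluster-wide.
ALTER SYSTEM SET logical_decoding_work_mem = '1MB';
SELECT pg_reload_conf();
-- Verify the active value:
SHOW logical_decoding_work_mem;
```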

Any other suggestions that might help?

Thanks!


-----Original Message-----
From: Masahiko Sawada <sawada.mshk@xxxxxxxxx>
Sent: Monday, March 11, 2024 10:38 AM
To: Avi Weinberg <AviW@xxxxxxxxx>
Cc: pgsql-generallists.postgresql.org <pgsql-general@xxxxxxxxxxxxxxxxxxxx>
Subject: [External] Re: walsender RAM increases by 500 MB while data is 80 MB

On Mon, Mar 11, 2024 at 12:33 AM Avi Weinberg <AviW@xxxxxxxxx> wrote:
>
> Hi Experts,
>
>
>
> Your input is most welcome!
>
>
>
> We are using Postgres 13 (and plan to upgrade to 15 soon).  We have logical replication with about 40 servers subscribing to one publisher, so 40 walsender processes run on the publisher server.  When we insert a row into a table holding binary data, walsender RAM usage increases by 500MB, although the row's binary data is only 80MB.  We see this increase in all walsender processes.  At some point we got an OOM and the process was killed.

This sounds similar to reports[1][2] we got before. Were there any long-running transactions at the time the 80MB data change was made? And is a core dump of the walsender process that was killed due to OOM available?
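Long-running transactions can be spotted with the standard pg_stat_activity view; a sketch, with the 5-minute threshold chosen arbitrarily:

```sql
-- Sketch: list transactions that have been open longer than 5 minutes.
SELECT pid, usename, state, xact_start,
       now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '5 minutes'
ORDER BY xact_start;
```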

> Why does walsender RAM increase by 500MB when the data change was only 80MB?
> Is some of the 500MB increase in shared memory, or does each walsender have its own 500MB increase?  I assume that if it were only in shared memory we would not have gotten OOM…
> Why, when logical_decoding_work_mem = 64MB, is the RAM usage 15 times that size?  Shouldn't any additional space spill to disk rather than RAM?
> Would adding streaming = on to the publication, as in "PUBLICATION pub WITH (streaming = on)", alleviate the issue?
> Are there configuration options that could resolve the RAM issue?  They can also be in version 15, since we plan to upgrade soon.
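Note that streaming of large in-progress transactions is configured on the subscription side, not on the publication, and it is only available from PostgreSQL 14 onward (so not on a version 13 publisher). A sketch, where "sub" is a hypothetical subscription name:

```sql
-- PostgreSQL 14+ only: stream large in-progress transactions to the
-- subscriber instead of buffering them until commit.
-- "sub" is a hypothetical subscription name.
ALTER SUBSCRIPTION sub SET (streaming = on);
```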

If you're facing an issue similar to the ones I shared above, temporarily setting logical_decoding_work_mem to a *lower* value could alleviate the situation.  A lower value leads to more evictions during logical decoding, which can probably avoid excessive memory use and OOM (note that logical decoding gets slower as a result). I think there is a memory accounting issue in logical decoding, which can end up using much more memory than the logical_decoding_work_mem limit in some scenarios. This issue is not fixed yet, and setting "streaming = on" doesn't help.
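For diagnosing where the memory actually goes, PostgreSQL 14 and later (i.e. after the planned upgrade to 15) can dump a backend's memory-context usage to the server log. A sketch, where 12345 is a placeholder for a walsender PID taken from pg_stat_replication:

```sql
-- Find walsender PIDs (one per subscriber):
SELECT pid, application_name, state FROM pg_stat_replication;

-- PostgreSQL 14+: log that backend's memory-context breakdown
-- to the server log.  12345 is a placeholder PID.
SELECT pg_log_backend_memory_contexts(12345);
```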

Regards,

[1] https://www.postgresql.org/message-id/CAMnUB3oYugXCBLSkih%2BqNsWQPciEwos6g_AMbnz_peNoxfHwyw%40mail.gmail.com
[2] https://www.postgresql.org/message-id/17974-f8c9d353a62f414d%40postgresql.org

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com/



