Re: Heavy LWLockTranche buffer_mapping in Postgres 9.6.10 server

Thanks to everyone who pitched in. I finally resolved the issue. It turns out that table bloat was the culprit. The bloat couldn't be cleared by manual or auto vacuum, and it eventually reached a point where many queries had trouble finishing on time even though they used good query plans. The final solution was simply to run VACUUM FULL on some of the key tables, and that immediately quieted down the database.
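
For anyone who hits the same wall: one way to quantify the bloat is the pgstattuple extension, and the fix amounted to a VACUUM FULL along these lines (table name is a placeholder; VACUUM FULL takes an ACCESS EXCLUSIVE lock and rewrites the table, so it needs a maintenance window):

    -- pgstattuple reports dead-tuple and free-space percentages,
    -- which makes the bloat visible directly
    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT table_len, dead_tuple_percent, free_percent
    FROM pgstattuple('some_bloated_table');

    -- rewrite the table into its compact form; unlike plain VACUUM,
    -- this returns the reclaimed space to the OS
    VACUUM (FULL, VERBOSE, ANALYZE) some_bloated_table;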

thanks,

Justin

On Sun, Feb 2, 2020, 8:15 AM Andres Freund <andres@xxxxxxxxxxx> wrote:
Hi,

On 2020-02-01 16:17:13 -0700, Justin Lu wrote:
> We are seeing very heavy LWLockTranche buffer_mapping waits in the db recently.
>
> The server has 24 cores, 128GB of RAM, and an SSD-backed data filesystem,
> running Ubuntu 16.04.6.
> shared_buffers was at 32GB, 1/4 of overall RAM. No issues with checkpoints
> (avg 29 min apart).
>
> After seeing the heavy waits, we added 64GB more RAM and increased
> shared_buffers to 48GB and effective_cache_size to 90GB. But that seems to
> have had no impact on the buffer_mapping waits at all.
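
FWIW, a quick way to see how widespread these waits are is something like
this (in 9.6 they show up in pg_stat_activity as wait_event_type =
'LWLockTranche', wait_event = 'buffer_mapping'):

    -- count currently-waiting backends, grouped by wait event
    SELECT wait_event_type, wait_event, count(*)
    FROM pg_stat_activity
    WHERE wait_event IS NOT NULL
    GROUP BY 1, 2
    ORDER BY count(*) DESC;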

I suggest doing a perf profile with --call-graph dwarf, to see where
this is mostly coming from.
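
Something along these lines (system-wide sample; adjust the duration, or
use -p <pid> to target one backend):

    # record 60s of stacks with DWARF unwinding, then inspect
    perf record --call-graph dwarf -a -- sleep 60
    perf report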

One thing I've seen cause symptoms like this before is a sudden increase
in table truncations, drops, etc. - dropping or truncating a table or
index needs to scan all of shared buffers...

Greetings,

Andres Freund
