On Mon, Apr 25, 2022 at 10:33 AM Michael Lewis <lewis.michaelr@xxxxxxxxx> wrote:
>
> On Thu, Apr 21, 2022 at 6:17 AM Chris Bisnett <cbisnett@xxxxxxxxx> wrote:
>>
>> We're working to update our application so that we can
>> take advantage of the pruning. Are you also using native partitioning?
>
> No partitioned tables at all, but we do have 1,800 tables, some very complex functions, some trigger insanity, a huge number of indexes, etc.
>
> There are lots of things to fix, but I do not yet have a good sense of the most important thing to address right now to reduce the odds of this type of traffic jam occurring again. I very much appreciate you sharing your experience. If I could reliably reproduce the issue, or knew which data points to start collecting going forward, that would at least give me something to go on; as it is, it feels like I am just waiting for it to happen again and hoping that some bit of information makes itself known that time.
>
> Perhaps I should have posted this to the performance list instead of general.

In my experience, lwlock contention (especially around buffer_mapping) is more about concurrent write activity than any particular number of tables/partitions. The first recommendation I would have is to install pg_buffercache and see if you can capture some snapshots of what the buffer cache looks like, especially looking for pinning_backends. I'd also spend some time capturing pg_stat_activity output to see which relations are in play for the queries that are waiting on those lwlocks (especially trying to map write queries to tables/indexes).

Robert Treat
https://xzilla.net
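
As a rough sketch of the kind of snapshots I mean (assuming the contrib pg_buffercache extension is installed and you have sufficient privileges; column names per the standard catalogs, but adjust to your version):

```sql
-- Install the contrib extension (once per database)
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Snapshot: which relations dominate shared buffers in the current
-- database, and how many of their buffers are currently pinned
SELECT c.relname,
       count(*)                AS buffers,
       sum(b.pinning_backends) AS pinned
FROM pg_buffercache b
JOIN pg_class c    ON c.relfilenode = b.relfilenode
JOIN pg_database d ON d.oid = b.reldatabase
WHERE d.datname = current_database()
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;

-- Snapshot: backends currently waiting on the BufferMapping lwlock,
-- with the query text so you can map writers to tables/indexes
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'LWLock'
  AND wait_event = 'BufferMapping';
```

Running both on a cron/interval during busy periods and diffing the results over time is usually more informative than a single capture.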