
Re: Automatic aggressive vacuum on almost frozen table takes too long

On Mon, Feb 20, 2023 at 9:43 PM Mikhail Balayan <mv.balayan@xxxxxxxxx> wrote:
> What catches my eye: scanning indexes smaller than 3.1GB is fast, larger ones are slow. For example:
> idx_applications2_policy_id is 3131 MB took just 5 seconds (DETAIL:  CPU: user: 2.99 s, system: 1.65 s, elapsed: 5.32 s)
> but idx_applications2_deleted_at with 3264 MB took 1 minute 22 seconds (DETAIL:  CPU: user: 67.93 s, system: 3.41 s, elapsed: 82.75 s)

I think that I know what this is.

If you delete many index pages during VACUUM, and those pages are all
full of duplicate values, the deletion operation can sometimes be
slower, because VACUUM must re-locate the downlink to each
to-be-deleted leaf page. When there are thousands of matches, you'll
start to notice O(n^2) behavior, because the B-Tree VACUUM code must
grovel through the parent level, which is full of duplicate keys.
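
To illustrate the shape of that O(n^2) cost (a toy sketch in Python,
not the actual nbtree code; the names and data layout are invented for
illustration): when every parent-level downlink carries the same key,
finding the one downlink that points at a particular to-be-deleted
leaf page degenerates into a walk over all of the duplicates, once per
deleted page.

    # Toy model only -- not PostgreSQL source. Parent level modeled as a
    # list of (key, child_block) downlinks that all share one duplicate key.

    def find_downlink_pre_pg12(parent_items, key, target_child):
        """Walk every duplicate of `key` until the right child pointer
        shows up: O(number of duplicates) per leaf page that gets deleted."""
        for pos, (item_key, child) in enumerate(parent_items):
            if item_key == key and child == target_child:
                return pos
        raise KeyError("downlink not found")

    # Deleting n leaf pages that all share one key repeats this scan n
    # times, which is where the O(n^2) total comes from.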

If you were on Postgres 12+, this wouldn't happen, because the heap
TID is treated as part of the key space there, affecting sort order.
The implementation would immediately locate the matching parent
downlink using a unique key (unique because the heap TID acts as a
unique-ifier on those versions). And if you were on 14+, things in
this area would be much better still.
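
A corresponding toy sketch of the 12+ behavior (again illustrative
only; the real on-disk pivot tuple representation differs): once the
heap TID participates in the sort order, every entry is unique, so the
matching downlink can be found directly instead of by scanning
duplicates.

    import bisect

    def find_downlink_pg12_plus(sorted_parent_keys, key, heap_tid):
        """Binary search on the composite (key, heap TID): O(log n) per
        deletion, since (key, heap_tid) is unique in the sorted list."""
        pos = bisect.bisect_left(sorted_parent_keys, (key, heap_tid))
        if pos < len(sorted_parent_keys) and sorted_parent_keys[pos] == (key, heap_tid):
            return pos
        raise KeyError("downlink not found")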

-- 
Peter Geoghegan




