On Thu, Feb 16, 2023 at 7:44 AM Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
> > That is, if I understand it correctly, it says that there were (and
> > actually are) 2013128 pages of which 2008230 were skipped, which leaves
> > 4898 blocks to be scanned. I.e. it seems that the allocated 1GB
> > (autovacuum_work_mem) should be enough to handle that amount of blocks
> > and to avoid multiple scans of the indexes.

It's quite clear that there is only one pass over the indexes, since "index scans: 1" says exactly that.

> > But, based on buffer usage, one can see that a huge amount of data is
> > read, greatly exceeding not only the number of remaining unfrozen
> > blocks, but also the size of the table and indexes taken together: 2
> > billion blocks, more than 15TB.
> >
> > Is this a bug in Postgresql or am I interpreting the log data wrong?

I think that it might be a bug in Postgres. I addressed a similar issue in the same "buffer" instrumentation in commit d3609dd2, but that wasn't backpatched because I imagined that it only applied to the new VACUUM VERBOSE case (VACUUM VERBOSE only started sharing the same instrumentation code as log_autovacuum in Postgres 15).

It's not immediately obvious how you could see a problem like the one you've shown in the autovacuum log output. Even so, the buffer information that you've shown does indeed appear to be total nonsense (while everything else we can see looks plausible). There has to be some explanation for that.

The only other explanation I can think of is a pathological case where an index scan by some particular ambulkdelete routine scans a number of buffers that vastly exceeds the total size of the index. That does seem just about possible with an access method like GIN. Do you have any non-btree indexes on the table? Can you show us the details of the table, including all of its indexes? In other words, can you show "\d applications" output from psql?
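If it's more convenient, a catalog query along these lines (assuming the table really is named "applications", per your log output) should list each index and its access method directly:

SELECT c.relname AS index_name, am.amname AS access_method
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid  -- pg_class row for each index
JOIN pg_am am ON am.oid = c.relam        -- its access method (btree, gin, ...)
WHERE i.indrelid = 'applications'::regclass;

--
Peter Geoghegan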