On Tuesday 09 December 2014 16:56:39 Tom Lane wrote:
> Vincent de Phily <vincent.dephily@xxxxxxxxxxxxxxxxx> writes:
> > It reads about 8G of the table (often doing a similar number of writes,
> > but not always), then starts reading the pkey index and the second index
> > (only 2 indexes on this table), reading both of them fully (some writes
> > as well, but not as many as for the table), which takes around 8h.
> >
> > And the cycle apparently repeats: process a few more GB of the table,
> > then go reprocess both indexes fully. A rough estimate is that it spends
> > ~6x more time (re)processing the indexes as it does processing the table
> > (looking at data size alone the ratio would be 41x, but the indexes go
> > faster). I'm probably lucky to only have two indexes on this table.
> >
> > Is that the expected behaviour?
>
> Yes. It can only remember so many dead tuples at a time, and it has
> to go clean the indexes when the dead-TIDs buffer fills up.

Fair enough. And I guess it scans the whole index each time because the
dead tuples are spread all over?

What happens when vacuum is killed before it has had time to go through
the index with its dead-TID buffer? Surely the index isn't irreversibly
bloated, and whatever is done then could be done in the normal case? It
still feels like a lot of wasted IO.

> You could increase maintenance_work_mem to increase the size of that
> buffer.

Will do, thanks.

--
Vincent de Phily

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
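For anyone sizing that buffer: a back-of-the-envelope sketch, assuming each
dead TID costs about 6 bytes (the size of an ItemPointer, which is how
vacuum's dead-tuple array stores them in the PostgreSQL versions of that
era), of how many dead tuples a given maintenance_work_mem can track before
vacuum must pause and scan every index:

```python
# Rough estimate of vacuum's dead-TID capacity per maintenance_work_mem.
# Assumption: ~6 bytes per dead tuple (one ItemPointer in the dead-tuple
# array); real capacity varies slightly by version and overhead.
BYTES_PER_DEAD_TID = 6

def max_dead_tuples(maintenance_work_mem_mb: int) -> int:
    """Approximate dead TIDs that fit before an index-cleanup pass."""
    return maintenance_work_mem_mb * 1024 * 1024 // BYTES_PER_DEAD_TID

print(max_dead_tuples(64))    # default 64MB -> 11184810 (~11M dead TIDs)
print(max_dead_tuples(1024))  # 1GB -> 178956970 (~179M dead TIDs)
```

So a table accumulating hundreds of millions of dead rows between vacuums
can overflow the default buffer many times over, and each overflow triggers
a full scan of every index, which matches the repeated index passes
described above.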