Thanks,
as we understand it, the problem is a lack of autovacuum workers, i.e. autovacuum_max_workers in postgresql.conf is set too low for this workload.
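For example, this is roughly what we are considering (just a sketch; the value is an illustrative guess for our box, not a tuned recommendation, and autovacuum_max_workers only takes effect after a restart):

    -- ALTER SYSTEM writes postgresql.auto.conf, which overrides postgresql.conf;
    -- editing postgresql.conf directly would do the same thing
    ALTER SYSTEM SET autovacuum_max_workers = 6;  -- default is 3, needs restart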
One more question: what impact would this have on streaming replication? Will VACUUM FULL generate extra WAL files while it runs?
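While it runs we were planning to watch the WAL position and standby lag with something like this (PostgreSQL 10 names; on 9.x the equivalents are pg_current_xlog_location(), pg_xlog_location_diff() and the *_location columns):

    -- current WAL write position on the primary
    SELECT pg_current_wal_lsn();

    -- how many bytes each standby is behind the primary
    SELECT application_name,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;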
Thanks a lot
2018-03-26 18:24 GMT+03:00 hubert depesz lubaczewski <depesz@xxxxxxxxxx>:
On Mon, Mar 26, 2018 at 05:33:19PM +0300, Artem Tomyuk wrote:
> Can't, it generates huge IO spikes.
>
> But....
>
> A few hours ago I manually started VACUUM VERBOSE on pg_attribute; it has now
> finished and here is the output:
>
> INFO: "pg_attribute": found 554728466 removable, 212058 nonremovable row
> versions in 44550921 out of 49326696 pages DETAIL: 178215 dead row versions
> cannot be removed yet. There were 53479 unused item pointers. 0 pages are
> entirely empty. CPU 1097.53s/1949.50u sec elapsed 6337.86 sec. Query
> returned successfully with no result in 01:47:3626 hours.
>
> what do you think?
>
> select count(*) on pg_attribute returns:
> 158340 rows
>
> So as I understand it, VACUUM FULL will create a new pg_attribute and write out
> only that amount of "valid" rows, but it will still have to scan the 300GB old table?
> So the time estimate will be roughly the same as for a regular vacuum?
more or less, yes.
the thing is - find and fix whatever is causing this insane churn of
tables/attributes.
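for a start, something like this might show where the churn comes from (just a sketch - it assumes the usual culprit, sessions that create and drop lots of temporary tables, since every column of every new table becomes a row in pg_attribute):

    -- catches only statements running right now, so sample it repeatedly
    SELECT pid, usename, application_name, query
    FROM pg_stat_activity
    WHERE query ILIKE 'create temp%';

    -- dead-tuple build-up on pg_attribute between samples
    SELECT n_dead_tup, n_tup_ins, n_tup_del, last_autovacuum
    FROM pg_stat_sys_tables
    WHERE relname = 'pg_attribute';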
Best regards,
depesz