Re: Recommendation to run vacuum FULL in parallel

Hi Stephen,

Thanks for the response.

Version: 9.2
We have never run VACUUM FULL in the past; all we do is a manual vacuum (weekly).
Based on our observation (a test run), we were able to reclaim 150 GB out of 500 GB.

We are heading into a planned downtime soon, so we are thinking of running FULL during that window.

Reasons to run FULL:
1. Reclaim unused space that Postgres is never going to use.
2. FULL may increase performance.
3. Daily backup size and time will be reduced after reclaiming 150 GB.
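
If it helps, pgstattuple (a contrib extension, available on 9.2) can confirm which tables are worth a FULL pass; a minimal sketch, with a placeholder table name:

    -- Requires the contrib module to be installed on the server.
    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- Reports dead_tuple_percent and free_percent for one table.
    -- Note: this scans the whole table, so run it off-peak.
    SELECT * FROM pgstattuple('public.orders');  -- placeholder table name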

Thanks,
Raj


On Wed, Apr 3, 2019 at 1:16 PM Stephen Eilert <contact@xxxxxxxxxxxxxxxxx> wrote:
> Ideally VACUUM FULL should not require a giant lock on the table.

It is a massively expensive operation, regardless. Not sure if it is something you want to run in production outside a maintenance window.

I would argue that frequent vacuum full is an antipattern. This will become a matter of superstition in your company.

If DB size growth is a problem, make autovacuum more aggressive. Or run your manual vacuum job (not full) more often than weekly. Daily, if you have to. This will not reclaim disk space as reported by the OS, but it should make the space available for new row versions, so the DB should mostly stop growing from the OS point of view (mostly, because you may be adding new data, right?). If it is still a problem, then there may be something else going on.
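
As a sketch of per-table tuning (the table name is hypothetical and the numbers are only a starting point), the autovacuum storage parameters can be lowered without touching the global config:

    -- Autovacuum fires on a table once dead rows exceed
    -- autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples.
    ALTER TABLE big_table SET (
        autovacuum_vacuum_scale_factor = 0.02,  -- default is 0.2
        autovacuum_vacuum_threshold    = 1000   -- fixed floor of dead rows
    );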

Which PG version is that?


— Stephen
On Apr 3, 2019, 10:02 AM -0700, Perumal Raj <perucinci@xxxxxxxxx>, wrote:
Hi All,

Thanks for all your valuable inputs.

Here is some more data:

Though we have 150 GB of free space spread across 500 tables, the DB is still growing at a rate of about 1 GB every other day.
Also, we have a manual vacuum job scheduled on a weekly basis, so it seems the space is not being reused all the time?

To conclude the requirement here: the only way to parallelize is with multiple scripts, and there is no need to run REINDEX separately.
Question: do we need to consider table dependencies while preparing the scripts, in order to avoid table locks during VACUUM FULL?
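
Our current plan is to generate the per-script lists with something like the query below (4 sessions is an arbitrary choice; it deals tables out round-robin by size, so each session gets a similar share). Our understanding is that VACUUM FULL takes an ACCESS EXCLUSIVE lock only on the table it is rewriting, and rebuilds that table's indexes as part of the rewrite, so the lists should not need to respect foreign-key dependencies; please correct us if that is wrong.

    -- Deal user tables round-robin into 4 buckets, largest first,
    -- and emit one VACUUM FULL statement per table.
    SELECT (row_number() OVER (ORDER BY pg_total_relation_size(c.oid) DESC) - 1) % 4 + 1 AS bucket,
           'VACUUM FULL ' || quote_ident(n.nspname) || '.'
                          || quote_ident(c.relname) || ';' AS cmd
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY bucket, cmd;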

At present, maintenance_work_mem is set to 20 GB.
Question: do we need to tweak any other parameters?
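
One thing we are assuming (please correct us if wrong): maintenance_work_mem is a per-session limit, so several parallel sessions could each allocate up to 20 GB. We would lower it per session, e.g.:

    -- Per-session override so parallel sessions stay within RAM
    -- (the 4GB figure is just an example).
    SET maintenance_work_mem = '4GB';
    VACUUM FULL VERBOSE public.orders;  -- placeholder table name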

Note: we are planning this activity with application downtime only.

Let me know if I missed anything.

Regards,
Raj

On Wed, Apr 3, 2019 at 8:42 AM rihad <rihad@xxxxxxx> wrote:
> And future updates can reuse it, too (an update is very similar to an
> insert+delete).


Hm, then it's strange that our DB takes six times as much space as a freshly restored one (only the public schema is considered).

> Not if autovacuum has a chance to run between updates.

Ours runs regularly, although we had to tune it down so it doesn't interfere with normal database activity; as a result, each run takes several hours on the table. We did that by setting autovacuum_vacuum_scale_factor = 0.05, down from the default of 0.2.
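
As a sanity check, the standard statistics view shows whether autovacuum is keeping up; tables where dead rows accumulate faster than they are reclaimed float to the top:

    -- Worst-bloating tables first, with the time of the last
    -- autovacuum pass over each.
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;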


