Re: Question about VACUUM

Hi Kevin, my comments are inline below yours.

2011/12/3 Kevin Grittner <Kevin.Grittner@xxxxxxxxxxxx>:
> Ernesto Quiñones wrote:
>> Scott Marlowe  wrote:
>>> Ernesto Quiñones  wrote:
>
>>>> I want to know if it's possible to predict (calculate) how long
>>>> a VACUUM FULL on a table will take?
>
> I don't think you said what version of PostgreSQL you're using.
> VACUUM FULL prior to version 9.0 is not recommended for most
> situations, and can take days or weeks to complete where other
> methods of achieving the same end may take hours.  If you have
> autovacuum properly configured, you will probably never need to run
> VACUUM FULL.

I'm working with PostgreSQL 8.3 running on Solaris 10. My autovacuum
parameters are:

autovacuum                        on
autovacuum_analyze_scale_factor   0.5
autovacuum_analyze_threshold      50000
autovacuum_freeze_max_age         200000000
autovacuum_max_workers            3
autovacuum_naptime                1h
autovacuum_vacuum_cost_delay      -1
autovacuum_vacuum_cost_limit      -1
autovacuum_vacuum_scale_factor    0.5
autovacuum_vacuum_threshold       50000
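
(In case it helps to double-check, the effective values can also be read
straight from pg_settings with something like:

  SELECT name, setting, unit
  FROM pg_settings
  WHERE name LIKE 'autovacuum%'
  ORDER BY name;
)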

My vacuum parameters are:

vacuum_cost_delay        1s
vacuum_cost_limit        200
vacuum_cost_page_dirty   20
vacuum_cost_page_hit     1
vacuum_cost_page_miss    10
vacuum_freeze_min_age    100000000
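
(A back-of-the-envelope reading of the cost-based delay, so treat it as a
sketch: with vacuum_cost_limit = 200 and vacuum_cost_delay = 1s, a manual
VACUUM sleeps for a full second every time it accumulates 200 cost points.
At vacuum_cost_page_dirty = 20 that is a one-second pause after only about
10 dirtied pages, or about 20 page misses at cost 10, or 200 buffer hits at
cost 1, which would make any manual vacuum of a large table extremely slow.)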


> Ah, well that right there is likely to put you into a position where
> you need to do painful extraordinary cleanup like VACUUM FULL.  In
> most situations the autovacuum defaults are pretty good.  Where they
> need to be adjusted, the normal things which are actually beneficial
> are to change the thresholds to allow more aggressive cleanup or (on
> low-powered hardware) to adjust the cost ratios so that performance
> is less affected by the autovacuum runs.

My hard disks perform well and I have a good amount of memory, but
my CPU cores are weak, only 1 GHz each.

I have some questions here:

1. With autovacuum_max_workers = 3, does each worker process use its
own core, or do all 3 workers share one core?

2. When I run EXPLAIN ANALYZE on a very big table (30 million rows),
EXPLAIN reports about 32 million rows. I assume this means my statistics
are out of date by about 2 million rows, but is that a significant
difference, or is it a normal result?
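
For reference, a rough way to compare the tracked counts against a fresh
ANALYZE (just a sketch; "big_table" stands in for the real table name)
would be:

  -- 'big_table' is a placeholder table name
  ANALYZE VERBOSE big_table;

  SELECT relname, n_live_tup, n_dead_tup, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
  WHERE relname = 'big_table';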


Thanks for your help!

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


