Hello list,
The system is running Linux kernel 2.6.18 with PostgreSQL 8.2.4 and 1 GB of RAM.
I have a 50 GB database whose biggest table takes about 30 GB
and has about 200 million rows.
I've already started to redesign the database to avoid the huge number
of rows in this big table, but I'm still curious: why does the autovacuum
process hog over 200 MB even when it is not running?
Is it shared_buffers?
Thanks,
Henke
shared_buffers = 128MB
work_mem = 10MB
maintenance_work_mem = 64MB
vacuum_cost_delay = 0 # 0-1000 milliseconds
vacuum_cost_limit = 200 # 0-10000 credits
effective_cache_size = 256MB
autovacuum_vacuum_cost_delay = 50
autovacuum_vacuum_cost_limit = 150
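
For reference, this is roughly how I've been trying to tell shared memory
from private memory for that process. It is only a quick sketch: it assumes
Linux /proc is available, assumes 4 kB pages (check with getconf PAGESIZE),
and the PID is whatever ps/top reports for the autovacuum process. If the
"shared" number accounts for most of the resident size, then the ~200 MB is
presumably just the 128 MB of shared_buffers (plus the rest of the shared
memory segment) being counted against the process, rather than memory
autovacuum has allocated for itself.

#!/usr/bin/env python
# Sketch: split a process's resident memory into shared vs. private parts,
# to see whether the figure shown by top/ps is mostly shared_buffers pages
# mapped into the process. Assumes Linux /proc and 4 kB pages.
# Usage: python memcheck.py <pid-of-autovacuum-process>
import sys

PAGE_KB = 4  # assumption: 4 kB pages

def mem_breakdown(pid):
    # /proc/<pid>/statm fields (all in pages):
    # size resident shared text lib data dirty
    f = open("/proc/%s/statm" % pid)
    fields = f.read().split()
    f.close()
    resident_kb = int(fields[1]) * PAGE_KB
    shared_kb = int(fields[2]) * PAGE_KB
    return resident_kb, shared_kb

if __name__ == "__main__":
    pid = sys.argv[1]
    resident_kb, shared_kb = mem_breakdown(pid)
    print("resident: %d kB  shared: %d kB  private: %d kB"
          % (resident_kb, shared_kb, resident_kb - shared_kb))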