I'm currently seeing more and more problems with vacuum as the DB size gets bigger and bigger (~220GB+). Bear in mind that I'm working on a fairly big DB with unfairly sized hardware (Celeron 1.7GHz; 2x500GB RAID1 for dbspace1; 1x500GB for dbspace2; 1x80GB system disk; 768MB RAM; 2GB swap on dbspace2). IO is the main bottleneck during vacuum, and I've had a vacuum stuck on one particular 5GB table for over 5 hours without making progress, so that I had to kill the entire DB and restart.

Right now I've already partitioned some of the huge tables (weekly) and moved the older partitions into read-only tables that get no further updates/deletes. I'm doing both autovacuum and a nightly vacuum. The nightly vacuum is DB-wide (so that I can see the FSM usage), and it unnecessarily(?) hits the read-only tables. Is there a way to tell it _not_ to vacuum those tables and yet still give me the FSM summary? (If I vacuum per-table via vacuumdb, I don't get the FSM information.)

autovacuum = on                           # enable autovacuum subprocess?
autovacuum_vacuum_threshold = 200         # min # of tuple updates before vacuum
autovacuum_vacuum_scale_factor = 0.03     # fraction of rel size before vacuum
autovacuum_analyze_scale_factor = 0.02    # fraction of rel size before analyze

I've bumped maintenance_work_mem from 32MB to 64MB and now to 128MB, and I've stopped all other activity on the DB while vacuuming; at 128MB things stay zippy.

Any comments on how best to tune this (within the aforementioned hardware limitations) would be appreciated.

ps:
shared_buffers = 200MB
effective_cache_size = 350MB
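
For concreteness, here's roughly the weekly partitioning scheme I'm using: plain inheritance plus CHECK constraints (the table and column names below are made up for illustration):

-- parent table; no rows are stored here directly
CREATE TABLE events (
    id      bigint,
    ts      timestamp NOT NULL,
    payload text
);

-- one child table per week; with constraint_exclusion = on,
-- the CHECK constraint lets the planner skip irrelevant weeks
CREATE TABLE events_2007w35 (
    CHECK (ts >= '2007-08-27' AND ts < '2007-09-03')
) INHERITS (events);

Once a week falls out of the active window it gets a final vacuum and then no more updates/deletes, which is why vacuuming those partitions every night seems wasted.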
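
And this is the kind of per-table pass I was referring to: generate the VACUUM commands from the catalog and skip the read-only partitions (the 'events_2007w%' naming pattern is again just for illustration). The catch, as mentioned above, is that a per-table pass like this doesn't print the database-wide free space map summary that a full VACUUM VERBOSE run does:

-- feed the output back through psql, e.g.:
--   psql -tA -f gen_vacuum.sql mydb | psql mydb
SELECT 'VACUUM ANALYZE ' || schemaname || '.' || tablename || ';'
FROM pg_tables
WHERE schemaname = 'public'
  AND tablename NOT LIKE 'events_2007w%';  -- skip the frozen weekly partitions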
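
For reference, here are the relevant postgresql.conf lines as I have them now, plus the cost-based vacuum delay settings I'm considering to keep vacuum from saturating the disks (the delay values are just a starting guess, not something I've tested):

maintenance_work_mem = 131072        # 128MB (in kB); was 32768, then 65536
vacuum_cost_delay = 20               # ms to sleep per cost batch; 0 disables throttling
vacuum_cost_limit = 200              # accumulated IO cost before each sleep
autovacuum_vacuum_cost_delay = 20    # same throttle for the autovacuum process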