Thanks Kevin.

Postgres version is 9.1.4 (the latest).

Every day the table receives about 7 million new rows. The table holds 60 days of data, so the total should stay at roughly 420 million rows. Every night a delete process runs and removes the rows older than 60 days. Since about the same number of rows arrives and is deleted each day, the space used by Postgres should not grow drastically, yet my disk runs out of space every 4 months. I have to copy the tables off the server, drop the local table and recreate it; after that I have enough space for about another 4 months.

Maybe the autovacuum configuration is wrong, but it is really hard to work out which values avoid a performance penalty while still keeping the table in good shape (a sketch of per-table settings follows the quoted message below). I think autovacuum should have some kind of "auto-config" that recalculates every day which configuration is best for the server's current conditions.

Thanks!

-----Original Message-----
From: Kevin Grittner [mailto:Kevin.Grittner@xxxxxxxxxxxx]
Sent: Thursday, August 16, 2012 04:52 p.m.
To: Anibal David Acosta; pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: best practice to avoid table bloat?

"Anibal David Acosta" <aa@xxxxxxxxxxxx> wrote:

> If I have a table from which about 8 million rows are deleted every
> night (the table has maybe 9 million), is it recommended to run a
> VACUUM ANALYZE after the delete completes, or can I leave that job
> to autovacuum?

Deleting a high percentage of the rows should cause autovacuum to deal with the table the next time it wakes up, so an explicit VACUUM ANALYZE shouldn't be needed.

> For some reason, with the same amount of data every day, Postgres
> consumes a little more space.

How are you measuring the data and how are you measuring the space? And what version of PostgreSQL is this?

-Kevin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
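
On the per-table autovacuum settings mentioned above: autovacuum vacuums a table once dead rows exceed roughly autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, so with the default scale factor of 0.2 a 420-million-row table accumulates about 84 million dead rows (about 12 nightly deletes' worth) before autovacuum touches it. Below is a minimal sketch of lowering that threshold for this one table; the table name daily_events is a placeholder, and the exact values would need testing against the real workload:

    -- With ~420M rows, a 1% scale factor (~4.2M dead rows) makes autovacuum
    -- fire after every nightly 7M-row delete instead of every ~12 days.
    ALTER TABLE daily_events SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_cost_delay   = 10  -- sleep 10 ms whenever the cost
                                             -- limit is hit, to throttle I/O
    );

Keep in mind that VACUUM only marks dead space as reusable within the table's files; it almost never returns space to the operating system, so well-tuned autovacuum stops further growth but does not shrink what has already bloated.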
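
As for Kevin's question about how the space is being measured, one way to watch on-disk size and dead-row counts side by side is the query below, which uses only the standard size functions and statistics views available in 9.1 (daily_events is again a placeholder):

    -- Total on-disk size (heap + indexes + TOAST) next to tuple counts.
    SELECT pg_size_pretty(pg_total_relation_size('daily_events')) AS total_size,
           n_live_tup,
           n_dead_tup,
           last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'daily_events';

If total_size keeps climbing even though last_autovacuum shows regular runs, index bloat is a likely suspect, since plain VACUUM makes index space reusable but never compacts the indexes themselves.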