In response to Ivano Luberti <luberti@xxxxxxxxxxxx>:

> >> In only one case so far, the "code" table with 442 records has a size
> >> of 18MB. If I run a vacuum full and a reindex it shrinks to less than
> >> 100KB.
> >> If I use the software to delete the rows and reinsert the same records
> >> it explodes again to 18MB.
> >
> > That suggests the autovacuum system isn't checking the table often
> > enough. Or, perhaps that you have a long-lived transaction that is
> > preventing it from reclaiming space.
> >
> > Autovacuum is discussed at the bottom of this page:
> > http://www.postgresql.org/docs/8.4/static/routine-vacuuming.html
> > The "storage parameters" link has details on setting vacuum parameters
> > for a single table.
> >
> > If your application is sitting there holding open a transaction without
> > doing anything, stop doing that. It means the system can't be sure it's
> > safe to reclaim the space used by old versions of rows.
>
> No, the application is doing what it is supposed to do: inserting
> records. But when the size of the table is so great, inserts become
> really slow, so indeed autovacuum has been canceled a few times.

This is another strong indicator that your autovacuum is not configured
properly. You should probably make the settings more aggressive.

> Moreover, when autovacuum runs and the application is idle, it is able
> to run but not able to reclaim space.

I don't understand what that comment means.

> Then if I run vacuum manually with full, freeze and analyze checked, and
> I also run reindex, everything returns to normality.

Again, this indicates that autovacuum is not configured to do what you
want it to do.

> What really worries and puzzles me is that the size of the table is not
> coherent with other copies of the same table with a similar number of
> records.

Have you read the section on how MVCC and autovacuum work? The size of
the table is not a function of the # of records. It's the # of records,
plus the number of "dead rows" (which are records that have been deleted
or updated and thus replaced with new rows). If this table sees frequent
updates, then it will _NEVER_ be the size that you expect it to be if you
just multiply #rows * size of row, because it will _always_ have some
dead rows in the table. This is OK, it's how the system is designed to
work, and frequent VACUUM FULL is just wasting time, as the table will
just enlarge again. The key is that autovacuum runs often enough that the
size stabilizes based on usage.

For example, if you have a table that usually has 500 rows in it and it's
common for all the rows to be deleted and replaced, then you can expect
the table to _always_ be big enough to hold 1000 rows. However, if all
the rows are updated 5 times between each vacuum run, the table will be
big enough to hold 2500 rows most of the time. Again, this is typical;
it's how the system is designed to run.

I'm guessing (although a lot of the original email has been trimmed) that
the actual problem you're having is that autovacuum is taking too long,
and is slowing down modifications to table data.

If autovacuum is taking too long and is slowing down other operations, you
have a few options (you can do one or many of these):

* Run autovacuum more frequently (there's a sketch of per-table settings
  below).
* Modify your application so it modifies fewer rows (yes, sometimes this
  is possible and the best solution. The fact that the table blows up to
  180x the minimum size is a hint that you may be updating very
  inefficiently.)
* Tune PostgreSQL to make more efficient use of RAM for caching, which
  will speed everything up (see the settings query below).
* Get faster hardware.
* Upgrade to a newer version of PostgreSQL that has more efficient vacuum
  code (if you mentioned which version you are using, it was lost when
  the message was trimmed).
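Some concrete things you could look at (these are only sketches; adjust
names and numbers to your own setup). First, to see whether something is
holding a transaction open, as mentioned above: on an 8.4-era server you
can look for sessions that are idle inside a transaction (9.2 and later
renamed these columns):

    -- Sessions that have a transaction open but aren't running a query.
    -- Column names are for 8.x/9.0/9.1; newer releases use pid and state.
    SELECT procpid, usename, xact_start, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
    ORDER BY xact_start;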
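Second, to see how much of the "code" table is dead rows versus live rows,
and whether autovacuum has actually visited it, something like this should
work:

    -- Live vs. dead row counts, on-disk size, and last (auto)vacuum time
    -- for the "code" table from your earlier message.
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
           last_vacuum,
           last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'code';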
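Third, on making autovacuum more aggressive for just this table: 8.4 lets
you override the global thresholds through the per-table storage
parameters mentioned in the doc link above. The numbers here are only
illustrative, not a recommendation:

    -- Vacuum "code" once roughly 5% of it (plus 25 rows) is dead,
    -- instead of the default 20% plus 50 rows.
    ALTER TABLE code SET (
        autovacuum_vacuum_scale_factor = 0.05,
        autovacuum_vacuum_threshold = 25,
        autovacuum_analyze_scale_factor = 0.05
    );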
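Finally, for the RAM point in the list above, it's worth checking what the
server is actually configured with, since untuned defaults are very
conservative:

    -- Show the current values of a few settings that matter here.
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers',
                   'effective_cache_size',
                   'maintenance_work_mem',
                   'autovacuum',
                   'autovacuum_naptime');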
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/