How to avoid vacuuming a huge logging table

Our application has a table that is only logged to and infrequently
used for reporting. There are generally no deletes or updates.

Recently, its sheer size (an estimated 36 million rows) caused a serious
problem: it prevented a "vacuum analyze" on the whole database from
finishing in a timely manner.

As I understand it, a table with this usage pattern shouldn't need
routine vacuuming anyway (aside from the occasional vacuum to guard
against transaction-ID wraparound).
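
As a sanity check, I assume a query along these lines against
pg_stat_user_tables would confirm the table accumulates almost no dead
rows ("activity_log" is a stand-in for our real table name):

    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'activity_log';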

I'm looking for general advice from people who have faced the same
issue. I'm considering a number of alternatives:

1. Once a month, we could delete and archive old rows, for possible
re-import later if we need to report on them. It would seem the archive
would need to be in a form we can re-import directly, such as proper
insert statements. (Maybe there is a solution for that with table
partitioning? See the first sketch after this list.)

2. We could find a way to exclude the table from vacuuming, and let it
grow even larger. Putting the table in its own database would
accomplish that, but it would be nice to avoid the overhead of a second
database connection. (Second sketch below.)

3. Take a really different approach: log to text files in CSV format
instead, and only import the date ranges we need "on demand" if a report
is requested on the data. (Third sketch below.)
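
For option 1, here is roughly what I have in mind. COPY output can be
re-imported directly, which might save us from generating individual
INSERT statements; the table, column, and file names are made up for
illustration:

    -- Archive rows older than a month to a file COPY can re-import later.
    COPY (SELECT * FROM activity_log
          WHERE logged_at < now() - interval '1 month')
        TO '/archive/activity_log_old.csv' WITH CSV;

    -- Then drop the archived rows from the live table.
    DELETE FROM activity_log
    WHERE logged_at < now() - interval '1 month';

And if partitioning is the answer, I gather range partitioning on the
timestamp (declarative in PostgreSQL 10 and later) would let us detach
or drop a month in one cheap step instead of deleting row by row:

    CREATE TABLE activity_log (
        logged_at timestamptz NOT NULL,
        details   text
    ) PARTITION BY RANGE (logged_at);

    CREATE TABLE activity_log_2024_01 PARTITION OF activity_log
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

    -- A whole month can later be detached (or dropped) at once:
    ALTER TABLE activity_log DETACH PARTITION activity_log_2024_01;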
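
For option 2, rather than a whole second database, maybe the table can
simply be skipped: newer releases allow disabling autovacuum per table,
and a manual vacuum can always be run table by table so the log table
is just left out. A sketch, with assumed names:

    -- Tell autovacuum to leave this one table alone (storage parameter,
    -- available since PostgreSQL 8.4):
    ALTER TABLE activity_log SET (autovacuum_enabled = false);

    -- Or vacuum the rest of the database one table at a time,
    -- skipping the log:
    VACUUM ANALYZE customers;
    VACUUM ANALYZE orders;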
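
For option 3, the on-demand import looks straightforward with COPY:
load only the files covering the requested date range into a scratch
table and report from there. The file layout here is hypothetical, and
the path must be readable by the server (psql's \copy would work
client-side):

    -- Scratch table that disappears when the session ends.
    CREATE TEMP TABLE report_scratch (
        logged_at timestamptz,
        details   text
    );

    -- Pull in just the days the report needs.
    COPY report_scratch FROM '/logs/activity_2024_01_15.csv' WITH CSV;

    -- ...then run the report queries against report_scratch.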

Thanks for any tips.

    Mark

