Re: Performance query about large tables, lots of concurrent access

Karl Wright writes:

> Okay - I started a VACUUM on the 8.1 database yesterday morning, with the database remaining under load. As of 12:30 today (~27 hours), the original VACUUM was still running. At that point:

I don't recall if you said it already, but what is your maintenance_work_mem?
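For reference, you can inspect the current value and raise it for a single session before vacuuming. A rough sketch (the 256 MB figure is purely illustrative; note that on 8.1 the value is an integer number of kilobytes, since unit suffixes like 'MB' only arrived in 8.2):

```sql
-- Check the current setting
SHOW maintenance_work_mem;

-- Raise it for this session only, before running VACUUM.
-- 262144 KB = 256 MB; size this to your available RAM.
SET maintenance_work_mem = 262144;
```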
> (a) I had to shut it down anyway because I needed to do another experiment having to do with database export/import performance, and

Do you know which tables change the most often?
Have you tried vacuuming them one at a time to see how long each takes?
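Per-table VACUUM with VERBOSE will also show how many dead row versions each pass removes, which helps identify which tables are the real problem. A minimal sketch, with a hypothetical table name:

```sql
-- Time one table at a time; VERBOSE reports pages and tuples processed
\timing
VACUUM VERBOSE my_hot_table;
```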

> (b) the performance of individual queries had already degraded significantly in the same manner as what I'd seen before.

If you have a lot of inserts, perhaps you could also run ANALYZE more often.
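ANALYZE only refreshes the planner statistics, so it is far cheaper than VACUUM and can be run frequently on the busiest tables. Again with a hypothetical table name:

```sql
-- Updates planner statistics only; does not reclaim dead rows
ANALYZE my_hot_table;
```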
> So, I guess this means that there's no way I can keep the database adequately vacuumed with my anticipated load and hardware.

That is a possibility, but you could consider other strategies; it depends entirely on the programs accessing the data.

For example:
Do you have any historical data that never changes?
Could that be moved to a different database on the same machine, or to another machine? That would decrease your vacuum times. Partitioning the data so that rows which never change live in separate tables may also help (though I am not sure of this).
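On 8.1 this kind of separation would be done with table inheritance plus CHECK constraints, so that static historical rows sit in their own child table and VACUUM of the active table never touches them. A rough sketch under assumed table and column names:

```sql
-- Hypothetical schema: parent table plus one child holding
-- historical rows that never change (8.1-era inheritance partitioning)
CREATE TABLE events (
    id      integer,
    stamp   timestamp,
    payload text
);

CREATE TABLE events_2006 (
    CHECK (stamp >= '2006-01-01' AND stamp < '2007-01-01')
) INHERITS (events);

-- With constraint exclusion (new in 8.1), queries filtering on stamp
-- can skip child tables whose CHECK constraint rules them out.
SET constraint_exclusion = on;
```

The win for vacuuming is that the static child table needs at most a rare VACUUM, while the small active table can be vacuumed quickly and often.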

Given the sizes you sent to the list, it may simply be more than the hardware can handle.

