Re: Performance query about large tables, lots of concurrent access

Francisco Reyes wrote:
> Karl Wright writes:

>> Okay - I started a VACUUM on the 8.1 database yesterday morning, with the database remaining under load. As of 12:30 today (~27 hours), the original VACUUM was still running. At that point:

> I don't recall if you said it already, but what is your maintenance_work_mem?

I'm trying that now.
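
For reference, checking and raising the setting looks roughly like this; note that in 8.1 the value is given in kilobytes (unit suffixes such as '512MB' only arrive in 8.2), and 512MB here is purely an illustrative figure, not a recommendation:

    -- Show the current value.
    SHOW maintenance_work_mem;

    -- Raise it for this session before vacuuming; the value is in kB
    -- in 8.1, so 524288 = 512MB (an arbitrary example).
    SET maintenance_work_mem = 524288;
    VACUUM VERBOSE;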

>> (a) I had to shut it down anyway, because I needed to run another experiment involving database export/import performance, and

> Do you know which tables change the most often?
> Have you tried vacuuming those one at a time to see how long each takes?

I can certainly do that, but at the rate it's currently operating, that may take many more days.
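
A minimal sketch of one way to find the high-churn tables from the statistics collector, assuming row-level stats are enabled (stats_row_level = on in 8.1); some_big_table is a made-up name:

    -- Tables with the most write activity since stats were last reset.
    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd + n_tup_del DESC
    LIMIT 10;

    -- Vacuum one table at a time; VERBOSE reports how many dead
    -- row versions it found and removed.
    VACUUM VERBOSE some_big_table;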


>> (b) the performance of individual queries had already degraded significantly, in the same manner as I'd seen before.

> If you have a lot of inserts, perhaps you could also run ANALYZE more often.

I'm not getting bad query plans; I'm getting good plans but slow execution. This is consistent with what someone else said: if you don't run VACUUM often enough, dead tuples cause exactly the kind of performance degradation I'm seeing.

(FWIW, the application itself kicks off an ANALYZE after every 30,000 inserts, updates, or deletes.)
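
8.1's integrated autovacuum can drive both ANALYZE and VACUUM off similar change counters; a minimal postgresql.conf sketch, with illustrative thresholds rather than tuned recommendations:

    # Autovacuum in 8.1 requires row-level statistics.
    stats_start_collector = on
    stats_row_level = on
    autovacuum = on

    # A table is processed once changed rows exceed
    # threshold + scale_factor * table size.
    autovacuum_analyze_threshold = 30000
    autovacuum_analyze_scale_factor = 0.1
    autovacuum_vacuum_threshold = 30000
    autovacuum_vacuum_scale_factor = 0.2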

>> So, I guess this means that there's no way I can keep the database adequately vacuumed with my anticipated load and hardware.

> It is a possibility, but you could consider other strategies; it depends entirely on the programs accessing the data.

> For example: do you have any historical data that never changes?

Some, but it's insignificant compared to the big tables that change all over the place.

> Could that be moved to a different database on the same machine, or to another machine? That would decrease your vacuum times.

That's not an option, since we ship appliances and this would require that each appliance somehow come in pairs.

> Partitioning the data so that the data which never changes sits in separate tables may also help (but I am not sure of this).


Right, see earlier discussion.
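
For anyone curious, partitioning in 8.1/8.2 is done with table inheritance plus CHECK constraints; a minimal sketch, with made-up table names and dates:

    -- Parent table that queries target.
    CREATE TABLE events (
        id      bigint      NOT NULL,
        created timestamptz NOT NULL,
        payload text
    );

    -- Static history gets its own child table, so routine VACUUM
    -- can be confined to the hot partition.
    CREATE TABLE events_2006    (CHECK (created <  '2007-01-01')) INHERITS (events);
    CREATE TABLE events_current (CHECK (created >= '2007-01-01')) INHERITS (events);

    -- Lets the planner skip child tables whose CHECK constraint
    -- rules them out.
    SET constraint_exclusion = on;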

> Given the sizes you sent to the list, it may simply be more than the hardware can handle.


I'm going to recommend going to 8.2 so that we get as much improvement as possible before panicking entirely. :-)

Karl

