
Re: UPDATE on two large datasets is very slow

On Apr 3, 2007, at 11:44 AM, Scott Marlowe wrote:

> I can't help but think that the way this application writes data is
> optimized for MySQL's transactionless table type, where lots of
> simultaneous input streams writing at the same time to the same table
> would be death.
>
> Can you step back and work on how the app writes out data, so that it
> opens a persistent connection, and then sends in the updates one at a
> time, committing every couple of seconds while doing so?
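
For illustration, that pattern would look something like this (page_hits and its columns are made-up names):

    -- One persistent connection, many single-row updates per
    -- transaction, instead of connecting and committing per statement:
    BEGIN;
    UPDATE page_hits SET hit_count = hit_count + 1 WHERE url_id = 42;
    UPDATE page_hits SET hit_count = hit_count + 1 WHERE url_id = 97;
    -- ... keep streaming updates ...
    COMMIT;  -- issue a COMMIT every couple of seconds, then BEGIN again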

I'd look into indexing the tables your UPDATE touches in such a way that you're not doing so many sequential scans.
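
A quick way to check (table and column names here are hypothetical) is to run EXPLAIN on the update and look at the plan:

    EXPLAIN UPDATE orders SET status = 'shipped' WHERE customer_id = 1234;
    -- A line like "Seq Scan on orders" means the whole table is read;
    -- an index on the WHERE column usually turns that into an index scan:
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);
    ANALYZE orders;  -- refresh planner statistics after the change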

I have a system that does many updates on a quickly growing db - 5M rows last week, 25M this week.

Even simple updates could take forever because of poor indexing on the fields referenced in the update's WHERE clause and on foreign key columns.
With proper indexing in place, the system is super fast again.

So I'd look into creating new indexes and trying to shift the sequential scans to more time-efficient index scans.
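
On the foreign key point: PostgreSQL only indexes the referenced side of a foreign key (via its primary or unique key), not the referencing column, so those usually need indexes created by hand. A made-up example:

    -- order_items.order_id references orders(id); the FK constraint
    -- does NOT create an index on order_items.order_id by itself:
    CREATE INDEX order_items_order_id_idx ON order_items (order_id);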



