On Tue, Jul 17, 2012 at 6:30 AM, Craig Ringer <ringerc@xxxxxxxxxxxxx> wrote:
> On 07/17/2012 01:56 AM, Jon Nelson wrote:
> To perform reasonably well, Pg would need to be able to defer index updates
> when bulk-loading data in a single statement (or even transaction), then
> apply them when the statement finished or transaction committed. Doing this
> at a transaction level would mean you'd need a way to mark indexes as
> 'lazily updated' and have Pg avoid using them once they'd been dirtied
> within a transaction. No such support currently exists, and it'd be
> non-trivial to implement, especially since people loading huge amounts of
> data often want to do it with multiple concurrent sessions. You'd need some
> kind of 'DISABLE INDEX' and 'ENABLE INDEX' commands plus a transactional
> backing table of pending index updates.

It seems to me that if the insertion is done as a single statement, it
wouldn't be a problem to collect up all of the btree insertions and apply
them, sorted by key, just before completing the statement. I'm not sure how
much that would help, though. If the new rows have a uniform key
distribution, you end up reading in the whole index anyway, and because
btree pages are not stored on disk in key order, applying the entries in
sorted order still doesn't get you sequential I/O.

The lazy merging approach (the paper that Claudio linked), on the other
hand, seems promising, but it is a lot trickier to implement.

Regards,
Ants Aasma
--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de
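
P.S. To make the "collect and sort" idea concrete, here is a deliberately
toy C sketch. It has nothing to do with the actual PostgreSQL btree code;
every name in it (PendingEntry, defer_index_insert, flush_pending_inserts)
is invented for illustration. It just buffers the index entries a statement
would generate and applies them in key order at the end -- sorting means
consecutive entries hit the same or adjacent leaf pages, even though, per
the above, uniformly distributed keys still end up touching every page.

/*
 * Toy sketch only: buffer the index entries generated by one bulk-insert
 * statement, sort them by key, and apply them at statement end.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct PendingEntry {
    long key;        /* indexed column value */
    long heap_tid;   /* stand-in for the heap tuple id */
} PendingEntry;

static int cmp_entries(const void *a, const void *b) {
    long ka = ((const PendingEntry *) a)->key;
    long kb = ((const PendingEntry *) b)->key;
    return (ka > kb) - (ka < kb);
}

/* Called once per inserted row: just remember the entry. */
static void defer_index_insert(PendingEntry *buf, size_t *n, long key, long tid) {
    buf[(*n)++] = (PendingEntry) { key, tid };
}

/* Called at end of statement: apply all entries in key order. */
static void flush_pending_inserts(PendingEntry *buf, size_t n) {
    qsort(buf, n, sizeof(PendingEntry), cmp_entries);
    for (size_t i = 0; i < n; i++) {
        /* Real code would descend the btree here; adjacent keys land
         * on the same leaf page, so the descents stay cache-hot. */
        printf("insert key=%ld tid=%ld\n", buf[i].key, buf[i].heap_tid);
    }
}

int main(void) {
    PendingEntry buf[4];
    size_t n = 0;
    defer_index_insert(buf, &n, 42, 1);
    defer_index_insert(buf, &n, 7, 2);
    defer_index_insert(buf, &n, 99, 3);
    defer_index_insert(buf, &n, 7, 4);
    flush_pending_inserts(buf, n);
    return 0;
}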
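
And an equally schematic sketch of the lazy-merging idea -- again all names
invented, and certainly not the paper's actual algorithm: inserts accumulate
in a small sorted side buffer, and when it fills, the buffer is merged into
the main sorted run in a single ordered pass, so the expensive work becomes
sequential. Lookups would have to consult both structures, which hints at
why it's the trickier option.

/*
 * Toy sketch only: a sorted main run plus a side buffer that is merged
 * in lazily, amortizing random insert I/O into sequential merge passes.
 */
#include <stdlib.h>

#define BUF_CAP 256

typedef struct {
    long *keys;           /* main index, kept sorted */
    size_t n;
    long buf[BUF_CAP];    /* side buffer for deferred inserts */
    size_t buf_n;
} LazyIndex;

static int cmp_long(const void *a, const void *b) {
    long x = *(const long *) a, y = *(const long *) b;
    return (x > y) - (x < y);
}

static void lazy_merge(LazyIndex *ix) {
    qsort(ix->buf, ix->buf_n, sizeof(long), cmp_long);
    long *merged = malloc((ix->n + ix->buf_n) * sizeof(long));
    size_t i = 0, j = 0, k = 0;
    /* Classic two-way merge: one sequential pass over the main run. */
    while (i < ix->n && j < ix->buf_n)
        merged[k++] = (ix->keys[i] <= ix->buf[j]) ? ix->keys[i++] : ix->buf[j++];
    while (i < ix->n)     merged[k++] = ix->keys[i++];
    while (j < ix->buf_n) merged[k++] = ix->buf[j++];
    free(ix->keys);
    ix->keys = merged;
    ix->n = k;
    ix->buf_n = 0;
}

static void lazy_insert(LazyIndex *ix, long key) {
    ix->buf[ix->buf_n++] = key;
    if (ix->buf_n == BUF_CAP)
        lazy_merge(ix);   /* amortize: merge only when the buffer fills */
}

int main(void) {
    LazyIndex ix = {0};
    for (long k = 1000; k > 0; k--)
        lazy_insert(&ix, k);
    if (ix.buf_n > 0)
        lazy_merge(&ix);  /* final flush of whatever is still buffered */
    free(ix.keys);
    return 0;
}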