Re: very very slow inserts into very large table


On 16/07/12 20:08, Claudio Freire wrote:
> On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton <mthornton@xxxxxxxxxx> wrote:
>> 4. The most efficient way for the database itself to do the updates would be
>> to first insert all the data in the table, and then update each index in
>> turn, having first sorted the inserted keys in the appropriate order for
>> that index.
> Actually, it should create a temporary index btree and merge[0] them.
> Only worth it if there are really a lot of rows.
>
> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf
I think 93 million rows would qualify as a lot. However, does any available database (commercial or open source) actually use this optimisation?
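For anyone following along, the two strategies above can be sketched in a few lines of Python, with sorted lists standing in for B-tree leaf order. All names and sample values here are illustrative, not anything a real database exposes: sorting the new keys first turns the index update into sequential work, and Claudio's refinement treats those sorted keys as a temporary index that is merged with the existing one in a single pass.

```python
import heapq

# Keys already present in one index, in index (sorted) order.
existing_index = list(range(0, 1000, 2))

# Keys of freshly inserted rows, arriving in arbitrary order.
new_rows = [531, 7, 240, 999, 12]

# Point 4 from Mark's list: sort the inserted keys once, per index,
# so the index is touched in key order rather than insertion order.
new_keys = sorted(new_rows)

# Claudio's refinement: treat the sorted new keys as a temporary
# index and merge the two sorted runs sequentially, instead of
# descending the tree once for every new key.
merged_index = list(heapq.merge(existing_index, new_keys))

assert merged_index == sorted(existing_index + new_rows)
```

The win is the same in both cases: random descents into a 93-million-row btree become sequential scans, which is why it only pays off when the batch of new rows is large.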

Mark



--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

