Re: very very slow inserts into very large table

On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton <mthornton@xxxxxxxxxx> wrote:

Every insert updates four indexes, so at least 3 of those will be in random order. The indexes don't fit in memory, so all those updates will involve reading most of the relevant b-tree pages from disk (or at least the leaf level). A total of 10ms of random read from disk (per inserted row) wouldn't surprise me ... which adds up to more than 10 days for your 93 million rows.
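(As a rough check of that estimate: 93,000,000 rows × 10 ms of random I/O per row = 930,000 seconds, or about 10.8 days.)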

Which is the long way of saying that you will likely benefit from partitioning that table into a number of smaller tables, especially if queries on it tend to touch only a subset of the data that can be confined to a few partitions. At the very least, inserts will be faster, because each individual index will be smaller. And if your queries can be constrained to a subset of partitions, you should see improved performance on selects as well.
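
A minimal sketch of what that could look like, using the inheritance-based partitioning PostgreSQL offered at the time. The table and column names (measurements, logged_at) are made up here for illustration, and a real setup would add one child table and one routing branch per date range:

    -- Parent table: holds no rows itself, only defines the schema.
    CREATE TABLE measurements (
        id         bigserial,
        logged_at  timestamptz NOT NULL,
        value      double precision
    );

    -- One child per month. The CHECK constraint is what lets the planner
    -- (with constraint_exclusion = partition, the default since 8.4)
    -- skip partitions that cannot match a query's WHERE clause.
    CREATE TABLE measurements_2012_07 (
        CHECK (logged_at >= '2012-07-01' AND logged_at < '2012-08-01')
    ) INHERITS (measurements);

    CREATE INDEX ON measurements_2012_07 (logged_at);

    -- Route inserts on the parent into the matching child
    -- (only one branch shown; add one per partition).
    CREATE OR REPLACE FUNCTION measurements_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        IF NEW.logged_at >= '2012-07-01' AND NEW.logged_at < '2012-08-01' THEN
            INSERT INTO measurements_2012_07 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for %', NEW.logged_at;
        END IF;
        RETURN NULL;  -- the row has already been stored in the child table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER measurements_insert
        BEFORE INSERT ON measurements
        FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();

Each child's indexes then cover only its own slice of the data, so the b-tree pages an insert has to touch are far more likely to already be in cache, and selects that constrain logged_at scan only the relevant partitions.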

--sam


