Re: Performance on Bulk Insert to Partitioned Table

Charles,

* Charles Gomes (charlesrg@xxxxxxxxxxx) wrote:
> I’m doing 1.2 Billion inserts into a table partitioned in
> 15.

Do you end up having multiple threads writing to the same underlying
tables?  If so, I've seen that problem before.  Look at pg_locks while
things are running and see if there are 'extend' locks that aren't being
granted immediately.
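A query along these lines (a sketch against the pg_locks system view;
it assumes the standard locktype/granted columns) will show extension
locks that writers are waiting on:

```sql
-- List relation-extension locks that have been requested but not yet
-- granted; each row is a backend blocked trying to grow a relation.
SELECT l.relation::regclass AS rel,
       l.pid,
       l.mode,
       l.granted
FROM pg_locks l
WHERE l.locktype = 'extend'
  AND NOT l.granted;
```

If this returns rows repeatedly while the load is running, extension-lock
contention is a likely culprit.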

Basically, PG takes a per-relation lock to extend the relation (by a
mere 8KB), and that lock blocks other writers.  If there's a lot of
contention around that lock, you'll get poor performance, and it will
be faster to have independent threads writing directly to the
underlying tables.  I doubt rewriting the trigger in C will help if
the problem is the extend lock.
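To illustrate what "writing directly to the underlying tables" means
(table and column names here are hypothetical), each loader thread
bypasses the parent table's routing trigger and targets its own child
partition:

```sql
-- Thread 7 inserts straight into its assigned child table instead of
-- going through the parent table's partitioning trigger.
INSERT INTO measurements_p07 (id, ts, val)
VALUES (123, now(), 42.0);
```

The point is that the application picks the partition itself, so each
thread contends only on its own child table's extension lock rather
than funneling everything through the trigger on the parent.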

If you do get this working well, I'd love to hear what you did to
accomplish that.  Note also that you can get bottlenecked on the WAL
data, unless you've taken steps to avoid WAL logging.
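One common way to avoid WAL for a bulk load (a sketch, not necessarily
what's meant above; the table name is hypothetical and this requires
wal_level = minimal in postgresql.conf): create or truncate the target
table in the same transaction as the COPY, which lets PostgreSQL skip
WAL-logging the loaded data.

```sql
-- With wal_level = minimal, a table created or truncated in the same
-- transaction as the bulk load can skip WAL for the copied rows.
BEGIN;
TRUNCATE measurements_p07;
COPY measurements_p07 FROM '/tmp/part07.csv' WITH (FORMAT csv);
COMMIT;
```

The trade-off is that such a load isn't replayable from WAL, so it's
only appropriate when the data can be re-loaded from the source files
after a crash.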

	Thanks,

		Stephen
