Re: Big data INSERT optimization - ExclusiveLock on extension of the table


> 1. rename table t01 to t02
OK...
> 2. insert into t02 1M rows in chunks for about 100k
Why not just insert into t01??

Because of CPU utilization: the load runs faster when it is split into chunks.

> 3. from t01 (previously loaded table) insert data through stored procedure
But you renamed t01 so it no longer exists???
> to b01 - this happens parallel in over a dozen sessions
b01?

That's another table, a permanent one.

> 4. truncate t01
Huh??

The data were inserted into permanent storage, so the temporary table can be
truncated and reused.

OK, maybe the exact process is not so important. Let's say the table is loaded,
then the data are fetched and reloaded into another table through a stored
procedure (with its own logic), then the table is truncated and the process
starts again. The most important part is that the ExclusiveLocks are held for
roughly 1-5 seconds.
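For illustration, the chunked-load stage described above can be sketched as
follows. The table name (t02) comes from the thread, but the helper functions
and the inlined VALUES are assumptions for the sketch, not the poster's actual
code; the point is that splitting one 1M-row INSERT into ~100k-row statements
keeps each statement, and therefore each stretch of lock holding, shorter:

```python
# Sketch of the chunked-INSERT stage (step 2 in the thread).  The
# batch size and helpers are illustrative assumptions, not the
# poster's actual code.

def chunks(rows, size=100_000):
    """Yield successive batches of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def build_insert(table, batch):
    """Build one multi-row INSERT statement for a batch.  Values are
    inlined here for illustration only; real code should use
    parameter binding through a driver such as psycopg2."""
    values = ", ".join("({})".format(v) for v in batch)
    return "INSERT INTO {} (val) VALUES {};".format(table, values)

# 1,000,000 rows become 10 separate 100k-row statements, so no
# single statement holds the relation-extension lock for the whole
# load.
statements = [build_insert("t02", batch)
              for batch in chunks(list(range(1_000_000)))]
print(len(statements))  # 10
```

Whether 100k is the right batch size depends on row width and I/O; the
trade-off is fewer, longer lock holds versus more per-statement overhead.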




--
View this message in context: http://postgresql.nabble.com/Big-data-INSERT-optimization-ExclusiveLock-on-extension-of-the-table-tp5916781p5917136.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


