Re: concurrent inserts into two separate tables are very slow

Scott Marlowe wrote:
On Jan 5, 2008 9:00 PM, Sergei Shelukhin <realgeek@xxxxxxxxx> wrote:
Hi. Running postgres 8.2 on debian.
I've noticed that concurrent inserts (archiving) of large batches of data
into two completely unrelated tables are many times slower than the
same inserts done in sequence.
Is there any way to speed them up apart from buying faster HDs/
changing RAID configuration?

What method are you using to load these data?  Got a short example
that illustrates what you're doing?

The basic structure is as follows: there are several tables of transaction data that is kept for one month only. The data comes from several sources in different formats and is loaded by a custom script. For each source, the script:

- creates an import table with the same schema as the main table and loads the source data into it;
- deletes the month-old data from the main table;
- finds rows in the main table that duplicate rows in the import table, using some specific criteria, and deletes those too. To make use of the indexes, a second temp table with a single int id column is created and populated with one INSERT ... SELECT of the transaction ids that are duplicated between the main and import tables, and then DELETE FROM pages WHERE id IN (SELECT id FROM 2nd-temp-table) is run;
- inserts the remainder of the import table into the main table.

(A rough SQL sketch of one load cycle is below.) There are several of these load processes, each working the same way against a different target table. Run in sequence, they take about 20 minutes on average to complete. Run in parallel, however, they can take up to 3 hours... I was wondering if this is purely an HD bottleneck, given that there's plenty of CPU and RAM available and postgres is configured to use it.
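For illustration, one load cycle looks roughly like the SQL below. The names import_pages, dup_ids, created_at and the join condition are made up for the example; the real script creates the import table dynamically per source, and the duplicate criteria are more involved than a single column match.

    -- import table with the same schema as the main table;
    -- the source data is loaded into it here (method varies per source)
    CREATE TABLE import_pages (LIKE pages);

    -- drop the month-old data from the main table
    DELETE FROM pages
    WHERE created_at < now() - interval '1 month';

    -- collect ids of main-table rows duplicated in the import,
    -- so the delete can use the indexes
    CREATE TEMP TABLE dup_ids (id int);
    INSERT INTO dup_ids (id)
        SELECT p.id
        FROM pages p
        JOIN import_pages i ON i.transaction_id = p.transaction_id;

    DELETE FROM pages WHERE id IN (SELECT id FROM dup_ids);

    -- push the remaining imported rows into the main table
    INSERT INTO pages SELECT * FROM import_pages;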


