2013/10/27 Robert James <srobertjames@xxxxxxxxx>:
> On 10/27/13, Thomas Kellerer <spam_eater@xxxxxxx> wrote:
>> Robert James wrote on 27.10.2013 20:47:
>>> I'm using Postgres for data analysis (interactive and batch). I need
>>> to focus the analysis on a subset of one table, and, for both
>>> performance and simplicity, have a function which loads that subset
>>> into another table (DELETE FROM another_table; INSERT INTO
>>> another_table SELECT ...).
>>>
>>> Oddly enough, although the SELECT itself is very quick (< 1 s), the
>>> DELETE and INSERT can take over a minute! I can't figure out why.
>>> another_table is simple: it has only 7 fields. Two of those fields
>>> are indexed, each with a standard single-column index. There are no
>>> triggers on it.
>>>
>>> What is the cause of this behavior? What should I do to make this
>>> faster? Is there a recommended workaround?
>>>
>>> (I'm hesitant to drop another_table and recreate it each time, since
>>> many views depend on it.)
>>
>> DELETE can be quite a lengthy operation, especially with a large
>> number of rows.
>>
>> If you use TRUNCATE instead, it will be *much* quicker, with the
>> additional benefit that if you INSERT the rows in the same
>> transaction, the INSERT will require much less I/O because it is
>> not logged.
>>
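
For the archives, the pattern Thomas is suggesting looks something
like this (the SELECT stands for whatever subset query you already
have):

  BEGIN;
  TRUNCATE another_table;
  INSERT INTO another_table
  SELECT ... ;  -- your existing subset query
  COMMIT;

Two caveats: as far as I know, the WAL-skipping part only applies
when wal_level = minimal, and TRUNCATE takes an ACCESS EXCLUSIVE
lock, so anything reading another_table will block until the COMMIT.
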
> Changing DELETE to TRUNCATE and putting it all in a transaction
> brought the time down to 40 seconds. But this is still awfully slow
> when the SELECT itself takes under a second.
How many rows are being inserted?
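
If it's millions of rows, 40 seconds may simply be the cost of
writing the heap plus maintaining the two indexes. You can see where
the time goes by timing the statement itself, e.g.:

  EXPLAIN ANALYZE
  INSERT INTO another_table
  SELECT ... ;  -- your existing subset query

Note that EXPLAIN ANALYZE actually executes the INSERT, so run it
inside a transaction you can roll back.
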
> Is there another problem here? Perhaps something to do with
> triggering autovacuum?
>
> Or should I be using a different type of table for work tables? (A
> RAM-only table?)
You could use a TEMP or UNLOGGED table, depending on how long you need
it to stay around (for these types of tables, data won't be forced to
disk before returning from COMMIT).
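
Rough sketches of both, with invented names; note that a view created
on a temp table becomes temporary itself, so your existing permanent
views couldn't reference one, which makes UNLOGGED the likelier fit
here:

  -- Private to this session, dropped automatically at session end:
  CREATE TEMP TABLE work_subset AS
  SELECT ... ;  -- your existing subset query

  -- Visible to all sessions and skips WAL, but emptied after a crash:
  CREATE UNLOGGED TABLE work_subset
    (LIKE another_table INCLUDING INDEXES);
  INSERT INTO work_subset
  SELECT ... ;  -- your existing subset query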