I am using the C library interface (libpq), and for these particular
transactions I prepare the statements up front with PREPARE. Then, as
requests arrive, I issue a BEGIN, followed by at most 300 EXECUTEs, and
then a COMMIT. That is the general scenario. What batch size beyond 300
should I try?
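For reference, the loop looks roughly like this (a minimal sketch; the
table name, statement name, and column layout are placeholders, and error
handling is trimmed):

```c
/* Sketch of the batch described above: one prepared INSERT executed
 * up to `nrows` times inside a single transaction (libpq).
 * Table/statement names are placeholders for illustration. */
#include <stdio.h>
#include <libpq-fe.h>

static void batch_insert(PGconn *conn, int nrows)
{
    /* Prepared once at startup in the real code. */
    PGresult *res = PQprepare(conn, "ins",
                              "INSERT INTO mytable(a, b) VALUES ($1, $2)",
                              2, NULL);
    PQclear(res);

    PQclear(PQexec(conn, "BEGIN"));
    for (int i = 0; i < nrows; i++) {   /* nrows is at most 300 today */
        char a[32], b[32];
        const char *params[2] = { a, b };
        snprintf(a, sizeof a, "%d", i);
        snprintf(b, sizeof b, "%d", i * 2);
        PQclear(PQexecPrepared(conn, "ins", 2, params, NULL, NULL, 0));
    }
    PQclear(PQexec(conn, "COMMIT"));
}
```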
Also, how might COPY (which involves file I/O) improve the
above scenario?
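As I understand it, COPY need not involve an intermediate file at all:
with libpq you can stream rows from memory using COPY ... FROM STDIN,
roughly like this (again a sketch with placeholder names, error handling
trimmed):

```c
/* Sketch: streaming rows with COPY ... FROM STDIN via libpq, with no
 * intermediate file. Table/column names are placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

static void copy_rows(PGconn *conn, int nrows)
{
    PGresult *res = PQexec(conn, "COPY mytable(a, b) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN) {
        /* handle error */
    }
    PQclear(res);

    for (int i = 0; i < nrows; i++) {
        char line[64];
        int len = snprintf(line, sizeof line, "%d\t%d\n", i, i * 2);
        PQputCopyData(conn, line, len);  /* tab-separated, newline-ended */
    }
    PQputCopyEnd(conn, NULL);            /* NULL: no error, finish the copy */
    PQclear(PQgetResult(conn));          /* collect the final COPY result */
}
```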
Thanks.
----- Original Message -----
From: James Mansion <james@xxxxxxxxxxxxxxxxxxxxxx>
To: andrew klassen <aptklassen@xxxxxxxxx>
Cc: pgsql-performance@xxxxxxxxxxxxxx
Sent: Wednesday, June 4, 2008 3:20:26 PM
Subject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows
andrew klassen wrote:
> I'll try adding more threads to update the table as you suggest.
You could try materially increasing the update batch size too. As an
exercise, you could see what the performance of COPY is by backing out
the data and reloading it from a suitable file.