Re: performance for high-volume log insertion

On Thu, 23 Apr 2009, Thomas Kellerer wrote:

> Out of curiosity I did some tests through JDBC.

> Using a single-column (integer) table and re-using a prepared statement, it took about 7 seconds to insert 100000 rows with JDBC's batch interface and a batch size of 1000.


As a note for non-JDBC users, the JDBC driver's batch interface executes multiple statements in a single network round trip. This is something you can't get in libpq, so bear that in mind when comparing results.
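
For readers who haven't used it, here's a minimal sketch of what that batch interface looks like from application code. The connection URL, table, and column names are placeholders rather than anything from Thomas's test, and error handling is omitted:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO log_test (id) VALUES (?)")) {
                final int batchSize = 1000;
                for (int i = 0; i < 100000; i++) {
                    ps.setInt(1, i);
                    ps.addBatch();
                    if ((i + 1) % batchSize == 0) {
                        ps.executeBatch();  // send the accumulated statements
                    }
                }
                ps.executeBatch();          // flush any leftover rows
            }
            conn.commit();
        }
    }
}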

> I also played around with the batch size. Going beyond 200 didn't make a big difference.


Regardless of the size of the batch passed to the JDBC driver, the driver breaks it up into internal sub-batches of 256 statements to send to the server. It does this to avoid a network deadlock caused by sending too much data to the server without reading anything back. If the driver were written differently it could handle this better and send the full batch at once, but at the moment that's not possible, and we're hoping the gains beyond this size aren't too large.
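
If you want to see how the application-level batch size interacts with that, a rough timing harness around the same insert loop could look like the sketch below. timeInsertMillis is a hypothetical helper, and the absolute numbers will vary a lot with hardware, network latency, and fsync settings:

import java.sql.Connection;
import java.sql.PreparedStatement;

class BatchSizeTiming {
    // Hypothetical helper: run the insert loop from the previous sketch
    // at a given batch size and report elapsed wall-clock time.
    static long timeInsertMillis(Connection conn, int batchSize, int rows) throws Exception {
        conn.setAutoCommit(false);
        long start = System.nanoTime();
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO log_test (id) VALUES (?)")) {
            for (int i = 0; i < rows; i++) {
                ps.setInt(1, i);
                ps.addBatch();
                if ((i + 1) % batchSize == 0) {
                    ps.executeBatch();
                }
            }
            ps.executeBatch();
        }
        conn.commit();
        return (System.nanoTime() - start) / 1000000;
    }

    // Example comparison:
    //   for (int size : new int[] {50, 100, 200, 1000, 10000}) {
    //       System.out.println(size + ": "
    //           + timeInsertMillis(conn, size, 100000) + " ms");
    //   }
    // Past a couple hundred the times should flatten out, which is
    // consistent with the driver re-chunking at 256 internally.
}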

Kris Jurka

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
