Jens Schipkowski wrote:
Thanks a lot to all for your tips.
Of course, I am now doing all the INSERTs inside a single transaction. The
cost per INSERT dropped from 30 ms to 3 ms, and the improvement factor
matches the hint from Brian Hurt.
Sorry, I forgot to mention we are using PostgreSQL 8.1.4.
Thanks for the code snippet posted by mallah. It looks like you are
using prepared statements, which are not available to us.
But I will check our database access layer to see whether a workaround is
possible, because this approach looks clean and quick to me.
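For the record, a prepared-statement loop might look roughly like the sketch
below. This is only a guess at what mallah's snippet does (it is not quoted
here); the Connection variable con and the iteration over the HashMap values
are assumptions as well:

// rough sketch, not the actual application code: assumes an open
// java.sql.Connection "con" and the HashMap "map" from the original post
con.setAutoCommit(false);                   // one transaction for the whole batch
PreparedStatement ps = con.prepareStatement(
    "INSERT INTO tblfoo(foo,bar) VALUES (?,?)");
for (Iterator it = map.values().iterator(); it.hasNext(); ) {
    ps.setObject(1, it.next());             // column types are assumed here
    ps.setObject(2, CONST.BAR);
    ps.addBatch();                          // queue the row instead of executing it right away
}
ps.executeBatch();                          // ship all queued rows
con.commit();
ps.close();

The statement is prepared once, and addBatch()/executeBatch() cuts down on
per-row round trips to the server.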
regards
Jens Schipkowski
On Mon, 11 Dec 2006 17:53:52 +0100, Guillaume Cottenceau <gc@xxxxxx>
wrote:
"Jens Schipkowski" <jens.schipkowski 'at' apus.co.at> writes:
Hello!
In our JAVA application we do multiple inserts to a table with data from
a HashMap. Due to the poor database access implementation - done by
another company (we got the job to enhance the software) - we cannot
use prepared statements. (We are not allowed to change the database
access code!)
First, we tried to fire one INSERT statement per record to insert.
This costs 3 ms per row, which is too slow because we normally insert
10,000 records, which results in 30,000 ms just for the inserts.
for (Iterator it = map.values().iterator(); it.hasNext(); ) {
    sql = "INSERT INTO tblfoo(foo,bar) VALUES("
          + it.next() + "," + CONST.BAR + ");";
    stmt.executeUpdate(sql);   // one statement (and one implicit transaction) per row
}
You should try to wrap that into a single transaction. PostgreSQL
waits for I/O write completion after each INSERT, as each one is
implicitly run in its own transaction. Maybe the added performance
would be satisfactory for you.
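In JDBC terms that is mostly a matter of switching autocommit off around
the loop; a rough sketch only (the names con, stmt and map are assumptions,
not the actual application code):

con.setAutoCommit(false);          // stop committing (and waiting on I/O) after every INSERT
Statement stmt = con.createStatement();
for (Iterator it = map.values().iterator(); it.hasNext(); ) {
    stmt.executeUpdate("INSERT INTO tblfoo(foo,bar) VALUES("
        + it.next() + "," + CONST.BAR + ")");
}
con.commit();                      // a single I/O wait for the whole batch
stmt.close();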
--
APUS Software GmbH
This link might be what you are looking for; it has some information
about implementing COPY in the JDBC driver. Check the reply message as
well.
http://archives.postgresql.org/pgsql-jdbc/2005-04/msg00134.php
Another solution might be to have Java dump the contents of the HashMap
to a CSV file and load it through psql with a COPY command.
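Roughly, that route could look like the sketch below; the file path, the
single data column taken from the HashMap values, and the psql call are only
illustrative assumptions:

// sketch only: dump the HashMap values to a CSV file ...
PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("/tmp/tblfoo.csv")));
for (Iterator it = map.values().iterator(); it.hasNext(); ) {
    out.println(it.next() + "," + CONST.BAR);
}
out.close();
// ... then bulk-load it in one shot from the shell, e.g.:
//   psql -c "\copy tblfoo(foo,bar) from '/tmp/tblfoo.csv' with csv" yourdb

COPY pushes the whole file in a single statement, so the per-row INSERT
overhead disappears entirely.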
Good luck,
Nick