Ilja Golshtein wrote:
Hello!
One important use case in my libpq-based application (PostgreSQL 8.1.4) is a kind of bulk data load.
Currently it is implemented as a series of plain INSERTs
(using the binary form of PQexecParams), and the problem is that it is pretty slow.
I've tried playing with batches and with constructions
like INSERT (SELECT .. UNION ALL SELECT ..) to improve performance, but I am not satisfied with the results.
Now I am trying to figure out whether it is possible to use COPY FROM STDIN instead of INSERT when I have to insert, say, more than 100 records at once.
Hints are highly appreciated.
The only limitation mentioned in the manual concerns Rules, and I don't care about that since I don't use Rules.
Am I going to run into any other problems (concurrency, reliability, compatibility, whatever) along the way?
Many thanks.
Using COPY FROM STDIN is much faster than INSERTs (I am sure someone out
there has timings to compare; I don't have any on hand).
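From libpq the rough shape is: start the COPY with PQexec, stream rows with PQputCopyData, then finish with PQputCopyEnd and check the final result. A minimal sketch (the table "items" and its columns are made up for illustration; this uses text-format COPY, so tab, backslash, and newline characters inside values would need escaping):

/* Minimal sketch: bulk load via COPY FROM STDIN with libpq.
 * Assumes an open connection and a hypothetical table:
 *   CREATE TABLE items (id integer, name text);
 * Text-format COPY: one tab-separated line per row, newline-terminated.
 */
#include <stdio.h>
#include <libpq-fe.h>

static int copy_rows(PGconn *conn, int nrows)
{
    PGresult *res = PQexec(conn, "COPY items (id, name) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN) {
        fprintf(stderr, "COPY did not start: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    for (int i = 0; i < nrows; i++) {
        char line[256];
        int len = snprintf(line, sizeof(line), "%d\titem_%d\n", i, i);
        if (PQputCopyData(conn, line, len) != 1) {
            PQputCopyEnd(conn, "client-side failure");  /* abort the COPY */
            return -1;
        }
    }

    if (PQputCopyEnd(conn, NULL) != 1)   /* NULL errormsg = end normally */
        return -1;

    res = PQgetResult(conn);             /* final status of the COPY */
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
    if (ok != 0)
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
    PQclear(res);
    return ok;
}

One thing to keep in mind: the whole COPY is a single statement, so it either loads all the rows or none of them - a single bad row makes the server abort the entire COPY, which is different from per-row error handling with individual INSERTs.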
Sounds like you're working with an existing database - if you were starting
from scratch (loading data into an empty database) then there are
other things that can help too, such as creating indexes only after the data is loaded.
--
Shane Ambler
Postgres@xxxxxxxxxxxxxxxx
Get Sheeky @ http://Sheeky.Biz