On 12/21/2012 03:29 PM, Merlin Moncure wrote:
> How you attack this problem depends a lot on whether all the data you want to insert is available at once or you have to wait for it from some actor on the client side. The purpose of the asynchronous API is to allow client-side work to continue while the server is busy with the query.
The client has very little work to do until the next INSERT.
> So they would only help in your case if there were some kind of other processing you needed to do to gather the data and/or prepare the queries. Maybe then you'd PQsend multiple insert statements with a single call.
I want to use parameterized queries, so I'll have to create an INSERT statement that inserts multiple rows. Given that it's still stop-and-wait (even with PQsendQueryParams), I can get through at most one batch per RTT, so the number of rows per batch would have to be rather large for a cross-continental bulk load. It's probably doable for local bulk loading.
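Something like this is what I have in mind. It's only a rough sketch, with a made-up table t(a int, b text) and an already-open connection; it packs all the rows of one batch into a single parameterized INSERT, so each round trip carries a whole batch:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Build "INSERT INTO t (a, b) VALUES ($1,$2),($3,$4),..." and send all
 * parameter values as text in one round trip.  Returns 1 on success. */
static int
insert_batch(PGconn *conn, const char **avals, const char **bvals, int nrows)
{
    char sql[8192];                     /* generous for small batches */
    int off = snprintf(sql, sizeof sql, "INSERT INTO t (a, b) VALUES ");
    const char **params = malloc(2 * nrows * sizeof *params);

    for (int i = 0; i < nrows; i++) {
        off += snprintf(sql + off, sizeof sql - off, "%s($%d,$%d)",
                        i ? "," : "", 2 * i + 1, 2 * i + 2);
        params[2 * i]     = avals[i];   /* text-format parameters */
        params[2 * i + 1] = bvals[i];
        if (off >= (int) sizeof sql) {  /* batch too big for the buffer */
            free(params);
            return 0;
        }
    }

    PGresult *res = PQexecParams(conn, sql, 2 * nrows,
                                 NULL,  /* let the server infer types */
                                 params, NULL, NULL, 0);
    int ok = PQresultStatus(res) == PGRES_COMMAND_OK;
    if (!ok)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
    PQclear(res);
    free(params);
    return ok;
}

PQexecParams could just as well be PQsendQueryParams plus PQgetResult if the call must not block, but either way only one batch is in flight per round trip.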
Does the wire protocol support pipelining? The server wouldn't have to do much to implement it. It just has to avoid discarding unexpected bytes arriving after the current frame and queue them for subsequent processing instead.
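If it does, a pipelined client could look roughly like this. This is only a sketch, assuming a libpq that exposes pipeline mode (PQenterPipelineMode and friends) and the same made-up table as above: queue one parameterized INSERT per row, send a single Sync, then collect all the results, so a slow link pays one round trip per batch rather than one per row.

#include <libpq-fe.h>

/* For large batches the connection should be put into non-blocking mode and
 * drained with PQflush, so the send and receive buffers don't both fill up. */
static int
pipelined_inserts(PGconn *conn, const char **avals, const char **bvals, int nrows)
{
    if (!PQenterPipelineMode(conn))
        return 0;

    for (int i = 0; i < nrows; i++) {
        const char *params[2] = { avals[i], bvals[i] };
        if (!PQsendQueryParams(conn, "INSERT INTO t (a, b) VALUES ($1, $2)",
                               2, NULL, params, NULL, NULL, 0))
            return 0;
    }
    PQpipelineSync(conn);               /* sends Sync and flushes the buffer */

    int ok = 1;
    for (;;) {
        PGresult *res = PQgetResult(conn);
        if (res == NULL)                /* NULL separates per-query results */
            continue;
        ExecStatusType st = PQresultStatus(res);
        PQclear(res);
        if (st == PGRES_PIPELINE_SYNC)  /* the whole batch is accounted for */
            break;
        if (st != PGRES_COMMAND_OK)
            ok = 0;                     /* later results show PGRES_PIPELINE_ABORTED */
    }
    PQexitPipelineMode(conn);
    return ok;
}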
(Sorry if this message arrives twice.)

-- 
Florian Weimer / Red Hat Product Security Team