Re: Pipelining INSERTs using libpq

On Fri, Dec 21, 2012 at 4:31 AM, Florian Weimer <fweimer@xxxxxxxxxx> wrote:
> I would like to pipeline INSERT statements.  The idea is to avoid waiting
> for server round trips if the INSERT has no RETURNING clause and runs in a
> transaction.  In my case, the failure of an individual INSERT is not
> particularly interesting (it's a "can't happen" scenario, more or less).  I
> believe this is how the X toolkit avoided network latency issues.
>
> I wonder what's the best way to pipeline requests to the server using the
> libpq API.  Historically, I have used COPY FROM STDIN instead, but that
> requires (double) encoding and some client-side buffering plus heuristics if
> multiple tables are filled.
>
> It does not seem possible to use the asynchronous APIs for this purpose, or
> am I missing something?
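
For reference, a minimal sketch in C of the COPY FROM STDIN path
described above, using libpq's COPY functions.  The connection string,
the "items" table, and its columns are placeholders, and error handling
is abbreviated:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Placeholder connection string. */
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        PGresult *res = PQexec(conn, "COPY items (a, b) FROM STDIN");
        if (PQresultStatus(res) != PGRES_COPY_IN) {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;
        }
        PQclear(res);

        /* One line per row in COPY text format: tab-separated, newline-
         * terminated.  This is the extra encoding step mentioned above:
         * values must be escaped for COPY's text format on top of any
         * application-level formatting. */
        const char *rows[] = { "1\tfoo\n", "2\tbar\n" };
        for (int i = 0; i < 2; i++) {
            if (PQputCopyData(conn, rows[i], (int) strlen(rows[i])) != 1) {
                fprintf(stderr, "PQputCopyData failed: %s", PQerrorMessage(conn));
                break;
            }
        }

        if (PQputCopyEnd(conn, NULL) != 1)
            fprintf(stderr, "PQputCopyEnd failed: %s", PQerrorMessage(conn));

        /* Collect the final result of the COPY command. */
        while ((res = PQgetResult(conn)) != NULL) {
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "COPY did not complete: %s", PQerrorMessage(conn));
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }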

How you attack this problem depends a lot on whether all the data you
want to insert is available at once or you have to wait for it from
some actor on the client side.  The purpose of the asynchronous API is
to allow client-side work to continue while the server is busy with
the query.  So it would only help in your case if there were some
other kind of processing you needed to do to gather the data and/or
prepare the queries.  Even then, you could batch multiple INSERT
statements into a single PQsendQuery call.
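
As a minimal sketch of that idea, assuming the same hypothetical
"items" table and placeholder connection string as above: send a batch
of INSERTs wrapped in a transaction with one PQsendQuery call, then
drain the per-statement results whenever it is convenient.

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Placeholder connection string. */
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* The whole batch travels to the server in a single round trip. */
        const char *batch =
            "BEGIN;"
            "INSERT INTO items (a, b) VALUES (1, 'foo');"
            "INSERT INTO items (a, b) VALUES (2, 'bar');"
            "COMMIT;";

        if (PQsendQuery(conn, batch) != 1) {
            fprintf(stderr, "PQsendQuery failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* The client is free to do other work here (gather or encode the
         * next batch, etc.).  When convenient, read one PGresult per
         * statement in the batch. */
        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL) {
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "statement failed: %s", PQresultErrorMessage(res));
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }

The gap between PQsendQuery and the PQgetResult loop is where the
asynchronous API actually pays off: that is the window in which the
client can prepare the next batch instead of blocking on the server.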

merlin



