On Wed, May 15, 2013 at 11:31 AM, Jorge Arévalo <jorgearevalo@xxxxxxxxxxxx> wrote:
> Hello,
>
> I'd like to know the best way to reduce the number of server round trips in a libpq C app that fetches BLOBs from a remote PostgreSQL server.
>
> About 75% of the time my app uses is spent querying the database. I basically get binary objects (images). I have to fetch all the images from a table. This table can be really big (in number of rows), and each image can be big too.

The #1 thing to make sure of when fetching big blobs is that you are fetching the data in binary format. If you are not, do so before changing anything else (I wrote a library, libpqtypes, to help with that).

> I guess I should go for cursors. If I understood the concept of a "cursor", the query is executed, a result set is generated inside the database server, and the client receives a "pointer" to this result set. You can get all the rows by moving this pointer over the result set, calling the right functions. But you still have to go to the database for each chunk of data. Am I right?

Cursors are a way to page through a query result without fetching all the data at once. They are most useful if you are processing one row at a time on the client side; if the client needs all the data held in memory anyway, cursors only help by reducing the temporary memory demands during the transfer. So it's hard to say whether they are worth using until you describe the client-side requirements a little better.

merlin
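
p.s. a minimal sketch of the binary-fetch point, in plain libpq (no libpqtypes): passing resultFormat = 1 to PQexecParams() makes the server send every result column in binary, so a bytea image arrives as raw bytes instead of hex/escape text. The connection string and the table/column names here (images, id, data) are just placeholders, not anything from your schema.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* placeholder connection string */
    PGconn *conn = PQconnectdb("dbname=mydb");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* last argument = 1 requests binary format for all result columns */
    PGresult *res = PQexecParams(conn,
                                 "SELECT id, data FROM images",
                                 0, NULL, NULL, NULL, NULL, 1);
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }

    for (int i = 0; i < PQntuples(res); i++) {
        /* in binary mode the bytea value is the raw image; no
           PQunescapeBytea() or hex decoding step is needed */
        const char *bytes = PQgetvalue(res, i, 1);
        int         len   = PQgetlength(res, i, 1);
        printf("row %d: %d bytes\n", i, len);
        (void) bytes;   /* write to a file / hand to your decoder here */
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}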
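
and, if cursors do turn out to fit (one row or one small batch processed at a time), a rough sketch of how the round trips look. Each FETCH is one trip to the server, so the batch size is the knob that trades latency against client memory. The batch size of 100, the cursor name, and the table name are illustrative assumptions only.

#include <stdio.h>
#include <libpq-fe.h>

static int check(PGresult *res, PGconn *conn, ExecStatusType expected)
{
    if (PQresultStatus(res) != expected) {
        fprintf(stderr, "command failed: %s", PQerrorMessage(conn));
        return 0;
    }
    return 1;
}

void fetch_all_images(PGconn *conn)
{
    PGresult *res;

    /* cursors only live inside a transaction */
    res = PQexec(conn, "BEGIN");
    if (!check(res, conn, PGRES_COMMAND_OK)) { PQclear(res); return; }
    PQclear(res);

    res = PQexec(conn, "DECLARE img_cur NO SCROLL CURSOR FOR "
                       "SELECT id, data FROM images");
    if (!check(res, conn, PGRES_COMMAND_OK)) { PQclear(res); return; }
    PQclear(res);

    for (;;) {
        /* one round trip per batch; binary results, as above */
        res = PQexecParams(conn, "FETCH 100 FROM img_cur",
                           0, NULL, NULL, NULL, NULL, 1);
        if (!check(res, conn, PGRES_TUPLES_OK)) { PQclear(res); break; }

        int ntuples = PQntuples(res);
        if (ntuples == 0) {          /* cursor exhausted */
            PQclear(res);
            break;
        }
        for (int i = 0; i < ntuples; i++) {
            const char *bytes = PQgetvalue(res, i, 1);
            int         len   = PQgetlength(res, i, 1);
            (void) bytes; (void) len;   /* process one image, then drop it */
        }
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE img_cur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
}

Bigger batches mean fewer round trips but more rows held in memory at once; 100 is only a starting point to tune against your image sizes.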