Fully agree with Laurenz.
LIMIT could in some (limited!) cases be seen as a workaround, but it's far from elegant (what if your end user types the query?).
If 'SingleRowMode' goes row by row, then again it's not a solution, especially since this particular issue applies to rather large row sets.
The only solution is CURSOR-based, which I find an awkward, low-level hack compared to an elegant option supported by the native library.
PostgreSQL has for a good while now been perceived as one of the best (or simply the best!?) available open source DB solutions, so I'm really surprised this functionality is not yet supported...
On Mon, Mar 10, 2014 at 6:58 AM, Albe Laurenz <laurenz.albe@xxxxxxxxxx> wrote:
Daniel Verite wrote:
> matshyeq wrote:
>> [ runs out of memory on the client because all results from a large query are retrieved at once ]
>> "Unfortunately, this is a limitation in the underlying driver (libpq) rather
>> than DBD::Pg itself. There have been talks over the years of supporting
>> this, but nothing concrete yet. Your best bet would be to ask about this on
>> the Postgres lists"
>>
>> Would you consider putting this on the roadmap, so one day it gets improved?
>
> This improvement seems to have actually been made since 9.2 with
> the PQsetSingleRowMode() function:
>
> http://postgresql.org/docs/current/static/libpq-single-row-mode.html

Yes, DBD::Pg could be improved to make use of that; the problem is probably
that the code would have to differentiate between PostgreSQL versions.
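On the libpq side the change is small; a minimal C sketch of using PQsetSingleRowMode() (connection string, table, and column names are placeholders, not from the thread) could look like:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection string is a placeholder -- adjust for your setup. */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Send the query asynchronously ... */
    if (!PQsendQuery(conn, "SELECT id, payload FROM big_table")) {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* ... and switch to single-row mode before the first PQgetResult().
       Each row then arrives as its own PGresult instead of the whole
       result set being buffered in client memory. */
    PQsetSingleRowMode(conn);

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("%s\n", PQgetvalue(res, 0, 1));   /* process one row */
        else if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```

Since PQsetSingleRowMode() only exists from 9.2 on, a driver would indeed have to guard this call by libpq version, which is the differentiation problem mentioned above.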
Your solution of using
SELECT ... OFFSET ? LIMIT 1
in a loop is bound to suck.
First of all, there is no guarantee that the rows will be returned in
the same order each time; see for example
http://www.postgresql.org/docs/current/static/runtime-config-compatible.html#GUC-SYNCHRONIZE-SEQSCANS
Also, unless you operate with an isolation level higher than READ COMMITTED,
the various SELECTs could operate on different data sets.
So you are likely to end up with incorrect results sooner or later
if you use OFFSET and LIMIT without an ORDER BY clause.
Then you will have really bad performance, especially with a large table,
because each SELECT statement will have to start scanning the table again.
The complexity will rise from O(n) to O(n^2).
You can improve on this by using ORDER BY with an index and remembering
the last returned row (get and read http://sql-performance-explained.com/).
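That keyset approach can be sketched like this (table, column, and parameter names are made up for illustration; it assumes an index on the ordering column):

```sql
-- First page: plain ORDER BY with a batch-sized LIMIT.
SELECT id, payload FROM big_table ORDER BY id LIMIT 1000;

-- Subsequent pages: remember the last id returned and seek past it,
-- so each query resumes at the index position instead of rescanning
-- everything that OFFSET would have to skip.
SELECT id, payload FROM big_table
WHERE id > :last_seen_id
ORDER BY id LIMIT 1000;
```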
Finally, you will have a client-server round trip for each row returned.
This is a problem you would also have when using PQsetSingleRowMode().
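A cursor, as mentioned at the top of the thread, also lets you amortize those round trips by fetching in batches rather than row by row; a rough sketch (cursor and table names are placeholders):

```sql
BEGIN;
-- Cursors normally live inside a transaction (unless declared WITH HOLD).
DECLARE big_cur CURSOR FOR SELECT id, payload FROM big_table;

-- Each FETCH is one round trip but returns up to 1000 rows, so client
-- memory stays bounded without paying a round trip per row.
FETCH 1000 FROM big_cur;
FETCH 1000 FROM big_cur;  -- repeat until FETCH returns no rows

CLOSE big_cur;
COMMIT;
```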
Yours,
Laurenz Albe
Thank you,
Kind Regards
~Maciek