Hi,
with a cursor the behaviour is the same, so I would like to ask a more
general question:
My client needs to receive data from a huge join. The time the client
waits before it can fetch the first row is very long. Once retrieval
starts, after about 10 minutes, the client itself is I/O bound, so it
cannot make up for the elapsed time.
My workaround was to build a queue of small joins (assuming the huge
join delivers 10 million rows, I now have 10,000 joins delivering 1,000
rows each).
So the general question is: Is there a better solution than my crude
workaround?
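For illustration, the batching workaround could be expressed as keyset pagination over the join key, fetching one slice per query. This is only a sketch; the table and column names (big_a, big_b, id) and the :last_seen_id placeholder are assumptions, not taken from the thread:

```sql
-- One slice of the batched workaround: each query resumes after the
-- last key seen in the previous slice, so no OFFSET scan is needed.
SELECT a.id, a.payload, b.detail
FROM   big_a a
JOIN   big_b b ON b.a_id = a.id
WHERE  a.id > :last_seen_id   -- key of the last row from the previous slice
ORDER  BY a.id
LIMIT  1000;
```

The client repeats this query, substituting the highest id of each batch, until fewer than 1,000 rows come back.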
Thank you
Hi Kevin,
this is what I need (I think). Hopefully a cursor can operate on a
join. I will read the documentation now.
Thanks!
Björn
On 22.10.2014 at 16:53, Kevin Grittner wrote:
Björn Wittich <Bjoern_Wittich@xxxxxx> wrote:
I do not want the db server to prepare the whole query result at
once, my intention is that the asynchronous retrieval starts as
fast as possible.
Then you probably should be using a cursor.
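[As a hedged sketch of this suggestion: a cursor can indeed be declared over a join, and rows fetched in batches inside a transaction. The table names below are illustrative assumptions, not from the thread:]

```sql
BEGIN;
-- Declare a read-only cursor over the full join; the server can then
-- stream rows instead of materializing the whole result first.
DECLARE big_cur NO SCROLL CURSOR FOR
    SELECT a.id, b.detail
    FROM   big_a a
    JOIN   big_b b ON b.a_id = a.id;

FETCH 1000 FROM big_cur;   -- repeat until no rows are returned

CLOSE big_cur;
COMMIT;
```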
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance