On 10/16/2012 12:40 PM, Craig Ringer wrote:
On 10/16/2012 12:24 PM, Deven Thaker wrote:
Hi,
My application takes a long time (we even see timeouts) when the data to be
fetched from PostgreSQL 9.0.3 is around 1,900,000 records. I am making
improvements at the application level, but is there any performance tuning
I need to do on the database side?
I have not changed any parameters in postgresql.conf, so I am using the
default values.
Any recommendations to improve performance?
Hi
My earlier reply was a tad grumpy; my apologies. The point stands, but
the wording could've been nicer.
It isn't really clear *where* the slowness is. That's why I'm asking for
EXPLAIN (BUFFERS, ANALYZE) results. If the query itself turns out to be
fast, that tells you the problem is somewhere else.
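For example, something along these lines (a minimal sketch; the table and
filter are just placeholders for whatever query your application actually
runs):

    -- run in psql; paste the full output back to the list
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT * FROM your_big_table WHERE some_column = 42;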
What is the client application? What database driver does it use -
PgJDBC? libpq? psqlODBC? npgsql? Something else? What language is it
written in? Does it read the whole result set into memory at once, or
does it use a cursor?
If you're reading the whole result set into memory at once, you might
want to consider using DECLARE and FETCH:
http://www.postgresql.org/docs/current/static/sql-declare.html
so the server hands you the rows in batches instead of one huge result.
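Roughly like this (a sketch only; the table name and batch size are
placeholders you'd adapt to your schema and memory budget):

    BEGIN;
    -- define a server-side cursor over the big query
    DECLARE big_cur CURSOR FOR
      SELECT * FROM your_big_table;
    -- repeat this until it returns zero rows
    FETCH FORWARD 10000 FROM big_cur;
    CLOSE big_cur;
    COMMIT;

Most drivers (PgJDBC with setFetchSize, for instance) can do the
equivalent for you, so check your driver's documentation first.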
--
Craig Ringer