Milan Zamazal wrote:
> My problem is that retrieving sorted data from large tables is
> sometimes very slow in PostgreSQL (8.4.1, FWIW).
>
> I typically retrieve the data using cursors, to display the rows in a UI:
>
>   BEGIN;
>   DECLARE ... SELECT ... ORDER BY ...;
>   FETCH ...;
>   ...
>
> On a newly created table of about 10 million rows, the FETCH command
> takes about one minute by default, with additional delay during the
> subsequent COMMIT command. This is because PostgreSQL uses a
> sequential scan on the table even when there is an index on the
> ORDER BY column. When I force PostgreSQL to perform an index scan
> (e.g. by setting enable_seqscan or enable_sort to off), the FETCH
> response is immediate.
>
> The PostgreSQL manual explains the motivation for sequential scans of
> large tables, and I can understand it. Nevertheless, this behavior
> leads to unacceptably poor performance in my particular case: it is
> important to get the first resulting rows quickly, so they can be
> displayed to the user without delay.

Did you try reducing the cursor_tuple_fraction parameter?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
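[Editor's note: as a sketch of the suggestion above — cursor_tuple_fraction (default 0.1) tells the planner what fraction of a cursor's rows it should expect to be fetched; a smaller value biases it toward fast-start plans such as an index scan instead of a full sort. The table and column names below are placeholders, and 0.01 is just an illustrative value to tune:]

```sql
BEGIN;
-- SET LOCAL limits the change to this transaction.
SET LOCAL cursor_tuple_fraction = 0.01;  -- illustrative value; default is 0.1

-- big_table / sort_col are hypothetical names standing in for the real schema.
DECLARE big_cur CURSOR FOR
    SELECT * FROM big_table ORDER BY sort_col;

FETCH 50 FROM big_cur;  -- first rows should now come back quickly

CLOSE big_cur;
COMMIT;
```

With the low fraction in effect, the planner should prefer walking the index on sort_col rather than sorting the whole table before the first FETCH can return.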