Re: limit clause produces wrong query plan


SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100

	I think pagination is overrated.

If the query produces, say, 100 rows or fewer, fetching all of them will more often than not take about the same time as fetching only a portion. In those cases it is much better to cache the results somewhere (user session, a table, whatever) and paginate over that cache, rather than run the same query many times. This is especially true when a non-indexed sort is involved, because then you have to fetch all the rows anyway. You can also store just the row ids instead of the full rows. There are exceptions, of course.
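A rough sketch of what I mean by caching row ids (the table and column names here are made up for illustration):

```sql
-- Run the expensive, non-indexed sort once, keeping only the ids:
CREATE TEMP TABLE cached_page_ids AS
    SELECT row_number() OVER (ORDER BY some_unindexed_col) AS rn,
           intprimarykey
    FROM bigtable
    WHERE ...;  -- whatever the original filter was

-- Each page is then a cheap, indexed lookup:
SELECT b.*
FROM cached_page_ids c
JOIN bigtable b USING (intprimarykey)
WHERE c.rn BETWEEN 101 AND 200
ORDER BY c.rn;
```

The point is that the heavy sort runs once per search, not once per page view.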

And if the query produces 20,000 results, who is ever going to scroll to page 1257?

The example I posted shows that

SELECT ... FROM bigtable ORDER BY intprimarykey LIMIT 100

is extremely *slow*: a seq scan is performed over the whole bigtable.

That plan is wrong, though. It should use the index on intprimarykey, especially with a LIMIT...
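If the planner keeps picking a seq scan for deep pages, a keyset (a.k.a. "seek") variant of the query sidesteps OFFSET entirely; the `:last_seen_key` placeholder below is hypothetical, standing in for a value the application remembers from the previous page:

```sql
-- Instead of OFFSET, filter on the last primary key seen so far.
-- With an index on intprimarykey this is an index range scan,
-- and its cost stays flat no matter how deep you page.
SELECT ...
FROM bigtable
WHERE intprimarykey > :last_seen_key
ORDER BY intprimarykey
LIMIT 100;
```

With OFFSET n, the executor still has to produce and discard the first n rows, so later pages get progressively slower even when an index is used.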


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
