Hello,

I have a cursor for the query SELECT a FROM table ORDER BY xyz, where the table is extremely large and xyz is a hard-to-evaluate but indexable expression (using GiST). As far as I understand, for a query like SELECT a FROM table ORDER BY xyz LIMIT X, Postgres does not fetch the entire index or table; instead it repeatedly performs a "next" operation on the index, fetching tuples one by one until X tuples have been produced. This means the cost of such a query depends on the size of the output, not on the size of the table or the index. Correct me if this is not true.

My actual case is more complicated: I do not know the value of X in advance. I need to create a cursor, open it, fetch records one by one, and aggregate them; when some condition is met, I stop fetching and return the aggregated value. In practice my procedure usually scans only a small part of the table. But does Postgres understand this usage of indexes and cursors? Will it really use the index and fetch only a small amount of data, or will it read the entire table into memory, creating an in-memory copy of the sorted data? Two things make the latter a problem: the table is large, and the expression in ORDER BY is expensive to evaluate.

--
Regards,
Andrey

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
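[Editor's sketch of the pattern being asked about — all names here are illustrative, not from the post. It assumes a hypothetical table `items` with a point column, since GiST supports ordered index scans through distance operators (the KNN case, e.g. `ORDER BY a <-> constant`); a plain ORDER BY on an arbitrary expression would instead need a btree expression index. The incremental-fetch question can be checked directly with EXPLAIN and a cursor:

```sql
-- Hypothetical table; the column and index names are illustrative only.
CREATE TABLE items (a point);
CREATE INDEX items_a_gist ON items USING gist (a);

-- Check the plan first: an "Index Scan using items_a_gist" (with no Sort
-- node) means tuples can be produced one at a time in index order.
EXPLAIN SELECT a FROM items ORDER BY a <-> point '(0,0)';

-- Cursors must live inside a transaction. Fetch incrementally and stop
-- whenever the aggregation condition is met; rows not fetched are never
-- read from the table.
BEGIN;
DECLARE c CURSOR FOR
    SELECT a FROM items ORDER BY a <-> point '(0,0)';
FETCH 1 FROM c;   -- repeat until the stopping condition is satisfied
CLOSE c;
COMMIT;
```

One relevant planner detail: for cursors the planner optimizes for fast startup rather than total runtime (governed by the `cursor_tuple_fraction` setting, default 0.1), which biases it toward exactly the incremental index-scan plan described above instead of a full sort.]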