Scott,
> And how exactly should it be optimized? If a query is even moderately
> interesting, with a few joins and a where clause, postgresql HAS to
> create the rows that come before your offset in order to assure that
> it's giving you the right rows.
SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100
It should scan the primary key index in order over the first 200 keys and
skip the first 100 of them.
SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 0 LIMIT 100
That should be plenty fast.
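For reference, here is a minimal test setup along the lines of my bigtable
(the payload column and the row count are just made up for illustration):

CREATE TABLE bigtable (
    intprimarykey integer PRIMARY KEY,
    payload       text
);
-- fill with a million rows so the planner has something realistic to work with
INSERT INTO bigtable (intprimarykey, payload)
SELECT i, 'row ' || i FROM generate_series(1, 1000000) AS i;
ANALYZE bigtable;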
The example which I posted shows that
SELECT ... FROM bigtable ORDER BY intprimarykey LIMIT 100
is extremely *slow*: a seq scan is performed over the whole bigtable.
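Anyone who wants to check which plan is chosen can run EXPLAIN ANALYZE on
both forms against a test table like the one above:

EXPLAIN ANALYZE
SELECT * FROM bigtable ORDER BY intprimarykey LIMIT 100;

EXPLAIN ANALYZE
SELECT * FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100;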
> A standard workaround is to use some kind of sequential, or nearly so,
> id field, and then use between on that field.
> select * from table where idfield between x and x+100;
Users can delete and insert arbitrary rows in the table.
This approach requires updating x in every row of the big table after each
insert, delete, or change to the ordering column, and is thus extremely slow.
So I don't understand how it can be used for large tables.
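To illustrate the cost with a hypothetical sequential column seqid kept only
for this BETWEEN-based paging scheme, deleting one row from the middle of the
table would force a renumbering along these lines:

-- hypothetical helper column kept only for BETWEEN-based paging
ALTER TABLE bigtable ADD COLUMN seqid integer;
UPDATE bigtable SET seqid = intprimarykey;  -- initial numbering, for illustration
-- after deleting the row whose seqid was 500000, every later row must be renumbered,
-- so the cost of a single delete grows with the size of the table:
UPDATE bigtable SET seqid = seqid - 1 WHERE seqid > 500000;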
Andrus.