On Thu, Apr 16, 2009 at 11:29:25AM -0400, Tom Lane wrote:
> , a full table indexscan isn't going to be particularly fast in
> any case; it's often the case that seqscan-and-sort is the right
> decision.

Is PG capable of "skipping" over duplicate values using an index?  For
example, if I've got a table like:

  CREATE TABLE foo (
    id INTEGER PRIMARY KEY,
    v1 BOOLEAN
  );

that contains several million rows and I do a query like:

  SELECT DISTINCT v1 FROM foo;

PG should only need to read three tuples from the table (one each for
TRUE, FALSE and NULL, assuming there are no dead rows).  I've had a
look in the TODO, but haven't found anything similar.  This is
obviously only a win when there are few distinct values compared to
the number of rows (a rough sketch of doing the skip by hand follows
below).

--
  Sam  http://samason.me.uk/
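In case it's useful, here's a rough sketch of how the skip could be
emulated by hand with a recursive CTE; it assumes the WITH RECURSIVE
support coming in 8.4 and an index on foo(v1), so treat it as an
illustration rather than a drop-in answer:

  WITH RECURSIVE skip AS (
      -- start from the smallest v1 the index can give us
      (SELECT v1 FROM foo ORDER BY v1 LIMIT 1)
    UNION ALL
      -- then repeatedly jump straight to the next distinct value
      SELECT (SELECT v1 FROM foo WHERE v1 > s.v1 ORDER BY v1 LIMIT 1)
      FROM skip s
      WHERE s.v1 IS NOT NULL
  )
  SELECT v1 FROM skip WHERE v1 IS NOT NULL;

Each iteration does a single index probe instead of reading every row;
note it won't return a NULL v1, so it's not quite equivalent to the
DISTINCT above.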