David Yeu <david.yeu@xxxxxxxxx> writes:
> Our queries essentially fall into the following cases:
>
>  * ? WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;
>  * ? WHERE group_id = ? AND id > ? ORDER BY created_at DESC;
>  * ? WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT 20;
>  * ? WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET ?;

All of those should be extremely cheap if you've got the right indexes,
with the exception of the last one.  Large OFFSET values are never a good
idea, because Postgres always has to scan and discard that many rows.

If you need to fetch successive pages, consider using a cursor with a
series of FETCH commands.  Another possibility, if the data is
sufficiently constrained, is to move the limit point with each new query,
ie instead of OFFSET use something like

    WHERE group_id = ? AND created_at < last-previous-value
      ORDER BY created_at DESC LIMIT 20;

			regards, tom lane
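
[A minimal sketch of the cursor-plus-FETCH approach described above.  The
"messages" table name and the literal group_id value are illustrative
assumptions; only the group_id/created_at columns come from the quoted
queries.]

    BEGIN;

    -- hypothetical table name; 42 stands in for the application's group_id parameter
    DECLARE page_cur CURSOR FOR
        SELECT *
        FROM messages
        WHERE group_id = 42
        ORDER BY created_at DESC;

    FETCH 20 FROM page_cur;   -- first page
    FETCH 20 FROM page_cur;   -- second page, continues where the first left off
    CLOSE page_cur;

    COMMIT;

Each FETCH resumes where the previous one stopped, so later pages don't
have to scan and discard the earlier rows the way a growing OFFSET does.
The cursor only lives as long as the surrounding transaction (or session,
with WITH HOLD), so this suits a paging loop within one connection rather
than stateless page requests.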
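
[And a sketch of the "move the limit point" (keyset) alternative, again
against the assumed "messages" table; the index shown is one guess at what
"the right indexes" would look like for these queries, and the literal
values stand in for the application's parameters.]

    -- supports the equality filter plus the descending sort
    CREATE INDEX messages_group_created_idx
        ON messages (group_id, created_at DESC);

    -- first page
    SELECT *
    FROM messages
    WHERE group_id = 42
    ORDER BY created_at DESC
    LIMIT 20;

    -- subsequent pages: pass the created_at of the last row already shown
    SELECT *
    FROM messages
    WHERE group_id = 42
      AND created_at < '2011-04-27 12:00:00'   -- last-previous-value
    ORDER BY created_at DESC
    LIMIT 20;

One caveat with this sketch: if created_at is not unique within a group,
rows that share the boundary timestamp can be skipped or repeated between
pages; adding a tiebreaker such as id to both the ORDER BY and the
comparison avoids that.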