Hi Bruno,

Good to read that your advice matches the solution I was already considering!
Although I think this is something PostgreSQL should solve internally, I
prefer the WHERE clause over a long-lasting SERIALIZABLE transaction.

Thanks,
Jan

-----Original Message-----
From: Bruno Wolff III [mailto:bruno@xxxxxxxx]
Sent: Tuesday, January 16, 2007 19:12
To: Jan van der Weijde; pgsql-general@xxxxxxxxxxxxxx
Subject: Re: [GENERAL] Performance with very large tables

On Tue, Jan 16, 2007 at 12:06:38 -0600,
  Bruno Wolff III <bruno@xxxxxxxx> wrote:
> Depending on exactly what you want to happen, you may be able to continue
> where you left off using a condition on the primary key, using the last
> primary key value for a row that you have viewed, rather than OFFSET.
> This will still be fast and will not skip rows that are now visible to
> your transaction (or show duplicates when deleted rows are no longer
> visible to your transaction).

I should have mentioned that you also will need to use an ORDER BY clause
on the primary key when doing things this way.
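
For the archives, a minimal sketch of the keyset approach described above.
Table and column names (big_table, id, payload) and the page size are
placeholders, not anything from the original thread:

    -- First page: order by the primary key and limit the batch size.
    SELECT id, payload
    FROM big_table
    ORDER BY id
    LIMIT 50;

    -- Subsequent pages: instead of OFFSET, restart from the last id seen.
    -- The primary key index satisfies both the WHERE and the ORDER BY,
    -- so this stays fast regardless of how deep into the table you are.
    SELECT id, payload
    FROM big_table
    WHERE id > :last_seen_id   -- last primary key value from the previous page
    ORDER BY id
    LIMIT 50;

The same idea should extend to a composite primary key by using a row
comparison, e.g. WHERE (a, b) > (:last_a, :last_b) ORDER BY a, b.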