Thank you. It is true that he wants the first few records quickly and then to continue with the next records. However, even without LIMIT it already takes a very long time before the first record is returned. I reproduced this with a table of 1.1 million records on an XP machine, and in my case it took about 25 seconds before the SELECT returned the first record. I tried it both interactively with pgAdmin and with a C application using a cursor (with hold). Both took about the same time.

Thanks,
Jan van der Weijde

-----Original Message-----
From: Richard Huxton [mailto:dev@xxxxxxxxxxxx]
Sent: Monday, January 15, 2007 12:01
To: Jan van der Weijde
Cc: pgsql-general@xxxxxxxxxxxxxx
Subject: Re: [GENERAL] Performance with very large tables

Jan van der Weijde wrote:
> Hello all,
>
> One of our customers is using PostgreSQL with tables containing
> millions of records. A simple 'SELECT * FROM <table>' takes way too
> much time in that case, so we have advised him to use the LIMIT and
> OFFSET clauses.

That won't reduce the time it takes to fetch millions of rows. It sounds like your customer doesn't want millions of rows at once, but rather a few rows quickly and then more as required. For this you want to use a cursor. You can do this via SQL, or perhaps via your database library.

In SQL:
http://www.postgresql.org/docs/8.2/static/sql-declare.html
http://www.postgresql.org/docs/8.2/static/sql-fetch.html

In pl/pgsql:
http://www.postgresql.org/docs/8.2/static/plpgsql-cursors.html

HTH
--
Richard Huxton
Archonet Ltd
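
A minimal sketch of the cursor approach discussed above, assuming PostgreSQL 8.x syntax; the table name (big_table), cursor name (big_cur), and batch size are illustrative:

    BEGIN;
    -- Declare a cursor over the full result set. WITH HOLD keeps the
    -- cursor usable after COMMIT, matching the "cursor (with hold)"
    -- setup mentioned above; drop it for a plain in-transaction cursor.
    DECLARE big_cur CURSOR WITH HOLD FOR SELECT * FROM big_table;
    COMMIT;

    FETCH 50 FROM big_cur;   -- first batch of rows
    FETCH 50 FROM big_cur;   -- next batch, fetched on demand

    CLOSE big_cur;           -- release the held cursor when done

Note that a WITH HOLD cursor's result set is materialized when the declaring transaction commits, so on a table this size the COMMIT itself can be slow; a plain cursor used inside a single transaction avoids that cost.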