Please reply-all so others can learn and contribute.

On Sun, Jul 29, 2007 at 09:38:12PM -0700, Craig James wrote:
> Decibel! wrote:
> >It's unlikely that it's going to be faster to index scan 2.3M rows than
> >to sequential scan them. Try setting enable_seqscan=false and see if it
> >is or not.
>
> Out of curiosity ... Doesn't that depend on the table? Are all of the data
> for one row stored contiguously, or are the data stored column-wise? If
> it's the former, and the table has hundreds of columns, or a few columns
> with large text strings, then wouldn't the time for a sequential scan
> depend not on the number of rows, but rather the total amount of data?

Yes, the time for a seqscan is mostly dependent on table size and not the
number of rows. But the number of rows plays a very large role in the cost
of an indexscan.
--
Decibel!, aka Jim C. Nasby, Database Architect        decibel@xxxxxxxxxxx
Give your computer some brain candy! www.distributed.net Team #1828
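
For readers following along, the test suggested upthread can be sketched as below. This is a minimal example, not from the original thread; the table and column names (big_table, some_col) are hypothetical stand-ins:

```sql
-- Let the planner choose freely, and note the plan and timing.
EXPLAIN ANALYZE SELECT * FROM big_table WHERE some_col < 1000;

-- Discourage sequential scans so the planner prefers an index scan,
-- then compare the reported actual times between the two plans.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM big_table WHERE some_col < 1000;

-- Restore the default planner behavior for this session.
RESET enable_seqscan;
```

SET only affects the current session, so this comparison is safe to run without touching other connections.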