On Tue, Sep 6, 2011 at 1:31 PM, Anibal David Acosta <aa@xxxxxxxxxxxx> wrote:
> Hi everyone,
>
> My question is: if I have a table with 500,000 rows, and a SELECT of one
> row is returned in 10 milliseconds, and if the table grows to 6,000,000
> rows and everything is OK (statistics, vacuum, etc.), can I suppose that
> the elapsed time will stay near 10?

The problem with large datasets comes not from the index but from the
increased cache pressure. On today's typical servers it's all about cache,
and the fact that disks (at least non-SSD drives) are several orders of
magnitude slower than memory. Supposing you had infinite memory holding
your data files in cache, or infinitely fast disks, looking up a record in
a trillion-row table would still be faster than reading a record from a
hundred-row table that had to fault to a spinning disk to pull up the data.

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
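[A rough back-of-envelope sketch of why the index itself is not the worry: B-tree lookup cost grows with the logarithm of the row count, so a 12x increase in rows barely deepens the tree. The fanout of ~256 keys per page here is a hypothetical round number for illustration, not PostgreSQL's actual page layout.]

```python
import math

def btree_depth(rows, fanout=256):
    """Approximate number of B-tree levels needed to index `rows`
    entries, assuming each internal page holds ~`fanout` keys."""
    return max(1, math.ceil(math.log(rows, fanout)))

# Growing from 500,000 to 6,000,000 rows does not add a level:
print(btree_depth(500_000))    # -> 3
print(btree_depth(6_000_000))  # -> 3
```

Under these assumptions both tables need the same number of page visits per lookup; what changes in practice is how many of those pages are still hot in cache.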