On Wed, Oct 10, 2012 at 10:36 PM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
> On Wed, Oct 10, 2012 at 4:37 PM, Claudio Freire <klaussfreire@xxxxxxxxx> wrote:
>>
>> In my case, if I set it too high, I get impossibly suboptimal plans
>> when an index scan over millions of rows hits the disk way too often
>> and way too randomly. The difference is minutes for a seqscan vs.
>> hours for the index scan. In fact, I prefer setting it too low rather
>> than too high.
>
> There's a corollary for very fast disk subsystems. If you've got, say,
> 40 15krpm disks in a RAID-10, you can get sequential read speeds into
> the gigabytes per second, so sequential page access costs MUCH less
> than random page access, to the point that if seq page access is rated
> 1, random page access should be much higher, sometimes on the order of
> 100 or so.

On the other hand, if you have 40 very busy connections all doing
sequential scans on different tables, they will interfere with each
other and have to divide up the RAID's throughput, while if they are
doing random fetches they will get along nicely on that RAID. So you
have to know how much concurrency of the relevant type you expect to
see.

The default page cost settings already assume that random fetches are
far more likely to be cache hits than sequential fetches are. If that
is not true, then the default random page cost is way too low,
regardless of the number of spindles or the concurrency.

Cheers,

Jeff
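A practical way to explore this trade-off is to try alternative cost settings per session before changing postgresql.conf. This is a minimal sketch; the table name `big_table`, column `indexed_col`, and the value 2.0 are illustrative assumptions, not recommendations:

```sql
-- Experiment with planner page costs inside a transaction so the
-- settings vanish on ROLLBACK (SET LOCAL is transaction-scoped).
BEGIN;
SET LOCAL seq_page_cost = 1.0;     -- PostgreSQL's default
SET LOCAL random_page_cost = 2.0;  -- below the 4.0 default; assumes a
                                   -- warm cache or fast random I/O
-- Compare the chosen plan and actual timings against the defaults
-- (hypothetical table and column, for illustration only):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table WHERE indexed_col = 42;
ROLLBACK;
```

Running the same EXPLAIN ANALYZE under both the default and the candidate settings shows whether the planner flips between seqscan and index scan, and which is actually faster on the hardware and concurrency in question.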