[combining responses to two posts on this thread by lars]

lars <lhofhansl@xxxxxxxxx> wrote:

> On the face of it, though, this looks like Postgres would not be
> that useful as a database that resides (mostly) in the cache.

I've mentioned this in a hand-wavy general sense, but I should have
mentioned specifics ages ago: for a database where the active portion
of the database is fully cached, it is best to set seq_page_cost and
random_page_cost to the same value, somewhere in the 0.1 to 0.05
range. (In your case I would use 0.05.) In highly cached databases I
have sometimes also found it necessary to increase cpu_tuple_cost.
(In your case I might try 0.02.)

This won't directly address the specific issue you've been
investigating in this thread, but it will tend to give you faster
plans for your actual environment without needing to fuss with things
on a query-by-query basis. It may indirectly mitigate the problem at
hand through heavier use of indexes, which would reduce pruning and
hint-bit setting by readers.

> Interesting. When you did your test, did you also find WAL write
> activity when running x the first time after y?

I wasn't able to check that in this quick, ad hoc run.

> Oh, it's just switched off for testing, so that I can control
> when vacuum runs and make sure that it's not skewing the results
> while I am measuring something.

Again, that's an impulse I can certainly understand, but the problem
is that turning autovacuum off skews results in other ways, such as
forcing readers to do maintenance work which might otherwise be done
in a cost-limited background process. And if that didn't happen, you
would be constantly chasing through lots of dead line pointers, which
would hurt performance in another way. It's probably best to consider
autovacuum an integral part of normal database operations and to run
benchmarks like this with it operational.
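For concreteness, here is a sketch of the settings suggested above,
tried at the session level so the effect on plans can be compared
before making a cluster-wide change in postgresql.conf. (The values
are the ones suggested for this cache-resident case, not general
defaults, and the query is a placeholder.)

```sql
-- Session-level experiment with planner cost constants:
SET seq_page_cost = 0.05;
SET random_page_cost = 0.05;  -- equal to seq_page_cost: cached pages
                              -- cost the same whether read sequentially
                              -- or randomly
SET cpu_tuple_cost = 0.02;    -- raised from the 0.01 default

-- Compare the chosen plan against the previous session's plan:
EXPLAIN ANALYZE SELECT ...;   -- your problem query here
```

If the new plans look better across the workload, the same settings
can be made permanent in postgresql.conf and applied with a reload.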
This will also give you an opportunity to tune thresholds and costing
factors to evaluate the impact that such adjustments have on
potential workloads.

> For a fully cached database I would probably want to switch off
> HOT pruning and compaction (which from what we see is done
> synchronously with the select) and leave it up to the asynchronous
> autovacuum to do that. But maybe I am still not quite
> understanding the performance implications.

Code comments indicate that the authors expect the pruning to be a
pretty clear win on multiple reads, although I don't know how much
that was benchmarked.

Jeff does raise a good point, though -- it seems odd that WAL-logging
of this pruning would need to be synchronous. We support asynchronous
commits -- why not use that feature automatically for transactions
where the only writes are of this sort? Which raises an interesting
question -- what happens to the timings if your SELECTs are done with
synchronous_commit = off?

I wonder if it would make any sense to implicitly use async commit
for a transaction which is declared READ ONLY or which never acquires
an XID?

-Kevin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
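[P.S. The synchronous_commit experiment suggested above is cheap to
try at the session level; the query shown is a placeholder for
whichever SELECTs exhibited the slowdown.]

```sql
-- Baseline timing with the default (synchronous commit):
\timing on
SELECT ...;                   -- the SELECT that triggers pruning

-- Now let any WAL from pruning be flushed asynchronously:
SET synchronous_commit = off;
SELECT ...;                   -- same query; compare the timings
```

Since these transactions write nothing a client ever asked to be
durable, synchronous_commit = off risks no visible data loss here; it
only changes when the pruning WAL reaches disk.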