tv@xxxxxxxx wrote:
Anyway, I'm not an expert in this field, but AFAIK something like this already happens - btw, that's the purpose of effective_cache_size.
effective_cache_size probably doesn't do as much as you suspect. It only feeds into one planner computation: estimating whether an index is small enough that its pages can likely be read into memory efficiently, and costing index scans accordingly. It has no impact on actual caching decisions outside of that estimate.
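To make that concrete, here's a rough sketch of what the setting does and doesn't do (table name "t" and index column "id" are just placeholders for illustration). Changing it allocates no memory at all; it only shifts the planner's estimated cost of an index scan, which you can watch with EXPLAIN:

    -- No memory is reserved by this setting; it only changes the planner's cost math.
    SET effective_cache_size = '256MB';
    EXPLAIN SELECT * FROM t WHERE id BETWEEN 1000 AND 2000;

    -- With a much larger value, the same index scan is estimated as cheaper,
    -- because the planner assumes more of the index pages are already in cache.
    SET effective_cache_size = '16GB';
    EXPLAIN SELECT * FROM t WHERE id BETWEEN 1000 AND 2000;

Whether the plan actually changes depends on the rest of the cost model; the point is that this is the only place the value is consulted.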
As for the ideas bouncing around here for adjusting random_page_cost more automatically, I have a notebook with about a dozen different ways to do that which I've come up with over the last few years. The reason no work gets done in this area is that there are no standardized benchmarks of query execution in PostgreSQL being run regularly right now. Bringing up ideas for changing the computation is easy; proving that such a change is positive on enough workloads to be worth considering is the hard part. There is no useful discussion to be had on the hackers list that doesn't start with "here's the mix of benchmarks I intend to test this new model against".
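For anyone unsure what that manual comparison looks like today, here's a minimal sketch of the sort of check a benchmark suite would need to run over a whole mix of queries, not just one (the "orders" table and query are hypothetical):

    -- Compare the planner's choice and the actual runtime under candidate settings.
    SET random_page_cost = 4.0;   -- the shipped default, tuned for spinning disks
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

    SET random_page_cost = 1.1;   -- a common value when data is mostly cached or on SSD
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

Any "automatic" scheme has to win this comparison across a representative workload, which is exactly the benchmarking infrastructure that doesn't exist yet.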
Performance regression testing for the query optimizer is a giant pile of boring work that attracts few volunteers. Nobody gets to do the fun model-change work without doing that first, though. Without some broader benchmarking context, this type of change is guaranteed to be just smacking parameters around to optimize for a single case.
-- 
Greg Smith   2ndQuadrant US   greg@xxxxxxxxxxxxxxx   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support   www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books