On 14.4.2011 01:10, Claudio Freire wrote:
> On Thu, Apr 14, 2011 at 12:19 AM, Tomas Vondra <tv@xxxxxxxx> wrote:
>>
>> Another issue is that when measuring multiple values (processing of
>> different requests), the decisions may be contradictory so it really
>> can't be fully automatic.
>>
>
> I don't think it's soooo dependant on workload. It's dependant on
> access patterns (and working set sizes), and that all can be
> quantified, as opposed to "workload".

Well, think about a database that's much bigger than the available RAM.

Workload A: touches just a very small portion of the database, so the
'active' part actually fits into memory. In this case the cache hit ratio
can easily be close to 99%.

Workload B: touches a large portion of the database, so it hits the drive
very often. In this case the cache hit ratio is usually around
RAM / (size of the database).

So yes, it may be very workload dependent. In the first case you can
actually lower random_page_cost significantly (even down to seq_page_cost)
and it's still going to be quite fast, thanks to the cache. If you do the
same thing with workload B, the database is going to burn.

I'm not saying it's impossible to do some autotuning, but it's a bit
tricky and it's not just about hardware. The workload *is* a very
important part of the equation.

But I have to admit this probably sounds like overengineering. If you can
develop something simple (even something that does not consider the
workload at all), it might be a useful starting point. If I can help you
with this in any way, let me know.

regards
Tomas
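
PS: Just to illustrate how those two numbers interact, here is a minimal
sketch. It reads the shared_buffers hit ratio from pg_stat_database (which
knows nothing about the OS page cache, so it understates the real ratio)
and then blends the default seq_page_cost = 1.0 and random_page_cost = 4.0
by that ratio. The blended "effective" cost is purely my back-of-the-envelope
illustration, not something the planner actually computes:

  -- Rough shared_buffers hit ratio for the current database
  -- (the OS page cache is invisible here, so treat this as a lower bound)
  SELECT blks_hit,
         blks_read,
         round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4)
           AS hit_ratio
  FROM pg_stat_database
  WHERE datname = current_database();

  -- Back-of-the-envelope blend: cached pages ~ seq_page_cost (1.0),
  -- misses ~ random_page_cost (4.0). A 99% hit ratio (workload A)
  -- gives ~1.03, i.e. close to seq_page_cost; a 10% hit ratio
  -- (workload B with RAM/db size ~ 0.1) gives ~3.7.
  SELECT round(hit_ratio * 1.0 + (1 - hit_ratio) * 4.0, 2)
           AS effective_random_cost
  FROM (SELECT blks_hit::numeric / nullif(blks_hit + blks_read, 0)
           AS hit_ratio
        FROM pg_stat_database
        WHERE datname = current_database()) s;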