On Tue, Apr 26, 2011 at 9:49 AM, Claudio Freire <klaussfreire@xxxxxxxxx> wrote:
> On Tue, Apr 26, 2011 at 7:30 AM, Robert Haas <robertmhaas@xxxxxxxxx> wrote:
>> On Apr 14, 2011, at 2:49 AM, Claudio Freire <klaussfreire@xxxxxxxxx> wrote:
>>> This particular factor is not about an abstract and opaque "Workload"
>>> the server can't know about. It's about cache hit rate, and the server
>>> can indeed measure that.
>>
>> The server can and does measure hit rates for the PG buffer pool, but to
>> my knowledge there is no clear-cut way for PG to know whether read() is
>> satisfied from the OS cache or a drive cache or the platter.
>
> Isn't latency an indicator?
>
> If you plot latencies, you should see three markedly obvious clusters:
> OS cache (microseconds), Drive cache (slightly slower), platter
> (tail).

What if the user is using an SSD or ramdisk?

Admittedly, in many cases, we could probably get somewhat useful numbers
this way.  But I think it would be pretty expensive.  gettimeofday() is
one of the reasons why running EXPLAIN ANALYZE on a query is
significantly slower than just running it normally.  I bet if we put such
calls around every read() and write(), it would cause a BIG slowdown for
workloads that don't fit in shared_buffers.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
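
As a minimal standalone sketch of the idea being debated (wrap each read()
in gettimeofday() calls and bucket the observed latency into rough cache
tiers), one might write something like the code below. The names
timed_read() and classify_latency() and the microsecond thresholds are
illustrative assumptions, not anything in PostgreSQL; the two
gettimeofday() calls per read are exactly the per-call overhead Robert is
worried about, and on an SSD or ramdisk the three clusters would largely
collapse, which is his other objection.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

/* Rough tiers a single read() latency might fall into. */
typedef enum { HIT_OS_CACHE, HIT_DRIVE_CACHE, HIT_PLATTER } read_tier;

/* Thresholds are illustrative guesses, not measured constants. */
static read_tier
classify_latency(double usec)
{
    if (usec < 100.0)           /* microseconds: likely OS page cache */
        return HIT_OS_CACHE;
    if (usec < 2000.0)          /* a bit slower: possibly drive cache */
        return HIT_DRIVE_CACHE;
    return HIT_PLATTER;         /* long tail: actual platter access */
}

/* read() bracketed by gettimeofday() -- the per-read timing cost at issue. */
static ssize_t
timed_read(int fd, void *buf, size_t len, read_tier *tier)
{
    struct timeval start, end;
    ssize_t nread;
    double usec;

    gettimeofday(&start, NULL);
    nread = read(fd, buf, len);
    gettimeofday(&end, NULL);

    usec = (end.tv_sec - start.tv_sec) * 1000000.0
         + (end.tv_usec - start.tv_usec);
    *tier = classify_latency(usec);
    return nread;
}

int
main(int argc, char **argv)
{
    char buf[8192];
    read_tier tier;
    int fd;

    if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
    {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    while (timed_read(fd, buf, sizeof(buf), &tier) > 0)
        printf("tier=%d\n", (int) tier);
    close(fd);
    return 0;
}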