2010/5/26 Cédric Villemain <cedric.villemain.debian@xxxxxxxxx>:
> At the moment a block is requested for the first time (usually 8 kB
> from postgres, so in fact 2 blocks in the OS), you have 'duplicate'
> buffers. But, depending on your workload, it is not so bad, because
> those 2 blocks should not be requested again for some time (being in
> postgresql shared buffers) and should be evicted by the OS in favor
> of newly requested blocks.

Since shared_buffers is 4-8 times smaller than the OS cache, such
duplication should be extremely rare here. And when PG reads a new
block, it also needs to evict something from shared_buffers.

> You can try the pgfincore extension to grab stats from the OS cache,
> and/or patch postgresql if you want real stats ;)

Thank you! That seems to be the tool I was looking for. It could help
me locate and troubleshoot the hogs in the page cache.

I also find the snapshot/restore function promising. Every morning our
cache is either cold or filled with irrelevant data left over by the
nightly batch jobs, which severely impacts performance. This tool
seems to be exactly what we need. How does it work? How stable is it?
Can we use it in production on a daily basis?

> pg_buffercache is provided with postgresql and delivers very useful
> information:
> http://www.postgresql.org/docs/8.4/interactive/pgbuffercache.html

Thank you, I am already using it and have found a few hogs with it.

--
Konrad Garus

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance