Hi All;

I've installed pg_buffercache and I want to use it to help define the optimal shared_buffers size. Currently I run this every 15 minutes via cron:

insert into buffercache_stats
select now(), isdirty, count(*) as buffers, (count(*) * 8192) as memory
from pg_buffercache
group by 1, 2;

and here's its explain plan:

explain insert into buffercache_stats
select now(), isdirty, count(*) as buffers, (count(*) * 8192) as memory
from pg_buffercache
group by 1, 2;

                                         QUERY PLAN
-------------------------------------------------------------------------------------------
 Subquery Scan "*SELECT*"  (cost=65.00..65.23 rows=2 width=25)
   ->  HashAggregate  (cost=65.00..65.12 rows=2 width=1)
         ->  Function Scan on pg_buffercache_pages p  (cost=0.00..55.00 rows=1000 width=1)
(3 rows)

Then once a day I pull a report from the buffercache_stats table. The buffercache_stats table is our own creation:

\d buffercache_stats
        Table "public.buffercache_stats"
     Column     |            Type             | Modifiers
----------------+-----------------------------+-----------
 snap_timestamp | timestamp without time zone |
 isdirty        | boolean                     |
 buffers        | integer                     |
 memory         | integer                     |

Here's my issue: the server that we'll eventually roll this out to is extremely busy, and the every-15-minutes query above has the potential to have a significant impact on performance. Does anyone have any suggestions for a better approach, or maybe a way to improve the performance of the above query?

Thanks in advance...
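P.S. In case it helps anyone reproduce the setup, here's the whole thing as a self-contained sketch: the table DDL reconstructed from the \d output above, the snapshot query, and the crontab line that drives it. The user, database, and psql options are placeholders for our real ones. Two deviations from what's shown above, both flagged in comments: memory is widened to bigint (an integer column overflows once a single group covers 2 GB of buffers, since 262144 * 8192 exceeds the integer range), and the 8192 literal is replaced with current_setting('block_size') so it stays correct on builds compiled with a non-default block size.

-- Table DDL, reconstructed from the \d output above.
-- NOTE: memory is bigint here rather than the integer shown above;
-- with integer, the insert raises "integer out of range" once a
-- group covers >= 2 GB of buffers (262144 * 8192 > 2^31 - 1).
create table buffercache_stats (
    snap_timestamp  timestamp without time zone,
    isdirty         boolean,
    buffers         integer,
    memory          bigint
);

-- The 15-minute snapshot: one row per isdirty value, counting
-- cached buffers and converting to bytes via the server block size.
insert into buffercache_stats
select now(),
       isdirty,
       count(*) as buffers,
       count(*) * current_setting('block_size')::bigint as memory
from pg_buffercache
group by 1, 2;

# crontab entry (user/database/options are placeholders):
*/15 * * * * psql -U postgres -d mydb -c "insert into buffercache_stats select now(), isdirty, count(*), count(*) * current_setting('block_size')::bigint from pg_buffercache group by 1,2"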