Re: Monitoring buffercache...

On Mon, 24 Nov 2008, Kevin Kempter wrote:

> Currently I run this each 15min via cron:
> insert into buffercache_stats select now(), isdirty, count(*) as buffers,
> (count(*) * 8192) as memory from pg_buffercache group by 1,2;

This query isn't going to save the information you need to figure out whether shared_buffers is working effectively for you. For that you'll need the usage count information (if you're on 8.3) and some notion of which tables it's caching large amounts of data from. What's going to happen with the above is that you'll watch shared_buffers grow to fill whatever size you've allocated it, and then the only useful information you'll be saving is what percentage of that happens to be dirty. If it happens that the working set of everything you touch is smaller than shared_buffers, you'll find that out, but you don't need this query for that--just look at the amount of shared memory the postgres processes are using with ipcs or top and see where it peaks.
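
Here's one way to capture that kind of snapshot on 8.3--just a sketch to show the shape of it, not the exact queries from my presentation. The join to pg_class is my addition, it only sees relations in the current database (plus shared catalogs, reldatabase=0), and current_setting('block_size') avoids hard-coding 8192:

    -- Top cached relations, broken down by dirty flag and usage count
    SELECT c.relname,
           b.isdirty,
           b.usagecount,
           count(*) AS buffers,
           count(*) * current_setting('block_size')::integer AS bytes
      FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = c.relfilenode
     WHERE b.reldatabase IN
           (0, (SELECT oid FROM pg_database
                 WHERE datname = current_database()))
     GROUP BY c.relname, b.isdirty, b.usagecount
     ORDER BY buffers DESC
     LIMIT 20;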

I've got some queries that I find more useful, along with a general suggested methodology for figuring out whether you've sized the buffers correctly, in my "Inside the PostgreSQL Buffer Cache" presentation at http://www.westnet.com/~gsmith/content/postgresql

> Does anyone have any suggestions per a better approach or maybe a way to
> improve the performance for the above query ?

It's possible to obtain this data in a rather messy but faster way by not taking all those locks. Someone even submitted a patch to do just that: http://archives.postgresql.org/pgsql-general/2008-02/msg00453.php

I wouldn't recommend doing that, but it's really the only way to make this run faster.

It's nice to grab a snapshot of the buffer cache every now and then just to see what tends to accumulate high usage counts and such. I predict that trying to collect it all the time will leave you overwhelmed with data that's hard to analyze and act on. A snapshot per hour, spread across one normal day each week, should be more than enough data to figure out how your system is behaving. If you want something worth saving every 15 minutes, save a snapshot of the data in pg_statio_user_tables instead; a sketch of that follows.
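
A minimal version of that, assuming you create the history table yourself (statio_snapshots is just a name I made up here):

    -- One-time setup: clone the view's columns plus a timestamp
    CREATE TABLE statio_snapshots AS
        SELECT now() AS snapshot_time, *
          FROM pg_statio_user_tables LIMIT 0;

    -- Then every 15 minutes from cron:
    INSERT INTO statio_snapshots
        SELECT now(), * FROM pg_statio_user_tables;

Those counters are cumulative since the last stats reset, so to see activity per interval you compare successive snapshots rather than reading any single row.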

--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD

