Re: how to estimate shared_buffers...

On Sat, 12 Jul 2008, Jessica Richard wrote:

On a running production machine, we have 900M of shared_buffers configured on a Linux host with 16G of memory. The size of all databases combined is about 50G, and there are many transactions going on all the time (deletes, inserts, updates). We do not have a testing environment with the same setup and the same workload, so I want to evaluate on the production host whether this 900M is enough. If not, we still have room to go up a little to speed up all Postgres activities. I don't know enough about the SA side. I would just imagine that if something like the "top" command or another tool could measure how much total memory Postgres is actually using (against the configured 900M of shared buffers), and if Postgres were using almost all 900M all the time, I would take that as an indication that shared_buffers could go up by another 100M...

What is the best way to tell how much memory Postgres (all Postgres-related things) is actually using?

There is a contrib module, pg_buffercache, which can tell you about usage of shared buffers; a sample query is sketched below. You can also estimate how much of the OS cache is occupied by Postgres files (tables, indexes). See http://www.kennygorman.com/wordpress/?p=246 for some details.
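As a minimal sketch (assuming the pg_buffercache module has already been installed in the database; on 8.x releases that means running the SQL script shipped in contrib/pg_buffercache):

    -- How many of the configured shared buffers hold a page at all?
    -- Unused buffers have a NULL relfilenode.
    SELECT count(relfilenode) AS buffers_in_use,
           count(*)           AS buffers_total
    FROM pg_buffercache;

    -- Which relations in the current database occupy the most buffers?
    -- With the default 8kB block size, one buffer = 8192 bytes.
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = c.relfilenode
    WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

If buffers_in_use sits at buffers_total around the clock, the cache is full and raising shared_buffers may help; if a large fraction stays unused, the current setting is already generous.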
I wrote a Perl script which simplifies estimating the OS buffer usage, but it's not yet ready for public release; a rough sketch of the underlying file-mapping idea follows.
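The idea from that post can be sketched in plain SQL (assumptions: default tablespace, default $PGDATA layout, and a superuser connection, since reading data_directory is restricted). The resulting paths can then be fed to an OS-level tool such as fincore to count how many of each file's pages sit in the page cache:

    -- Hypothetical helper: the largest relations in the current database
    -- together with their main data file. Relations over 1GB are split
    -- into extra segment files (.1, .2, ...), which are not listed here.
    SELECT c.relname,
           current_setting('data_directory')
             || '/base/' || d.oid || '/' || c.relfilenode AS file_path,
           pg_relation_size(c.oid) AS bytes
    FROM pg_class c, pg_database d
    WHERE d.datname = current_database()
      AND c.relkind IN ('r', 'i')
    ORDER BY pg_relation_size(c.oid) DESC
    LIMIT 10;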


	Regards,
		Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@xxxxxxxxxx, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

