Re: How to monitor resources on Linux.

Hey Tom

Thanks for responding. This issue came up because of a situation yesterday where processes were being killed off by the kernel. I believe my co-worker Geof Myers sent a post yesterday, and the response was to set vm.overcommit_memory=2. Several times throughout the day we see memory usage peak and then drop back down. We run a separate postmaster for each of our divisions, so that if we have a problem with one database it only affects that one. That makes it difficult to tune a system with this many postmasters running. Each database is tuned according to its needs, and we allow anywhere between 5 and 50 max connections per postmaster. So what I am really asking is: exactly what am I looking at with ipcs -m, free, and top?
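For anyone following along, my understanding of the change suggested in that earlier thread is that it would be applied roughly like this (the sysctl name and value are as I understood them from that thread, not something I have verified here):

    # check the current setting
    sysctl vm.overcommit_memory

    # change it for the running kernel
    sysctl -w vm.overcommit_memory=2

    # persist it across reboots
    echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf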

Thanks

Tom Lane wrote:
John R Allgood <jallgood@xxxxxxxxxxxxxxxx> writes:
    I have some questions on memory resources and Linux. We are
currently running a Dell PowerEdge 2950 with dual-core Opterons and 8GB
RAM. Postgres version is 7.4.17 on RHEL4. Could someone explain to me
how best to monitor the memory resources on this platform? Top shows
high memory usage; nearly all of it is being used.

That's meaningless: what you have to look at is the breakdown of *how*
it is being used.  The normal state of affairs is that there is no
"free" memory to speak of, because the kernel will keep around cached
disk pages as long as it can, so as to save a read if they are
referenced again.  You're only in memory trouble when the percentage
used for disk buffers gets real small.
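As a made-up illustration of that normal state, free -m on an 8GB box might look like the following; the "-/+ buffers/cache" line is the figure that reflects memory actually committed to processes, while the big "cached" number is just the kernel's disk cache:

                 total       used       free     shared    buffers     cached
    Mem:          8004       7910         94          0        215       6850
    -/+ buffers/cache:        845       7159
    Swap:         4095          3       4092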

ipcs -m shows the following output. If I am reading it correctly, each
of the postgres entries represents a postmaster along with its number of
connections. If I calculate from the first entry, it comes to around
3.4GB of RAM being used. Is this correct?

That's *completely* wrong.  It's shared memory, so by definition there
is one copy, not one per process.

One thing you have to watch out for is that "top" tends to report some
or all shared memory as part of the address space of each attached
process; so adding up the process sizes shown by top gives a
ridiculously inflated estimate.  However, it's tough to tell exactly how
much is being double-counted :-(.  I tend to look at top's aggregate
numbers, which are pretty real, and ignore the per-process ones.
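A made-up example of the double-counting: two backends attached to the same 128MB shared segment might show up in top like this, each one apparently holding most of that segment in its RES/SHR columns, so adding the two RES values counts the one segment twice:

      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
     4321 postgres  16   0  141m 131m 128m S  0.0  1.6   0:42.10 postmaster
     4322 postgres  16   0  140m 129m 127m S  0.0  1.6   0:37.55 postmaster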

We have started running into memory issues

How do you know that?

Another good tool is to watch "vmstat 1" output.  If you see a lot of
swapin/swapout traffic, then maybe you do indeed need more RAM.
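A made-up sample of what heavy swapping looks like in "vmstat 1"; the si and so columns are swap-in and swap-out traffic, and sustained non-zero values there are the real sign of memory pressure:

    procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
     2  3 524288  10432   2104 128400 1820 2410   950  1200 1450  2300 15 10 20 55
     1  4 526120   9988   2050 127900 2050 1980  1100   980 1500  2450 12  9 18 61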

We have a 2-node cluster running about 10 separate postmasters, divided
evenly between the nodes.

I was wondering why so many postgres-owned shmem segments.  Is it
intentional that you've given them radically different amounts of
memory?  Some of these guys are scraping along with just a minimal
number of buffers ...

key        shmid      owner     perms      bytes      nattch
0x0052ea91 163845     postgres  600        133947392  26
0x00530db9 196614     postgres  600        34529280   24
0x00530201 229383     postgres  600        34529280   21
0x005305e9 262152     postgres  600        4915200    3
0x005311a1 294921     postgres  600        34529280   28
0x0052fe19 327690     postgres  600        4915200    4
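(For scale: summing the bytes column gives 133947392 + 3*34529280 + 2*4915200 = 247365632 bytes, roughly 236MB of shared memory in total across all six segments, nowhere near 3.4GB. And since a shared buffer is 8kB, the two 4915200-byte segments can hold at most 600 buffers each, which is why they look like a minimal configuration.)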

			regards, tom lane

