I am using a two-node cluster as the front end to our iSCSI SAN. The
cluster serves NFS file systems to the network. I'm seeing load averages
of 7 or 8 at times on the active cluster node, which brings performance
to a crawl. Here is a sample of top on the active node:
top - 10:46:21 up 1 day, 2:48, 1 user, load average: 8.32, 8.22, 7.27
Tasks: 439 total, 1 running, 438 sleeping, 0 stopped, 0 zombie
Cpu0 : 5.0%us, 25.2%sy, 0.0%ni, 19.8%id, 48.7%wa, 0.0%hi, 1.3%si, 0.0%st
Cpu1 : 3.6%us, 29.5%sy, 0.0%ni, 1.7%id, 50.7%wa, 3.3%hi, 11.3%si, 0.0%st
Mem: 3631900k total, 3512904k used, 118996k free, 1008k buffers
Swap: 2031608k total, 136k used, 2031472k free, 2674504k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3944 root 15 0 0 0 0 D 5 0.0 0:36.56 nfsd
3937 root 15 0 0 0 0 D 4 0.0 0:36.78 nfsd
3940 root 15 0 0 0 0 D 4 0.0 0:36.99 nfsd
3941 root 15 0 0 0 0 D 4 0.0 0:36.59 nfsd
3942 root 15 0 0 0 0 D 4 0.0 0:37.30 nfsd
3938 root 15 0 0 0 0 D 3 0.0 0:36.55 nfsd
3943 root 15 0 0 0 0 D 3 0.0 0:36.25 nfsd
3939 root 15 0 0 0 0 D 3 0.0 0:36.23 nfsd
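All eight nfsd threads are sitting in D (uninterruptible sleep), which is what inflates the load average even with almost no CPU use. Here's the quick check I run to count them (a rough sketch using standard procps `ps`; nothing cluster-specific assumed):

```shell
# Count processes in uninterruptible sleep (state D); these count toward
# the load average even though they consume no CPU time.
dcount=$(ps -eo stat= | awk '$1 ~ /^D/ { n++ } END { print n+0 }')
echo "$dcount processes in D state"
```

When that number tracks the number of nfsd threads, they are all blocked waiting on storage I/O rather than actually busy.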
The SAN doesn't appear to be straining at all; the cluster nodes seem
to be the bottleneck. The iowait will cross 70% at times, which leads me
to believe I need to "open the pipe" between the nodes and the storage.
Traffic on my switches looks good. Each node is running CentOS 5, with
dual 3GHz procs and 4GB RAM. I would appreciate any advice that could
possibly help me distribute/alleviate the load. I will gladly
provide more info on our configuration as needed.
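For reference, this is how I double-check top's %wa figure, by sampling the aggregate "cpu" line in /proc/stat twice (field 6 is the iowait jiffies on 2.6 kernels); a quick sketch, not a polished tool:

```shell
# Sample /proc/stat twice, one second apart, and report the system-wide
# iowait percentage over that interval.
read_cpu() {
    # Print total jiffies and iowait jiffies from the aggregate "cpu" line
    awk '/^cpu / { total=0; for (i=2; i<=NF; i++) total+=$i; print total, $6 }' /proc/stat
}
set -- $(read_cpu)
t1=$1; w1=$2
sleep 1
set -- $(read_cpu)
t2=$1; w2=$2
dt=$((t2 - t1)); dw=$((w2 - w1))
[ "$dt" -gt 0 ] || dt=1   # guard against a zero-length interval
pct=$((100 * dw / dt))
echo "iowait over 1s: ${pct}%"
```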
Thanks in advance!
Randy
begin:vcard
fn:Randy Brown
n:Brown;Randy
org:National Weather Service;Office of Hydrologic Development
adr:;;1325 East West Highway;Silver Spring;MD;20910;USA
email;internet:randy.brown@xxxxxxxx
title:Senior Systems Administrator
tel;work:301-713-1669 x110
url:http://www.nws.noaa.gov/ohd/
version:2.1
end:vcard
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster