James Pearson wrote:
> John Doe wrote:
>> Hi,
>>
>> We have a storage server (HP DL360G5 + MSA20 (12 disks in RAID 6) on a
>> SmartArray6400). 10 directories are exported through NFS to 10 clients
>> (rsize=32768,wsize=32768,soft,intr,nosuid,proto=udp,vers=3).
>> The server is apparently not doing much but... we have very high waiting IOs.
>
> Probably not connected, but personally I would use 'hard' and
> 'proto=tcp' instead of 'soft' and 'proto=udp' on the clients

I'd usually blame disk seek time first. If your RAID level requires several
drives to move their heads together, and/or the data layout lands on the same
drive set, consider what happens when your 10 users all want the disk head(s)
to be in different places. Disk drives allow random access, but they really
aren't that good at it if they have to spend most of their time seeking.

RAID 6 is particularly bad at write performance, so a different RAID level
might help - or, if you know the data access pattern, you might split the
drives into separate volumes that don't affect each other and arrange the
data accordingly.

And mounting with the async option might give much better performance - I'm
not sure what the default is these days.

--
Les Mikesell
lesmikesell@xxxxxxxxx

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
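To make the suggestions above concrete, here is a sketch of what the changed
client mount and the server-side async export might look like. The hostname,
export path, and mount point are hypothetical examples, and for NFS the async
option is set on the server export rather than the client mount; check nfs(5)
and exports(5) on your distribution before using either:

```
# /etc/fstab on each client - 'hard' and 'proto=tcp' in place of
# 'soft' and 'proto=udp' (hostname and paths are made-up examples)
fileserver:/export/data  /mnt/data  nfs  rsize=32768,wsize=32768,hard,intr,nosuid,proto=tcp,vers=3  0 0

# /etc/exports on the server - 'async' lets the server acknowledge writes
# before they hit disk, trading crash safety for write performance
/export/data  *(rw,async)
```

Note that 'async' means a server crash can silently lose acknowledged writes,
so it is a deliberate trade-off, not a free speedup.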