Hi,

On Wed, 2011-11-16 at 11:42 -0500, Michael Bubb wrote:
> Hello -
>
> We are experiencing extreme I/O slowness on a gfs2 volume on a SAN.
>
> We have:
>
> Netezza TF24
> IBM V7000 SAN
> IBM Bladecenter with 3 HS22 blades
> Stand-alone HP DL380 G7 server
>
> The 3 blades and the HP DL380 are clustered using RHEL 6.1 and
> clustersuite 5.5.
>
You should ask the Red Hat support team about this, as they should be
able to help.

> We have 2 clustered volumes on different storage pools (one has 10k
> drives, the other 7.2k).
>
> We have an internal test that reads a large file (950G) using fopen and
> memmap. On a standalone server in a datacenter (Ubuntu, RAID 5, 10k
> disks) the test takes approximately 75 seconds to run.
>
> On the blades the test takes 300 - 350 seconds.
>
> I have been looking at the cluster conf and any gfs2 tuning I can find.
> I am not really sure what I should post here?
>
> yrs
>
> Michael
>
So it is a streaming data test. Are you running it on all three nodes at
the same time, or just on one, when you get the 300 second times?

Did you mount with noatime,nodiratime set?

Are the drives you are using effectively just a straight linear LVM
volume from a JBOD?

Steve.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
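
For anyone who wants to reproduce this kind of comparison, a minimal sketch
of a sequential fopen()/mmap() read test along the lines Michael describes
could look like the one below. The actual internal test isn't posted in this
thread, so the file path, the one-byte-per-page access pattern and the
madvise() hint are assumptions rather than his code; run it under time(1)
against the 950G file on each setup to compare wall-clock times.

#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* Hypothetical path -- the real test file isn't named in the thread. */
    const char *path = argc > 1 ? argv[1] : "/mnt/gfs2/testfile";

    FILE *fp = fopen(path, "r");
    if (!fp) {
        perror("fopen");
        return 1;
    }
    int fd = fileno(fp);

    struct stat st;
    if (fstat(fd, &st) != 0) {
        perror("fstat");
        return 1;
    }

    char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Tell the kernel we intend to stream through the mapping. */
    madvise(map, st.st_size, MADV_SEQUENTIAL);

    /* Touch one byte per page so every page is actually faulted in;
     * the running sum keeps the compiler from optimising the loop away. */
    long page = sysconf(_SC_PAGESIZE);
    unsigned long sum = 0;
    for (off_t off = 0; off < st.st_size; off += page)
        sum += (unsigned char)map[off];

    printf("read %lld bytes, checksum %lu\n", (long long)st.st_size, sum);

    munmap(map, st.st_size);
    fclose(fp);
    return 0;
}

If the GFS2 numbers are being gathered this way, it is also worth confirming
the noatime,nodiratime mount options Steve asks about, since atime updates
turn an otherwise read-only streaming test into one that also writes inode
metadata on every access.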