It's HW RAID we are using. These are Dell C6100 servers with a RAID
controller. We expect around 100MB/sec. When I run "dd" I get 20MB/sec,
and since I have 6 servers I expected at least 3 x 20MB/sec, given the
replica count of 2. We are in a subnet inside a lab with only us doing
the testing, so the network is not an issue for sure.

The file size distribution is as follows:

bytes       %
130000      19.762%
70000       30.101%
100000      20.165%
230000      20.016%
1100000     0.447%
430000      5%
2000000     0.039%
630000      4.47%

On Tue, Apr 19, 2011 at 7:07 PM, Joe Landman
<landman at scalableinformatics.com> wrote:
> On 04/19/2011 08:52 PM, Mohit Anchlia wrote:
>>
>> I am getting miserable performance in a LAN setup with 6 servers,
>> distributed with 2 replicas, over 1GigE. I am using native gluster
>> clients (mount -t glusterfs server1:/vol /mnt). I am only able to get
>> 20MB/s. This is some of the output from sar:
>>
>> Each server has 4 10K SAS drives in RAID0. This is a new setup and I
>> expected to get much higher performance. Can someone please help
>> with recommendations?
>
> It seems like I just handled a case like this a few months ago ...
>
> What does your IO workload actually look like? Much more interested
> in iostat-like output (though dstat also works very well for
> bandwidth-heavy loads).
>
> Gluster isn't going to do well with small IO operations without
> serious caching (NFS client). Despite the fact that these are RAID0
> across 4x 10kRPM SAS drives, this is *not* a high performance IO
> system in most senses of the definition. The design of the storage
> should be driven by the application and anticipated workloads.
>
> More to the point, what specifically are your goals in terms of
> throughput/bandwidth ... what will your storage loads look like?
>
> What RAID cards are you using (if any)? If software RAID, could you
> report the output of
>
>        mdadm --detail /dev/MD
>
> where /dev/MD is your MD raid device. Which SAS 10k disks are you
> using? How are they connected to the machine if not through a RAID
> card? Is the RAID0 a hardware or software RAID?
>
>> !sar
>> sar -B 1 100
>>
>> 05:44:38 PM  pgpgin/s pgpgout/s   fault/s  majflt/s
>> 05:44:39 PM      0.00      0.00   1413.00      0.00
>> 05:44:40 PM      0.00  29896.00     29.00      0.00
>> 05:44:41 PM      0.00   4510.89   1523.76      0.00
>> 05:44:42 PM      0.00     16.16     20.20      0.00
>> 05:44:43 PM      0.00     12.00     16.00      0.00
>> 05:44:44 PM      0.00    102.97     15.84      0.00
>> 05:44:45 PM      0.00  21100.00     14.00      0.00
>> 05:44:46 PM      0.00   8092.00     19.00      0.00
>
> sar isn't too useful for figuring out what's going on in the IO
> channel. iostat is much better. dstat, atop, and vmstat are all
> specifically good tools. If you want too much data (e.g. it gathers
> everything of value), use collectl with a 1 second interval and the
> right options.
>
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics, Inc.
> email: landman at scalableinformatics.com
> web  : http://scalableinformatics.com
>        http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
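The iostat-style view Joe asks for can be collected with something
along these lines (a minimal sketch, assuming the sysstat and dstat
packages are installed on each gluster server; the 1-second interval
and sample count are only illustrative):

    # extended per-device statistics, reported in MB/s, one sample per
    # second for 60 samples
    iostat -xm 1 60

    # combined cpu/disk/network view at 1-second intervals; handy for
    # bandwidth-heavy loads
    dstat -cdn 1

Running these on the servers while the dd test is in flight makes it
easier to see whether the disks or the 1GigE links are the limiting
factor.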