Re: dm-multipath has great throughput but we'd like more!

The system bus isn't a limiting factor, is it? 64-bit/133 MHz PCI-X gets about 8.5 Gbit/s, i.e. roughly 1 GByte/s (plenty), but 32-bit/33 MHz PCI tops out at 133 MByte/s (rough numbers below).

Can your disks sustain that much bandwidth? Ten striped drives might do better than 200 MByte/s if laid out well, I suppose.

Don't the switches run at 2 Gbit/s? 2 Gbit/s divided by 10 (8 data bits plus 2 bits of 8b/10b encoding overhead per byte) ~= 200 MByte/s per link.
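
Putting rough numbers on the above (a quick Python sketch, not measurements; the bus clocks and the per-drive figure are assumptions):

# Theoretical ceilings for each piece of the data path.

def mbyte_per_s(bits_per_s, bits_per_byte=8):
    return bits_per_s / bits_per_byte / 1e6

# Host bus: width (bits) x clock (Hz)
pci_32_33   = mbyte_per_s(32 * 33e6)     # ~133 MByte/s  (32-bit/33 MHz PCI)
pcix_64_133 = mbyte_per_s(64 * 133e6)    # ~1064 MByte/s (64-bit/133 MHz PCI-X)

# Fibre Channel: 2 Gbit/s on the wire, but 8b/10b coding means
# 10 line bits per data byte, so divide by 10 rather than 8.
fc_2g_per_hba = mbyte_per_s(2e9, bits_per_byte=10)   # ~200 MByte/s per HBA
both_hbas     = 2 * fc_2g_per_hba                    # ~400 MByte/s aggregate

# Disks: assume ~40 MByte/s sustained per spindle (a pure guess)
ten_spindles = 10 * 40                               # ~400 MByte/s across the stripe

print(pci_32_33, pcix_64_133, fc_2g_per_hba, both_hbas, ten_spindles)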

Could be a bunch of reasons...

 brassow

On May 18, 2006, at 2:05 AM, Bob Gautier wrote:

Yesterday my client was testing multipath load balancing and failover
on a system running ext3 on a logical volume comprising about ten
SAN LUNs, all reached via multipath in multibus mode over two QL2340
HBAs.
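
(For context, "multibus" just means all the paths end up in a single
path group that I/O is spread across. In /etc/multipath.conf that is
roughly the sketch below; older multipath-tools releases spell the key
default_path_grouping_policy, and the WWID and alias shown are made up,
not our real ones:)

defaults {
        path_grouping_policy    multibus
}

multipaths {
        multipath {
                wwid    3600508b400105e210000900000490000    # made-up example WWID
                alias   sanvol01                             # made-up alias
        }
}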

On the one hand, the client is very impressed: running bonnie++
(inspired by Ronan's GFS v VxFS example) we get just over 200 MByte/s
over the two HBAs, and when we pull a link we get about 120 MByte/s.

The throughput and failover response times are better than the client
has ever seen, but we're wondering why we are not seeing higher
throughput per HBA -- the QL2340 datasheet says it should manage
200 MByte/s, and all the switches etc. run at 2 Gbit/s.

Any ideas?

Bob Gautier
+44 7921 700996

--

dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel


