On Thu, Dec 22, 2011 at 07:54:46PM -0500, John A. Sullivan III wrote:
> On Wed, 2011-10-05 at 15:54 -0400, Adam Chasen wrote:
> > John,
> > I am limited in a similar fashion. I would much prefer to use multibus
> > multipath, but was unable to achieve bandwidth which would exceed a
> > single link even though it was spread over the 4 available links. Were
> > you able to gain even similar performance to the RAID0 setup with
> > the multibus multipath?
> >
> > Thanks,
> > Adam
> <snip>
> We just ran a quick benchmark before optimizing. Using multibus rather
> than RAID0 with four GbE NICs, and testing with a simple
> cat /dev/zero > zeros, we hit 3.664 Gbps!
>
> This is still on CentOS 5.4, so we are not able to play with
> rr_min_io_rq. We have not yet activated jumbo frames. We are also
> thinking of using SFQ as a qdisc instead of the default pfifo_fast. So
> we think we can make it go even faster.
>
> We are delighted to be achieving this with multibus rather than RAID0,
> as it means we can take transactionally consistent snapshots on the SAN.
>
> Many thanks to whoever pointed out that tagged queueing should solve the
> 4 KB block size latency problem. The problem turned out not to be
> latency, as we were told, but simply an under-resourced SAN. We brought
> in new Nexenta SANs with much more RAM and they are flying - John

Hey,

Can you please post your multipath configuration?

Just for reference for future people googling for this :)

-- Pasi

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
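[For future readers of the archive: John's actual configuration was not posted in this message. A minimal /etc/multipath.conf sketch for a multibus round-robin setup of the kind discussed above might look like the following. The WWID and alias are placeholders, and rr_min_io is the tunable available on CentOS 5-era device-mapper-multipath (the request-based rr_min_io_rq mentioned above arrived in later kernels).]

```
defaults {
    user_friendly_names yes
}

multipaths {
    multipath {
        # Placeholder WWID -- substitute the WWID of your iSCSI LUN
        # (see: multipath -ll)
        wwid                 3600144f000000000000000000000cafe
        alias                san0

        # Put all paths in one priority group so I/O is striped
        # across every link, rather than failover-only
        path_grouping_policy multibus
        path_selector        "round-robin 0"

        # Number of I/Os sent down one path before switching;
        # worth tuning per workload
        rr_min_io            100
    }
}
```

After editing, the map can be reloaded with `multipath -r` and inspected with `multipath -ll` to confirm all four paths land in a single active path group.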