Re: multipath performance

On Thu, Jan 24, 2008 at 05:59:39PM -0800, malahal@xxxxxxxxxx wrote:
> Andy [genanr@xxxxxxxxxxxx] wrote:
> > I did some basic dd tests to get an idea of the speed of multipath vs
> > individual devices, some of the numbers seem not to make sense.
> > 
> > I ran several tests with dd, four at a time, each reading a different
> > part of the device.
> > 
> > multibus - round robin among paths, rr_min_io=1, 4 paths
> > individual - each dd going to a separate path
> > 
> > dd with iflag=direct
> > 	multibus : 120 MB/s
> > 	individual : 115 MB/s
> > 
> > dd without direct
> > 	multibus : 44 MB/s (why is this so bad)
> > 	individual : 160 MB/s (why is this so good)
> > 
> > Why is multibus without direct the slowest, yet individual devices
> > without direct the fastest?  And why doesn't multibus perform about the
> > same as the individual devices (a 25% performance hit using multibus)?
> 
> I am assuming, based on your iflag, that you are only reading from the
> device. When you use O_DIRECT, you inherently disable read-ahead. Also
> note that the read-ahead benefit becomes insignificant as you increase
> the block size.
> 
> Without direct I/O, you end up with very large I/Os in individual mode.
> The same can't be said of multibus mode: it sends small I/Os (actually
> bios) down different paths, which prevents the system from merging
> adjacent requests.

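[For reference, the quoted four-way dd test can be sketched roughly as follows. This is a hedged reconstruction, not the poster's actual script: a scratch file stands in for the real multipath device (the original read from the device nodes directly), and sizes/offsets are illustrative.]

```shell
#!/bin/sh
# Stand-in for the block device under test; the original used the
# multipath device node and the individual path devices.
DEV=$(mktemp)
dd if=/dev/zero of="$DEV" bs=1M count=64 2>/dev/null

# Four concurrent readers, each covering a different quarter of the
# device, mirroring the "4 at a time reading different parts" setup.
# Adding iflag=direct to each dd reproduces the O_DIRECT variant
# (meaningful only against a real block device).
for i in 0 1 2 3; do
  dd if="$DEV" of=/dev/null bs=1M count=16 skip=$((i * 16)) 2>/dev/null &
done
wait
echo "all readers done"
rm -f "$DEV"
```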
Is there any way (yet) to increase the bio size?  It would be nice if
individual paths could perform large I/O's when needed.

Andy

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
