RE: Multipath I/O stats


-----Original Message-----
From: redhat-list-bounces@xxxxxxxxxx
[mailto:redhat-list-bounces@xxxxxxxxxx] On Behalf Of Yong Huang
Sent: Monday, May 24, 2010 1:57 PM
To: redhat-list@xxxxxxxxxx
Subject: Re: Multipath I/O stats

> Not necessarily. If you advertise one 10-disk LUN from the SAN, 
> the OS will see it as one disk, and multipath can make multiple 
> paths to the same "disk". That's 10 spindles in one disk, which, 
> if they're fast or SSD, will saturate a fibre link.

OK. I agree. Now a slightly different issue. Currently, multipath load 
balancing uses only one path at any given moment, selected in 
round-robin fashion. Unless multiple paths are allowed to read 
simultaneously, the single path becomes the bottleneck whenever the 
"disk" is faster than the link. That makes "load balance" meaningless.
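For reference, path selection in dm-multipath is configured in /etc/multipath.conf. A sketch (the option names are standard device-mapper-multipath settings; the values shown are illustrative, not a recommendation) that puts every path to a LUN in one round-robin group:

```
defaults {
        # group all paths to a LUN into a single priority group,
        # so round-robin rotates I/O across all of them
        path_grouping_policy    multibus
        # switch to the next path after this many I/Os
        # (illustrative value)
        rr_min_io               100
        path_selector           "round-robin 0"
}
```

Note that even with multibus grouping, the selector still dispatches to one path at a time; rr_min_io only controls how often it rotates.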

If the "disk" is slower, even a future implementation of simultaneous 
reads over multiple paths won't help in the load-balancing sense, 
because the bottleneck is the "disk" itself.
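The two cases above can be captured in a toy throughput model (my own illustration, not dm-multipath code): the achievable rate is the minimum of what the "disk" can deliver and what the active link(s) can carry.

```python
# Toy model: achievable read throughput under multipath.
# If only one path may be active at a time (serialized round-robin),
# the link term is a single path's bandwidth; with true parallel I/O
# it is the sum over all paths. All figures are hypothetical MB/s.

def throughput(disk_mb_s, path_mb_s, n_paths, parallel):
    """Return the bottlenecked throughput in MB/s.

    disk_mb_s -- what the backing "disk" (LUN) can deliver
    path_mb_s -- bandwidth of a single path (e.g. one FC link)
    n_paths   -- number of paths multipath knows about
    parallel  -- True if paths may carry I/O simultaneously
    """
    link = path_mb_s * n_paths if parallel else path_mb_s
    return min(disk_mb_s, link)

# Fast LUN (say 1000 MB/s) behind two 400 MB/s paths:
print(throughput(1000, 400, 2, parallel=False))  # 400: capped by one link
print(throughput(1000, 400, 2, parallel=True))   # 800: both links used

# Slow LUN (300 MB/s): parallel paths don't help, the disk is the cap.
print(throughput(300, 400, 2, parallel=True))    # 300
```

This is only an idealized sketch; it ignores HBA queue depths, rr_min_io granularity, and controller-side caching, but it shows why serialized round-robin caps throughput at one link regardless of path count.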

Yong Huang
==========

	So are you saying that multipath does not allow multiple
outstanding I/O requests, even if the underlying FC driver does?

	What happens if a large I/O request is made that spans
two LUNs?

	Is one I/O request sent to one LUN and, when it completes,
another sent to the other LUN?

-----
Jack Allen

-- 
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list