Re: performance considerations with IO schedulers and DM multipathing

2007/11/30, Romanowski, John (OFT) <John.Romanowski@xxxxxxxxxxxxxxx>:
Here are some links:

Google cache of this article:
"The basis for my test was to determine the best possible performance combination of elevator tuning AND /etc/multipath.conf rr_min_io setting."
http://72.14.209.104/search?q=cache:q2p5HOwGxHwJ:www.techyblog.com/content/view/45/28/+Multipath+rr_min_io+Oracle+Elevator+Benchmarks&hl=en&ct=clnk&cd=1&gl=us

The thing is that these are two settings which affect different drivers. The I/O scheduler setting affects the disks that make up the multipath volume (and only them), while rr_min_io affects the multipath volume itself.
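To see which sd devices actually make up a multipath map (and therefore which request queues have a scheduler worth tuning), "multipath -ll" prints the topology. A rough sketch of what that looks like; the WWID, size and device names below are placeholders, and the exact layout differs between multipath-tools versions:

    # multipath -ll
    mpath0 (36005076801234567890123456789abcd) dm-0
    [size=100 GB][features="0"][hwhandler="0"]
    \_ round-robin 0 [active]
     \_ 2:0:0:0 sda 8:0   [active][ready]
     \_ 3:0:0:0 sdb 8:16  [active][ready]

The schedulers to look at are the ones on sda and sdb here, not on dm-0.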
The higher the value of rr_min_io, the more requests are sent down one path before switching to the next path in the same path group. While this is good for sequential I/O (the elevator/scheduler on the underlying device can merge requests more efficiently), it reduces the amount of I/O sent in parallel. With very high rr_min_io settings you end up using mostly one path at a time while the others sit idle.
With small values of rr_min_io, the chances of spreading the requests over all paths are higher, but so is the chance of splitting a long sequential stream into smaller pieces that no longer look sequential to the disk devices behind the paths. Here a scheduler setting that copes well with that pattern can help.
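For reference (a sketch only; the values are placeholders, not recommendations), rr_min_io lives in /etc/multipath.conf and can be set globally in the defaults section:

    defaults {
            path_grouping_policy    multibus
            rr_min_io               100
    }

The maps have to be reloaded (e.g. "multipathd -k" and then the "reconfigure" command) before a new value takes effect.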
Another approach, which is not in the mainline kernel yet, is to introduce a queue in the multipath target, merge sequential requests there, and send each I/O down a different path (as rr_min_io=1 would do). Kiyoshi Ueda from NEC gave a presentation about this at last year's OLS ( https://ols2006.108.redhat.com/2007/Reprints/ueda-Reprint.pdf ). According to their evaluation of the current kernel, smaller rr_min_io values improved performance, but the best value was different for reads and writes.

Stefan

"The short summary of our study indicates that there is no SINGLE answer to which I/O scheduler is best."
http://www.redhat.com/magazine/008jun05/features/schedulers/

Oracle and Linux I/O scheduler
part 1- http://www.nextre.it/oracledocs/ioscheduler_01.html
part 2- http://www.nextre.it/oracledocs/ioscheduler_02.html
part 3- http://www.nextre.it/oracledocs/ioscheduler_03.html



-----Original Message-----

From: dm-devel-bounces@xxxxxxxxxx on behalf of Stefan Bader
Sent: Tue 11/27/2007 4:42 PM
To: device-mapper development
Subject: Re: performance considerations with IO schedulers and DM multipathing

Hi Paul,

the device-mapper target itself does not use a queue, so no scheduler runs at
that level. Only the real devices that are used as paths have a scheduler.
Whether there might be a performance gain from changing those, I do not know.
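That said, the scheduler on each path device can be inspected and switched at
runtime through sysfs, so experimenting is cheap (sda stands in for one of
your path devices):

    # cat /sys/block/sda/queue/scheduler
    noop anticipatory deadline [cfq]
    # echo deadline > /sys/block/sda/queue/scheduler

The bracketed name is the scheduler currently in use.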

Stefan

2007/11/27, Paul Cote <paul.cote@xxxxxxxxxxxxx>:
>
>  Hi,
>
> Is there any advantage to one specific IO scheduler (below) that may
> improve IO performance / throughput when running with a round-robin failover
> policy? Has anyone done testing with this ... and is willing to share results?
>
> thanks,
> Paul
>
> Completely Fair Queuing - elevator=cfq (default)
> Deadline                - elevator=deadline
> NOOP                    - elevator=noop
> Anticipatory            - elevator=as
>
>
>
> --
> dm-devel mailing list
> dm-devel@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/dm-devel
>


--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
