Re: [PATCH 1/1] dm-ioband: I/O bandwidth controller

On Tue, May 19, 2009 at 05:43:52PM +0900, Ryo Tsuruta wrote:
> Hi Vivek,
> 
> From: Ryo Tsuruta <ryov@xxxxxxxxxxxxx>
> Subject: [PATCH 1/1] dm-ioband: I/O bandwidth controller
> Date: Tue, 19 May 2009 17:39:28 +0900 (JST)
> 
> > Hi Alasdair and all,
> > 
> > This is the dm-ioband version 1.11.0 release. This patch can be
> > applied cleanly to current agk's tree. Alasdair, please give some
> > comments and suggestions.
> > 
> > Changes from the previous release:
> > - Classify IOs as sync/async instead of read/write, since the IO
> >   request allocation/congestion logic was changed to be sync/async
> >   based.
> > - IOs belonging to the real-time class are dispatched in preference
> >   to other IOs, regardless of the assigned bandwidth.
> 
> I ran your script from the following URL to see whether IOs belonging
> to the real-time class take precedence.
> http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-04/msg08355.html
> 
>     Script
>     ======
>     # /dev/mapper/ioband1 is mounted on /mnt1
>     rm /mnt1/aggressivewriter
>     sync
>     echo 3 > /proc/sys/vm/drop_caches
>     # launch an hostile writer
>     ionice -c2 -n7 dd if=/dev/zero of=/mnt1/aggressivewriter \
>        bs=4K count=524288 conv=fdatasync &
>     # Reader
>     ionice -c1 -n0 dd if=/mnt1/testzerofile1 of=/dev/null &
>     wait $!
>     echo "reader finished"
> 
>     old dm-ioband
>     =============
>     First run
>     2147483648 bytes (2.1 GB) copied, 100.343 seconds, 21.4 MB/s (Reader)
>     reader finished
>     2147483648 bytes (2.1 GB) copied, 101.107 seconds, 21.2 MB/s (Writer)
> 
>     new dm-ioband v1.11.0
>     =====================
>     First run
>     2147483648 bytes (2.1 GB) copied, 35.0623 seconds, 61.2 MB/s (Reader)
>     reader finished
>     2147483648 bytes (2.1 GB) copied, 87.6979 seconds, 24.5 MB/s (Writer)
> 
> The RT reader took precedence over the aggressive writer, regardless
> of the assigned bandwidth. However, I think that some sort of limit
> on RT IOs is needed. What do you think?

I don't think we need any limitation for RT tasks. If somebody needs to
limit an RT task, it should be put in a separate group.
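To make the "separate group" suggestion concrete, here is a rough sketch
using dm-ioband's dmsetup message interface as described in its
documentation; the device name, uid, and weights below are purely
illustrative assumptions, not taken from your setup:

```shell
# Sketch (illustrative values): put RT tasks in their own ioband group
# so a weight cap applies to them instead of a global RT limit.

# Create an ioband device over /dev/sda1; the default group gets weight 40.
echo "0 $(blockdev --getsize /dev/sda1) ioband /dev/sda1 1 0 0 none" \
     "weight 0 :40" | dmsetup create ioband1

# Classify bios by uid, and give uid 1000 (which runs the RT tasks)
# its own group with a small weight.
dmsetup message ioband1 0 type user
dmsetup message ioband1 0 attach 1000
dmsetup message ioband1 0 weight 1000:10
```

With a setup along these lines, the RT tasks would still jump the queue
inside their own group, but the group's weight bounds how much bandwidth
they can take from everyone else.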

It looks like you have started duplicating CFQ infrastructure in
dm-ioband to make sure CFQ is not broken. That will lead to interesting
interactions with noop and the other IO schedulers, and at the same
time, whenever CFQ changes behavior, you will find it hard to keep up
with it.

Anyway, how are you taking care of priorities within the same class? How
will you make sure that a BE prio 0 request is not hidden behind a BE
prio 7 request? The same holds for priorities within the RT class.
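The intra-class question could be checked with a variant of the quoted
script: two best-effort readers that differ only in priority. The mount
point and file names below just follow the earlier example and are
assumptions about the test environment:

```shell
# Sketch of an intra-class test, modeled on the quoted script:
# two BE readers differing only in priority. If the prio 0 reader does
# not finish well ahead of the prio 7 one, priorities within the class
# are not being honored by dm-ioband.
sync
echo 3 > /proc/sys/vm/drop_caches
ionice -c2 -n7 dd if=/mnt1/testzerofile2 of=/dev/null &
ionice -c2 -n0 dd if=/mnt1/testzerofile1 of=/dev/null &
wait
echo "both readers finished"
```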

Thanks
Vivek
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/virtualization
