Re: [kvm-devel] I/O bandwidth control on KVM

Ryo Tsuruta wrote:
>> The attached patch implements AIO support for the virtio backend, so
>> if this is the case, you should see the proper proportions.
>
> First, thank you very much for making the patch. I ran the same test
> program on KVM with the patch, but I wasn't able to get good results.
> I checked the dm-ioband log; the I/Os didn't seem to be issued
> simultaneously.

There's an aio_init() call in block-raw-posix.c that sets the thread count to 1. If you #if 0 out that block, or increase the thread count to something higher (like 16), you should see multiple simultaneous requests. Sorry about that; I had that change in a different patch in my tree.
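
For reference, glibc's POSIX AIO is configured through aio_init() and struct aioinit from <aio.h> (a GNU extension). Below is a minimal sketch of the kind of change described above; it is paraphrased for illustration, not copied from QEMU's block-raw-posix.c:

    /* Raise the POSIX AIO thread count so more than one request can be
     * in flight at a time.  Must be called before the first AIO request;
     * the values below are illustrative. */
    #define _GNU_SOURCE
    #include <aio.h>
    #include <string.h>

    static void posix_aio_setup(void)
    {
        struct aioinit ai;

        memset(&ai, 0, sizeof(ai));
        ai.aio_threads = 16;  /* was 1, which serializes all requests */
        ai.aio_num     = 64;  /* expected number of simultaneous requests */
        aio_init(&ai);
    }

With a single worker thread, each request completes before the next one starts, which would explain the serialized pattern seen in the dm-ioband log.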

> It looked like the next I/O was blocked until the previous one was
> completed.
>
>                  The number of I/Os issued in 60 seconds
>      --------------------------------------------------------------------
>     |             device         |       sda11       |       sda12       |
>     |          weight setting    |        80%        |        20%        |
>     |-----------+----------------+-------------------+-------------------|
>     | KVM AIO   |      I/Os      |       4596        |       4728        |
>     |           | ratio to total |       49.3%       |       50.7%       |
>     |-----------+----------------+-------------------+-------------------|
>     | KVM       |      I/Os      |       5217        |       5623        |
>     |           | ratio to total |       48.1%       |       51.9%       |
>      --------------------------------------------------------------------
>
> Here is another test result, which is very interesting: I/Os were
> issued from a KVM virtual machine and from the host machine
> simultaneously.
>
>             The number of I/Os issued in 60 seconds
>      --------------------------------------------------------
>     | issue from     |  Virtual Machine  |    Host Machine   |
>     |      device    |       sda11       |       sda12       |
>     | weight setting |        80%        |        20%        |
>     |----------------+-------------------+-------------------|
>     |      I/Os      |        191        |       9466        |
>     | ratio to total |        2.0%       |       98.0%       |
>      --------------------------------------------------------
>
> Most of the I/Os that got processed were those issued by the host
> machine. There might be another bottleneck somewhere as well.

The virtio block backend isn't quite optimal right now. I have some patches (currently suffering bitrot) that switch over to linux-aio, which allows zero-copy and proper barrier support (so the guest block device will use an ordered queue). The QEMU AIO infrastructure makes it tough to integrate properly, though.
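
For context, "linux-aio" is the kernel-native AIO interface driven through libaio (io_setup/io_submit/io_getevents), as opposed to the glibc thread-pool implementation above. Here is a minimal sketch of a single O_DIRECT read through that interface; it is not from Anthony's patches, and the device path and sizes are made up (build with -laio):

    /* One kernel-native AIO read; illustrative only, not QEMU code. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        io_context_t ctx = 0;
        struct iocb cb, *cbs[1] = { &cb };
        struct io_event ev;
        void *buf;

        /* O_DIRECT lets the kernel DMA into our buffer: the zero-copy part. */
        int fd = open("/dev/sda11", O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        /* O_DIRECT requires sector-aligned buffers. */
        if (posix_memalign(&buf, 512, 4096)) return 1;

        /* libaio calls return -errno on failure instead of setting errno. */
        if (io_setup(128, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

        io_prep_pread(&cb, fd, buf, 4096, 0);  /* 4 KiB from offset 0 */
        if (io_submit(ctx, 1, cbs) != 1) { fprintf(stderr, "io_submit failed\n"); return 1; }

        /* Block until the request completes; a real backend would collect
         * completions asynchronously. */
        if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) { fprintf(stderr, "io_getevents failed\n"); return 1; }
        printf("read %lld bytes\n", (long long)ev.res);

        io_destroy(ctx);
        close(fd);
        return 0;
    }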

Regards,

Anthony Liguori

> Here is a block diagram representing the test.
>
>     +---------------------------+
>     | Virtual Machine           |
>     |                           |
>     | Read/Write with O_DIRECT  |     +--------------------------+
>     |       process x 128       |     |       Host Machine       |
>     |             |             |     |                          |
>     |             V             |     | Read/Write with O_DIRECT |
>     |         /dev/vda1         |     |       process x 128      |
>     +-------------|-------------+     +-------------|------------+
>     +-------------V---------------------------------V------------+
>     |     /dev/mapper/ioband1      |     /dev/mapper/ioband2     |
>     |         80% weight           |         20% weight          |
>     |                              |                             |
>     |    Control I/O bandwidth according to the weights          |
>     +-------------|---------------------------------|------------+
>     +-------------V-------------+     +-------------V------------+
>     |        /dev/sda11         |     |         /dev/sda12       |
>     +---------------------------+     +--------------------------+
>
> Thanks,
> Ryo Tsuruta
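
As an aside, here is a minimal sketch of what one of the 128 O_DIRECT worker processes in the diagram might look like; this is an assumption for illustration, not Ryo's actual test program:

    /* One O_DIRECT reader: random 4 KiB reads for 60 seconds, then report
     * the I/O count.  Device path, block size, and range are made up. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *dev = argc > 1 ? argv[1] : "/dev/mapper/ioband1";
        void *buf;
        long ios = 0;
        time_t end = time(NULL) + 60;

        int fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        /* O_DIRECT needs aligned buffers and offsets. */
        if (posix_memalign(&buf, 512, 4096)) return 1;

        srand(getpid());
        while (time(NULL) < end) {
            off_t off = (off_t)(rand() % 100000) * 4096;  /* first ~400 MB */
            if (pread(fd, buf, 4096, off) != 4096)
                break;
            ios++;
        }
        printf("%s: %ld I/Os in 60 seconds\n", dev, ios);
        close(fd);
        return 0;
    }

Running 128 of these against each ioband device at the same time would reproduce the weighted-bandwidth test described above.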

