Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment

Hi Dong-Jae,

Dong-Jae Kang <baramsori72@xxxxxxxxx> wrote:
> Hi Ryo
> Sorry for the late reply.
> 
> Mr. Lee and I tested the total bandwidth as you requested; the results
> are in the attached file.

Thank you for your work.

> I know attaching files is not ideal on a mailing list,
> but I chose it for efficient and easy communication. Sorry :)
> 
> The Buffered-Device case may look strange: it shows a lot of variation
> in total bandwidth, so please treat it as a reference only.
> The other cases did not fluctuate as much as the Buffered-Device case.
> 
> If you have any other requests about the results,
> please let me know.

Could you try the test on the weight policy with the token count
increased to 1280? I suspect the throughput difference between running
with and without dm-ioband arises because the token count is a little
small relative to the disk speed.
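For reference, the token base for dm-ioband's weight policy is passed as a parameter in the device-mapper table. The sketch below follows the table format from the dm-ioband documentation, but the device path (/dev/sdb3), group weight (:100), and device name (ioband1) are placeholders for your setup; please double-check the argument order against the docs for the dm-ioband version under test:

```shell
# Create an ioband device over /dev/sdb3 using the weight policy,
# with the token base raised to 1280 (third-to-last argument).
# Requires root and the dm-ioband patch; device names are placeholders.
SIZE=$(blockdev --getsize /dev/sdb3)
echo "0 $SIZE ioband /dev/sdb3 1 0 0 none weight 1280 :100" \
    | dmsetup create ioband1
```

Removing the device afterwards with `dmsetup remove ioband1` returns the partition to direct use.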

> Additionally, I will try to send you the test results from the cgroup
> environment case by case.
> What do you think about that?

I would like to know the reason for the difference between the debug
patch and iostat in the previous test you did (total_bandwidth_result.xls).

Thanks,
Ryo Tsuruta

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
