Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment

Hi Ryo,
I attached a new file that includes the total I/O bandwidth of the evaluation system.
We tested the total bandwidth of the weight policy for I/O in the Dom0 and DomU
systems, measuring it with both the iostat tool and the dm-ioband debug patch
that I gave you several months ago.
Of course, the results in the prior report were measured with the dm-ioband debug patch.

As a result, the big difference in the prior report comes from the location
where we measured the I/O bandwidth:
iostat counts it at the application level, while the dm-ioband debug patch
counts it inside the dm-ioband controller.
I think the difference is related to the buffer cache.
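
For reference, here is a rough sketch (in Python) of how the average
per-partition write throughput can be derived from "iostat -k 1" samples.
The partition names (sdb2/sdb3), the "-p sdb" option and the sample count are
only assumptions for illustration; this is not the exact script we used, so
please adapt it to your environment.

#!/usr/bin/env python3
# Sketch: average per-partition write throughput from "iostat -k 1" samples.
# Assumptions: the test partitions are sdb2/sdb3 and sysstat's iostat is
# installed; "-p sdb" makes iostat report the individual partitions.

import subprocess

PARTITIONS = ("sdb2", "sdb3")   # partitions under test (assumption)
SAMPLES = 60                    # number of 1-second samples to average

def average_write_kbps(partitions=PARTITIONS, samples=SAMPLES):
    """Return {partition: average kB_wrtn/s} over `samples` seconds."""
    # "+ 1" because iostat's first report is the since-boot average,
    # which is skipped below.
    out = subprocess.run(
        ["iostat", "-k", "-p", "sdb", "1", str(samples + 1)],
        capture_output=True, text=True, check=True).stdout

    totals = {p: 0.0 for p in partitions}
    counts = {p: 0 for p in partitions}
    col = None          # index of the kB_wrtn/s column
    report = 0          # number of "Device" headers seen so far

    for line in out.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0].startswith("Device"):
            report += 1
            col = fields.index("kB_wrtn/s") if "kB_wrtn/s" in fields else 3
            continue
        # Skip the first report (statistics since boot).
        if report < 2 or col is None:
            continue
        if fields[0] in totals:
            totals[fields[0]] += float(fields[col])
            counts[fields[0]] += 1

    return {p: totals[p] / counts[p] if counts[p] else 0.0 for p in partitions}

if __name__ == "__main__":
    for part, kbps in average_write_kbps().items():
        print(f"{part}: {kbps:.0f} KB/s average write throughput")

I also think that running fio with direct=1 should make the application-side
numbers line up more closely with what the dm-ioband debug patch reports,
since the page cache is bypassed in that case.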

Thank you.
Have a nice weekend

2009/8/27 Dong-Jae Kang <baramsori72@xxxxxxxxx>

> Hi Ryo
>
> 2009/8/27 Ryo Tsuruta <ryov@xxxxxxxxxxxxx>
>
> Hi Dong-Jae,
>>
>> # I've added dm-devel to Cc:.
>>
>> Dong-Jae Kang <baramsori72@xxxxxxxxx> wrote:
>> > Hi Ryo
>> >
>> > I attached new test result file(ioband-partition-based-evaluation.xls)in
>> > this mail.
>>
>> Thanks for your great work.
>>
>> > This time, it is not a virtualization environment.
>> > I evaluated partition-based use cases before doing it in a virtualization
>> > environment, because I think the two cases are similar to each other.
>> >
>> > Detailed information about the evaluation can be found in the attached
>> > file.
>> >
>> > If you have any questions or comments after examining it,
>> > please give me your opinion.
>>
>> I would like to know the throughput without dm-ioband in your
>> environment, because the total throughput of the range-bw policy is
>> 8000KB/s, which means the device is capable of performing at over
>> 8000KB/s, yet the total throughput of the weight policy is lower than
>> that of the range-bw policy. In my environment, there is no significant
>> difference in average throughput with and without dm-ioband.
>> I ran fio in the way described in your result file. Here are the
>> results from my environment; the throughputs were calculated from
>> "iostat -k 1" outputs.
>>
>>            buffered write test
>>           Avg. throughput [KB/s]
>>        w/o ioband     w/ioband
>> sdb2         14485         5788
>> sdb3         12494        22295
>> total        26979        28030
>>
>
> OK, good comments.
> I omitted the total bandwidth of the evaluation system.
>
> I will reply to you about it tomorrow after I check and re-test it again.
>
>>
>> Thanks,
>> Ryo Tsuruta
>>
>
> Thank you for the comments.
>
>
> --
> Best Regards,
> Dong-Jae Kang
>



-- 
Best Regards,
Dong-Jae Kang

Attachment: total bandwidth result.xls
Description: MS-Excel spreadsheet

_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/containers
