Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment

Hi Ryo,
Sorry for the late reply.
 
Mr. Lee and I tested the total bandwidth as you requested; the results are in the attached file.
 
I know attaching files is not ideal on a mailing list,
but I chose it for easier and more efficient communication. Sorry :)
 
The Buffered-Device case may look somewhat strange: its total bandwidth varies a lot,
so please treat it as a reference only.
The other cases did not show such large fluctuations.
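
For reference, the figures in the attached file were measured with the iostat
tool, following your "iostat -k 1" method quoted below. The short Python
sketch below shows roughly how such per-device write averages can be
computed; it is only an illustration, and the device names (sdb2, sdb3) and
the log file name are placeholder assumptions, not our exact setup.

#!/usr/bin/env python
# Minimal sketch (not our exact script): average per-device write throughput
# from "iostat -k 1" output captured to a log file.
import sys

DEVICES = ("sdb2", "sdb3")   # placeholder partition names

def average_write_kbps(path, devices=DEVICES):
    sums = dict((d, 0.0) for d in devices)
    counts = dict((d, 0) for d in devices)
    seen = dict((d, False) for d in devices)
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Device lines look like:
            #   <name> <tps> <kB_read/s> <kB_wrtn/s> <kB_read> <kB_wrtn>
            if len(fields) >= 6 and fields[0] in devices:
                dev = fields[0]
                if not seen[dev]:
                    # iostat's first report is the average since boot; skip it.
                    seen[dev] = True
                    continue
                sums[dev] += float(fields[3])   # kB_wrtn/s column
                counts[dev] += 1
    return dict((d, sums[d] / counts[d] if counts[d] else 0.0)
                for d in devices)

if __name__ == "__main__":
    # Capture while the fio job runs, e.g.:  iostat -k 1 > iostat.log
    log = sys.argv[1] if len(sys.argv) > 1 else "iostat.log"
    averages = average_write_kbps(log)
    total = 0.0
    for dev in sorted(averages):
        print("%-6s %8.0f KB/s" % (dev, averages[dev]))
        total += averages[dev]
    print("%-6s %8.0f KB/s" % ("total", total))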
 
If you have any further requests about the results,
please let me know.
 
Additionally, I will try to report the test results for the cgroup environment to you case by case.
What do you think about that?
If you are busy these days, it is fine to postpone it.
Thank you.
 
 
2009/8/31 Ryo Tsuruta <ryov@xxxxxxxxxxxxx>
Hi Dong-Jae,

Thanks for testing.
Could you do the same test without dm-ioband? I would like to know
the throughput of your disk drive and the difference with and without
dm-ioband.

Thanks,
Ryo Tsuruta

Dong-Jae Kang <baramsori72@xxxxxxxxx> wrote:
> Hi Ryo
> I attached a new file that includes the total I/O bandwidth of the evaluation system.
> We tested the total bandwidth of the weight policy under I/O load in the Dom0
> and DomU systems; it was measured with the iostat tool and the dm-ioband
> debug patch that I gave you several months ago.
> Of course, the results in the prior report were measured with the dm-ioband debug patch.
>
> As a result, the big difference in the prior report comes from where the
> I/O bandwidth is measured:
> iostat counts it at the application level, while the dm-ioband debug patch
> counts it inside the dm-ioband controller.
> I think the difference is related to the buffer cache.
>
> Thank you.
> Have a nice weekend.
>
> 2009/8/27 Dong-Jae Kang <baramsori72@xxxxxxxxx>
>
> > Hi Ryo
> >
> > 2009/8/27 Ryo Tsuruta <ryov@xxxxxxxxxxxxx>
> >
> > Hi Dong-Jae,
> >>
> >> # I've added dm-devel to Cc:.
> >>
> >> Dong-Jae Kang <baramsori72@xxxxxxxxx> wrote:
> >> > Hi Ryo
> >> >
> >> > I attached a new test result file (ioband-partition-based-evaluation.xls)
> >> > to this mail.
> >>
> >> Thanks for your great work.
> >>
> >> > This time it is not a virtualization environment.
> >> > I evaluated partition-based use cases before doing so in a virtualization
> >> > environment, because I think the two cases are similar to each other.
> >> >
> >> > The detailed information about the evaluation can be found in the
> >> > attached file.
> >> >
> >> > If you have any questions or comments after examining it,
> >> > please give me your opinion.
> >>
> >> I would like to know the throughput without dm-ioband in your
> >> environment. The total throughput of the range-bw policy is
> >> 8000KB/s, which means the device is capable of more than 8000KB/s,
> >> yet the total throughput of the weight policy is lower than that of
> >> the range-bw policy. In my environment, there is no significant
> >> difference in average throughput with and without dm-ioband.
> >> I ran fio in the way described in your result file. Here are the
> >> results from my environment. The throughputs were calculated from
> >> "iostat -k 1" output.
> >>
> >>            buffered write test
> >>           Avg. throughput [KB/s]
> >>        w/o ioband     w/ioband
> >> sdb2         14485         5788
> >> sdb3         12494        22295
> >> total        26979        28030
> >>
> >
> > OK, good comments.
> > I omitted the total bandwidth of the evaluation system.
> >
> > I will reply to you about it tomorrow, after I check and test it again.
> >
> >>
> >> Thanks,
> >> Ryo Tsuruta
> >>
> >
> > Thank you for the comments.
> >
> >
> > --
> > Best Regards,
> > Dong-Jae Kang
> >
>
>
>
> --
> Best Regards,
> Dong-Jae Kang



--
Best Regards,
Dong-Jae Kang

Attachment: fio_test_with_without_dm-ioband_by_iostat.xls
Description: MS-Excel spreadsheet

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
