On 1 June 2011 05:12, Zhi Yong Wu <wuzhy@xxxxxxxxxxxxxxxxxx> wrote:
> On Tue, May 31, 2011 at 03:55:49PM -0400, Vivek Goyal wrote:
>> Date: Tue, 31 May 2011 15:55:49 -0400
>> From: Vivek Goyal <vgoyal@xxxxxxxxxx>
>> To: Zhi Yong Wu <wuzhy@xxxxxxxxxxxxxxxxxx>
>> Cc: kwolf@xxxxxxxxxx, aliguori@xxxxxxxxxx, stefanha@xxxxxxxxxxxxxxxxxx,
>>     kvm@xxxxxxxxxxxxxxx, guijianfeng@xxxxxxxxxxxxxx,
>>     qemu-devel@xxxxxxxxxx, wuzhy@xxxxxxxxxx,
>>     herbert@xxxxxxxxxxxxxxxxxxxx, luowenj@xxxxxxxxxx, zhanx@xxxxxxxxxx,
>>     zhaoyang@xxxxxxxxxx, llim@xxxxxxxxxx, raharper@xxxxxxxxxx
>> Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
>> User-Agent: Mutt/1.5.21 (2010-09-15)
>>
>> On Mon, May 30, 2011 at 01:09:23PM +0800, Zhi Yong Wu wrote:
>>
>> [..]
>>>     3.) How the users enable and play with it
>>>     The QEMU -drive option will be extended so that disk I/O limits can be specified on its command line, such as -drive [iops=xxx,][throughput=xxx] or -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx] etc. When such an argument is specified, the "disk I/O limits" feature is enabled for that drive.
>>
>> What does the throughput interface look like? Is it bytes per second or
>> something else?
> Hi, Vivek,
> It will be a value based on bytes per second.
>
>> Do we have read and write variants for throughput, as we have for iops?
> The QEMU code has two variants, "rd_bytes" and "wr_bytes", but we may need to express them as bytes per second.
>
>> If you have a bytes interface (as the kernel does), then "bps_rd" and
>> "bps_wr" might be good names for the throughput interface too.
> I agree with you, and can change them as you suggest.
>
Changing them this way is not going to be an improvement. While rd_bytes and wr_bytes lack a time-interval specification, bps_rd and bps_wr are ambiguous: is that bits or bytes? Sure, there could be a distinction by capitalization (bps vs. Bps), but that does not apply here, since QEMU arguments are all lowercase.
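For readers following the thread, the bytes-per-second limit being discussed is conventionally enforced with something like a token bucket. This is only an illustrative sketch of those semantics, not QEMU's actual implementation; the variable name bps_wr below merely echoes the option name proposed in the thread.

```python
class TokenBucket:
    """Sketch of a bytes-per-second limit: tokens are bytes, refilled over time."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # refill rate in bytes per second
        self.capacity = burst_bytes   # maximum burst size in bytes
        self.tokens = burst_bytes     # start with a full bucket
        self.last = 0.0               # timestamp of last refill, in seconds

    def allow(self, nbytes, now):
        # Refill according to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes     # request fits within the current budget
            return True
        return False                  # request would exceed the limit; delay it

# Example: a 1 MiB/s write limit with a 1 MiB burst allowance.
bps_wr = TokenBucket(rate_bps=1 << 20, burst_bytes=1 << 20)
```

Whatever the option is eventually named, the key point of the naming debate stands: the value fed into such a limiter must be unambiguously bytes (not bits) per second.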
Thanks
Michal