On 03/02/2016 02:10 PM, Vivek Goyal wrote:
> On Wed, Mar 02, 2016 at 09:59:13PM +0200, Nikolay Borisov wrote:
> We had a similar issue with IO priority: it did not work reliably with CFQ on the underlying device when dm devices were sitting on top. If we really want to give it a try, I guess we will have to put the submitter's cgroup info into the bio early, at bio creation time, for all kinds of IO. Not sure it is worth the effort.
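(For concreteness, a minimal sketch of the "tag the bio with the submitter's cgroup at creation time" idea. bio_associate_current() and generic_make_request() are the existing kernel helpers; the surrounding submit function is hypothetical, not actual dm code.)

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Hypothetical submission path: record the submitting task's
 * io_context and blkcg on the bio before dm clones and remaps it,
 * so CFQ on the underlying device can classify it correctly.
 * bio_associate_current() falls back to a stub without
 * CONFIG_BLK_CGROUP. */
static void example_submit(struct bio *bio)
{
	bio_associate_current(bio);
	generic_make_request(bio);
}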
>> As it stands, imagine that you have a hypervisor node running many VMs (or containers), each of which is assigned a separate logical volume (possibly thin-provisioned) as its rootfs.
>> Ideally we want the disk accesses by those VMs to be "fair" relative to each other, and we want to guarantee a certain amount of bandwidth for the host as well.
>> Without this sort of feature, how can we accomplish that?
> For the case of IO throttling, I think you should put the throttling rules on the dm device itself. That means that as long as the filesystem supports cgroups, you should be getting the right cgroup information for all kinds of IO, and throttling should work just fine.
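(A hedged userspace sketch of that configuration, via the standard cgroup-v1 blkio.throttle interface. The "vm1" group name and the 253:0 major:minor pair are assumptions -- check the real numbers with ls -l /dev/dm-0.)

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Limit writes for cgroup "vm1" on dm device 253:0 to 10 MB/s.
	 * The file takes lines of the form "<major>:<minor> <bytes/sec>". */
	const char *path =
		"/sys/fs/cgroup/blkio/vm1/blkio.throttle.write_bps_device";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	fprintf(f, "253:0 10485760\n");
	fclose(f);
	return EXIT_SUCCESS;
}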
IO throttling isn't all that useful, since it requires you to know your IO rate in advance, and it doesn't adapt as the number of competing entities changes, the way that weight-based schemes do.
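(By contrast, a sketch of the weight-based scheme: proportional shares via CFQ's blkio.weight, range 10-1000. The "vm1"/"vm2" group names are assumptions. With weights 600 and 300 the first group gets roughly two thirds of the bandwidth while both are active; add a third group and the shares rebalance automatically, with no fixed rate to pick in advance.)

#include <stdio.h>

/* Write a proportional weight into one blkio cgroup. */
static int set_weight(const char *group, int weight)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/fs/cgroup/blkio/%s/blkio.weight", group);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", weight);
	fclose(f);
	return 0;
}

int main(void)
{
	set_weight("vm1", 600);
	set_weight("vm2", 300);
	return 0;
}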
Chris