Re: [PATCH] block: transfer source bio's cgroup tags to clone via bio_associate_blkcg() (was: Re: blkio cgroups controller doesn't work with LVM?)

On Wednesday, March 2, 2016, Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
On Wed, Mar 02, 2016 at 09:59:13PM +0200, Nikolay Borisov wrote:
> On Wednesday, March 2, 2016, Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
>
> > On Wed, Mar 02, 2016 at 08:03:10PM +0200, Nikolay Borisov wrote:
> > > Thanks for the patch, I will likely have time to test this sometime next
> > > week.
> > > But just to be sure - the expected behavior would be that processes
> > > writing to dm-based devices would experience the fair-share
> > > scheduling of CFQ (provided that the physical devices that back those
> > > DM devices use CFQ), correct?
> >
> > Nikolay,
> >
> > I am not sure how well it will work with CFQ on the underlying device. It will
> > get the cgroup information right for buffered writes. But cgroup information
>
>
>  Right, what's your definition of buffered writes?

Writes which go through page cache.

> My mental model is that
> when a process submits a write request to a dm device, the bio is going to
> be put on a device workqueue which would then be serviced by a background
> worker thread and the submitter notified later. Do you refer to this whole
> gamut of operations as buffered writes?

No, once the bio is submitted to the dm device it could be from a buffered
write or a direct write.

>
> > for reads and direct writes will come from the submitter's context and if the dm
> > layer gets in between, then many times the submitter might be a worker
> > thread and the IO will be attributed to that worker's cgroup (the root cgroup).
>
>
> Be that as it may, provided that the worker thread is in the 'correct'
> cgroup, then the appropriate bandwidth policies should apply, no?

The worker thread will most likely be in the root cgroup. So if a worker thread
is submitting the bio, it will be attributed to the root cgroup.

We had a similar issue with IO priority and it did not work reliably with
CFQ on the underlying device when dm devices were sitting on top.

If we really want to give it a try, I guess we will have to put the submitter's
cgroup info in the bio at the time of bio creation, for all kinds of IO.
Not sure if it is worth the effort.
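
For reference, the approach named in the subject line works roughly as sketched below: when a stacking driver such as dm clones a bio, the source bio's blkcg association is carried over to the clone, so the underlying queue can attribute the IO to the original submitter's cgroup instead of the worker thread's (root) cgroup. This is only a sketch assuming the 4.x-era API where struct bio has a bi_css field under CONFIG_BLK_CGROUP and bio_associate_blkcg() exists; the helper name and placement are illustrative, not the exact patch:

#include <linux/bio.h>
#include <linux/blk-cgroup.h>

/*
 * Illustrative helper (not the exact patch): called while cloning a bio
 * in a stacking driver, it copies the source bio's blkcg association to
 * the clone so CFQ/blk-throttle on the underlying device sees the
 * original submitter's cgroup.
 */
static void clone_bio_blkcg(struct bio *clone, struct bio *bio_src)
{
#ifdef CONFIG_BLK_CGROUP
	if (bio_src->bi_css)
		bio_associate_blkcg(clone, bio_src->bi_css);
#endif
}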

For the case of IO throttling, I think you should put the throttling rules on
the dm device itself. That means, as long as the filesystem supports
cgroups, you should be getting the right cgroup information for all kinds of
IO and throttling should work just fine.
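
(A minimal user-space sketch of the throttle setup described above, assuming cgroup v1 with the blkio controller mounted at /sys/fs/cgroup/blkio, a cgroup named "workload1" and the dm device at major:minor 253:0 -- all illustrative values, not taken from this thread:)

#include <stdio.h>
#include <stdlib.h>

/*
 * Illustrative only: cap writes through a dm device at 10 MB/s by
 * writing "major:minor bytes_per_sec" into the workload's blkio cgroup.
 * The cgroup name "workload1" and 253:0 are made-up example values.
 */
int main(void)
{
	const char *path =
		"/sys/fs/cgroup/blkio/workload1/blkio.throttle.write_bps_device";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	fprintf(f, "253:0 10485760\n");	/* 10 MB/s write limit on the dm device */
	return fclose(f) ? EXIT_FAILURE : EXIT_SUCCESS;
}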

Throttling does work even now, but the use case I had in mind was proportional
distribution of IO. Imagine 50 or so dm devices hosting IO-intensive workloads. In
this situation, I'd be interested in each of them getting a proportional share of IO
based on the weights set in the blkcg controller for each workload's respective cgroup.
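
(As a concrete illustration of the proportional setup described above: per-cgroup weights on the blkio controller, cgroup v1, blkio.weight, valid range 10-1000. The cgroup names and weights below are made-up examples; the point of this thread is precisely that CFQ can only honour them if the bios reaching the underlying device still carry the right cgroup:)

#include <stdio.h>

/*
 * Illustrative only: give each workload's blkio cgroup a CFQ weight so
 * the underlying device divides bandwidth proportionally among them.
 * Names and weights are example values.
 */
int main(void)
{
	static const struct { const char *cgroup; int weight; } cfg[] = {
		{ "workload1", 800 },
		{ "workload2", 400 },
		{ "workload3", 100 },
	};
	char path[256];
	size_t i;

	for (i = 0; i < sizeof(cfg) / sizeof(cfg[0]); i++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/fs/cgroup/blkio/%s/blkio.weight", cfg[i].cgroup);
		f = fopen(path, "w");
		if (!f) {
			perror(path);
			continue;
		}
		fprintf(f, "%d\n", cfg[i].weight);
		fclose(f);
	}
	return 0;
}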

Thanks
Vivek
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
