On Tue, Jun 25, 2019 at 12:05:32PM +0200, Christoph Hellwig wrote:
> On Mon, Jun 24, 2019 at 08:25:27PM -0700, Darrick J. Wong wrote:
> > By the way, did all the things Dave complained about in last year's
> > attempt[1] to add cgroup writeback support get fixed?  IIRC someone
> > whose name I didn't recognise complained about log starvation due to
> > REQ_META bios being charged to the wrong cgroup and other misbehavior.
>
> As mentioned in the referenced thread, while the metadata throttling is
> an issue, it is an existing one and not one touched by the cgroup
> writeback support.  This patch just ensures that writeback takes the
> cgroup information from the inode instead of the current task.  The
> fact that blkcg should not even look at any cgroup information for
> REQ_META is something that should be fixed entirely in core cgroup
> code, and is orthogonal to how we pick the attached cgroup.

That may be, but I don't want to merge this patchset only to find out
I've unleashed Pandora's box of untested cgroupwb hell... I /think/
they fixed all those problems, but it didn't take all that long tracing
the blkg/blkcg object relationships for my brain to fall out. :/

[Oh well, I guess I'll try to turn all that on in my test vm and see if
its brain falls out overnight too...]

> > Also, I remember that in the earlier 2017 discussion[2] we talked about
> > an fstest to check that writeback throttling actually capped bandwidth
> > usage correctly.  I haven't been following cgroupwb development since
> > 2017 -- does it not ratelimit bandwidth now, or is there some test for
> > that?  The only test I could find was shared/011, which only tests the
> > accounting, not bandwidth.
>
> As far as I can tell cfq could limit bandwidth, but cfq is gone now.
> Either way all that is hidden way below us.

<shrug> ok?  I mean, if bandwidth limits died as a feature it'd be nice
to know that outright. :)

--D
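
[Aside, for anyone following along at home: the fs-side opt-in Christoph
is describing is pretty small.  Roughly -- and this is only a sketch of
the generic mechanism with made-up foofs_* names, not the actual patch
under discussion -- it looks like:

	#include <linux/fs.h>
	#include <linux/writeback.h>
	#include <linux/bio.h>

	/*
	 * Advertise cgroup writeback support so the VM attaches a
	 * bdi_writeback (and hence a blkcg) to each dirty inode.
	 */
	static int foofs_fill_super(struct super_block *sb, void *data,
				    int silent)
	{
		sb->s_iflags |= SB_I_CGROUPWB;
		/* ... rest of the normal mount setup ... */
		return 0;
	}

	/*
	 * When building a writeback bio, charge it to the inode's
	 * attached cgroup (via the wbc) instead of current's cgroup,
	 * and account the bytes so foreign-inode detection still works.
	 */
	static void foofs_submit_wb_bio(struct writeback_control *wbc,
					struct bio *bio, struct page *page)
	{
		wbc_init_bio(wbc, bio);
		wbc_account_io(wbc, page, PAGE_SIZE);
		submit_bio(bio);
	}

i.e. the association comes from the inode's attached wb, which is what
makes the I/O get charged to whoever dirtied the inode rather than to
whichever task happens to be running the flusher.]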