On 06/02/2014 11:42 AM, Tejun Heo wrote:
> Hello, Jens.
>
> On Mon, Jun 02, 2014 at 11:32:05AM -0600, Jens Axboe wrote:
>> For things like blkcg, I agree, it should be able to be common code
>> and reusable. But there's a need for scheduling beyond that, for
>> people that don't use control groups (ie most...). And it'd be hard
>> to retrofit cfq into blk-mq without rewriting it. I don't believe we
>> need anything this fancy for blk-mq, hopefully. At least having
>> simple deadline scheduling would be Good Enough for the foreseeable
>> future.
>
> Heh, looks like we're miscommunicating. I don't think anything with
> the level of complexity of cfq is realistic for high-iops devices. It
> has already become a liability for SATA SSDs, after all. My suggestion
> is that, as hierarchical scheduling tends to be a logical extension of
> flat scheduling, it probably would make sense to implement both
> scheduling logics in the same framework, as in the cpu scheduler or
> (to a lesser extent) cfq. So, a new blk-mq scheduler which can work in
> hierarchical mode if blkcg is in active use.

But blk-mq will potentially drive anything, so a more expensive
scheduling variant might not be out of the question, if it ever makes
sense to do, of course. At least until there's no more rotating stuff
out there :-). It's not a priority for me at all yet, though. As long as
we have coexisting IO paths, it'd be trivial to select the needed one
based on the device characteristics.

> One part I was wondering about is whether we'd need to continue the
> modular multiple implementation mechanism. For rotating disks, for
> various reasons including some historical ones, we ended up with
> multiple ioscheds and somewhat awkwardly layered blkcg
> implementations. Given that the expected characteristics of blk-mq
> devices are more consistent, it could be reasonable to stick with a
> single iops and/or bandwidth scheme.

I hope not to do that. I just want something sane and simple (like a
deadline scheduler), nothing more.

-- 
Jens Axboe
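To make the "select the needed one based on the device characteristics"
point concrete, here is a minimal user-space sketch of that decision.
The device_caps struct and the policy names are made up for
illustration only; an actual kernel implementation would key off the
queue's rotational hint rather than anything shown here.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical device descriptor -- a stand-in for real queue flags. */
struct device_caps {
    bool rotational;    /* spinning media? */
};

/*
 * Pick an IO scheduling policy from device characteristics alone.
 * The policy names are placeholders, not real elevator names.
 */
static const char *pick_iosched(const struct device_caps *caps)
{
    if (caps->rotational)
        return "full-featured (cfq-style)";
    return "simple deadline";
}

int main(void)
{
    struct device_caps hdd = { .rotational = true };
    struct device_caps ssd = { .rotational = false };

    printf("hdd -> %s\n", pick_iosched(&hdd));
    printf("ssd -> %s\n", pick_iosched(&ssd));
    return 0;
}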
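And for the "sane and simple, like a deadline scheduler" part, a
minimal sketch of the core idea, again in user space with made-up names
and constants rather than anything from blk-mq: each request carries an
expiry time; dispatch normally prefers sector order, but the oldest
request jumps the queue once it has waited past its deadline, which
caps starvation.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NR_REQS 4

struct request {
    unsigned long long sector;
    unsigned long expire_ms;    /* absolute time this req must go by */
    bool done;
};

/* Pick the next request: FIFO head if expired, else lowest sector. */
static struct request *pick_next(struct request *rq, size_t n,
                                 unsigned long now_ms)
{
    struct request *fifo_head = NULL, *best = NULL;
    size_t i;

    for (i = 0; i < n; i++) {
        if (rq[i].done)
            continue;
        if (!fifo_head)
            fifo_head = &rq[i];    /* array order == arrival order */
        if (!best || rq[i].sector < best->sector)
            best = &rq[i];
    }
    if (fifo_head && now_ms >= fifo_head->expire_ms)
        return fifo_head;          /* starvation guard */
    return best;
}

int main(void)
{
    struct request rq[NR_REQS] = {
        { .sector = 900, .expire_ms = 100 },
        { .sector = 100, .expire_ms = 200 },
        { .sector = 500, .expire_ms = 300 },
        { .sector = 300, .expire_ms = 400 },
    };
    unsigned long now = 0;
    struct request *next;

    while ((next = pick_next(rq, NR_REQS, now))) {
        printf("t=%lums dispatch sector %llu\n", now, next->sector);
        next->done = true;
        now += 150;    /* pretend each request takes 150ms */
    }
    return 0;
}

The appeal of this shape is that the common path is one cheap
comparison; the deadline check only changes the answer when something
has genuinely waited too long.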