On 04/05/2012 12:50 AM, Vivek Goyal wrote:
> On Thu, Apr 05, 2012 at 12:45:05AM +0800, Tao Ma wrote:
>
> [..]
>>> In iops_mode(), expire each cfqq after dispatch of 1 or a bunch of
>>> requests and you should get the same behavior (with slice_idle=0 and
>>> group_idle=0). So why write a new scheduler?
>> really? How could we configure cfq to work like this? Or do you mean
>> we should change the code for it?
>
> You can just put a few lines of code in to expire the queue after 1-2
> requests have been dispatched from it. Then run your workload with
> slice_idle=0 and group_idle=0 and see what happens.

oh, yes, I can do this to see whether it helps latency, but it is a hack
and doesn't work with the cgroup proportions...

> I don't even know what your workload is.

Sorry, I'm not allowed to say more about it.

>>> Only thing is that with the above, the current code will provide iops
>>> fairness only for groups. We should be able to tweak queue scheduling
>>> to support iops fairness also.
>> OK, as I have said in another e-mail, my other concern is the
>> complexity. It will make cfq too complicated. I just checked the
>> source code of Shaohua's original patch: the fiops scheduler is only
>> ~700 lines, so with cgroup support added it would be ~1000 lines, I
>> guess. Currently cfq-iosched.c is around ~4000 lines, even after
>> Tejun's cleanup of the io context...
>
> I think a large chunk of that iops scheduler code will be borrowed from
> CFQ code. All the cgroup logic, queue creation logic, group scheduling
> logic etc. And that's the reason I was still exploring the possibility
> of having a common code base.

Yeah, actually I was thinking of abstracting out the generic logic, but
it seems rather hard. Maybe we can try to unify the code later?

Thanks,
Tao
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/containers