2012/4/4 Vivek Goyal <vgoyal@xxxxxxxxxx>:
> On Wed, Apr 04, 2012 at 05:35:49AM -0700, Shaohua Li wrote:
>
> [..]
>> >> How are iops_weight and switching different from CFQ group scheduling
>> >> logic? I think shaohua was talking of using similar logic. What would
>> >> you do fundamentally different so that without idling you will get
>> >> service differentiation?
>> > I am thinking of differentiating groups by iops, so if there are
>> > 3 groups (with weights 100, 200, 300) we can let them submit 1 io,
>> > 2 io and 3 io in a round-robin way. With an Intel SSD, every io can be
>> > finished within 100us, so the maximum latency for one io is about 600us,
>> > still less than 1ms. But with cfq, if all the cgroups are busy, we have
>> > to switch between these groups at millisecond granularity, which means
>> > the maximum latency will be 6ms. That is terrible for some applications
>> > now that they run on SSDs.
>> Yes, with iops based scheduling, we do queue switching for every request.
>> Doing the same thing between groups is quite straightforward. The only
>> issue I found is that this will introduce more process context switches.
>> That isn't a big issue for io bound applications, but it depends. It cuts
>> latency a lot, which I guess is more important for web 2.0 applications.
>
> In iops_mode(), expire each cfqq after dispatching one or a bunch of
> requests and you should get the same behavior (with slice_idle=0 and
> group_idle=0). So why write a new scheduler?
>
> The only thing is that with the above, the current code will provide iops
> fairness only for groups. We should be able to tweak queue scheduling to
> support iops fairness also.

Agreed, we can tweak CFQ to support iops fairness, since the two are
conceptually the same. The question is whether doing so turns into a mess:
CFQ is already quite complicated. In iops mode a lot of its code isn't
required, such as idling, queue merging, and thinktime/seek detection,
because the scheduler only targets SSDs. With the recent iocontext
cleanup, the iops scheduler code is actually quite short.
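
To illustrate the weighted round-robin dispatch described above, here is a
minimal userspace sketch. It is not the actual CFQ or iops scheduler code;
the io_group structure, BASE_WEIGHT and dispatch_round() are illustrative
names I made up. Each round a group gets a dispatch quota proportional to
its weight, so with weights 100/200/300 the groups issue 1, 2 and 3
requests per round, and no request waits longer than one round (roughly
600us on an SSD that completes an I/O in ~100us):

/*
 * Sketch of weight-proportional round-robin dispatch between groups.
 * Purely illustrative; request queues are modeled as simple counters.
 */
#include <stdio.h>

#define BASE_WEIGHT 100   /* smallest weight unit -> 1 request per round */

struct io_group {
	const char *name;
	int weight;       /* cgroup weight, e.g. 100/200/300 */
	int pending;      /* queued requests waiting for dispatch */
};

/* Run one round-robin pass; returns the number of requests issued. */
static int dispatch_round(struct io_group *grps, int ngrps)
{
	int issued = 0;

	for (int i = 0; i < ngrps; i++) {
		/* Quota for this round is proportional to the weight. */
		int quota = grps[i].weight / BASE_WEIGHT;

		while (quota-- > 0 && grps[i].pending > 0) {
			grps[i].pending--;      /* "submit" one io */
			issued++;
			printf("%s dispatches 1 io (%d left)\n",
			       grps[i].name, grps[i].pending);
		}
	}
	return issued;
}

int main(void)
{
	struct io_group grps[] = {
		{ "grp_a", 100, 10 },
		{ "grp_b", 200, 10 },
		{ "grp_c", 300, 10 },
	};

	/* Keep running rounds until every group's queue is drained. */
	while (dispatch_round(grps, 3) > 0)
		;
	return 0;
}

The point of the sketch is that the per-round quota replaces time slices
entirely, which is why none of CFQ's idling or thinktime machinery is
needed in iops mode.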