On Tue, Feb 28, 2012 at 10:53:59PM +0100, Peter Zijlstra wrote:
> On Tue, 2012-02-28 at 16:35 -0500, Vivek Goyal wrote:
> > Yes this is how scheduler does to handle hierarchy. Treat task and group
> > at same level.
> > ...
> > Whether it is a good thing or bad thing, I don't know.
>
> That's IMO what the cgroupfs interface provides for, if you do anything
> different there's this shadow group that contains the tasks for which
> you then have to provide extra parameter control.
>
> Furthermore, by treating tasks and groups at the same level you can
> create the extra group, but you can't do the reverse. So its the more
> versatile solution as well.

Agreed that it is more versatile, and one can move all the tasks into a
new group to achieve what a shadow group would do (sketch 1 at the end of
this mail). The only question is what a good default is.

If we are thinking of dividing resources in terms of percentages and
writing a user-space tool, then in the default model we just don't know
what the percentage is. Maybe it is a dynamically varying percentage and
should be shown accordingly. Or, if the idea of a minimum proportional
bandwidth share is more natural, then we will have to change user space
and things like systemd to not run any tasks in /. A user-space tool
could then walk the cgroup hierarchy, calculate the minimum percentage
share of each group, and display it (sketch 2).

> > I think previous
> > design was allocating a group for every user. I guess, in that case we
> > will have fixed % share of each user (until and unless users are created/
> > removed).
>
> Not even, it depended on if the user had anything runnable or not. It
> was very much like the current cgroup stuff if you create a cgroup for
> each user and stick the tasks in.
>
> The cpu-cgroup stuff is purely runnable based, so every wakeup/sleep
> changes the entire weight distribution, yay! :-)

:-). That's fine. If a group is not using its bandwidth because it has no
runnable task, then the other groups get more cpu. I thought that is the
definition of proportional sharing (sketch 3).

Thanks
Vivek
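
P.S. Three rough Python sketches (untested, illustration only) to make
the above concrete. I am assuming a v1-style cpu controller mounted at
/sys/fs/cgroup/cpu; the group and file names below are mine, and 1024 is
the usual nice-0 default weight.

Sketch 1: moving every task out of / into one "shadow" group, which is
what the shadow-group default would look like built on top of the
current interface:

import os

ROOT = "/sys/fs/cgroup/cpu"             # assumed mount point
SHADOW = os.path.join(ROOT, "shadow")   # hypothetical group name

if not os.path.isdir(SHADOW):
    os.mkdir(SHADOW)

with open(os.path.join(ROOT, "tasks")) as f:
    pids = f.read().split()

for pid in pids:
    try:
        # the tasks file accepts one pid per write
        with open(os.path.join(SHADOW, "tasks"), "w") as f:
            f.write(pid)
    except (IOError, OSError):
        pass  # exited tasks and unmovable kernel threads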
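
Sketch 2: the user-space tool that walks up the hierarchy and computes a
group's minimum share. At each level the group's worst case is its
cpu.shares divided by the shares of all sibling groups plus one task
weight for every task running directly in the parent:

import os

ROOT = "/sys/fs/cgroup/cpu"  # assumed mount point
TASK_WEIGHT = 1024           # assumed nice-0 weight per task

def shares(path):
    with open(os.path.join(path, "cpu.shares")) as f:
        return int(f.read())

def ntasks(path):
    with open(os.path.join(path, "tasks")) as f:
        return len(f.readlines())

def min_share(path):
    frac = 1.0
    while os.path.abspath(path) != os.path.abspath(ROOT):
        parent = os.path.dirname(path)
        # everything competing at this level: sibling groups + tasks
        total = ntasks(parent) * TASK_WEIGHT
        for d in os.listdir(parent):
            child = os.path.join(parent, d)
            if os.path.isdir(child):
                total += shares(child)
        frac *= float(shares(path)) / total
        path = parent
    return frac

# e.g. print(min_share("/sys/fs/cgroup/cpu/mygroup") * 100)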
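
Sketch 3: why the distribution is "purely runnable based". A group's
instantaneous share is its weight over the sum of runnable weights only,
so a sleeper's bandwidth flows to whoever is left (weights here are made
up):

def share_now(weights, runnable):
    total = sum(w for g, w in weights.items() if g in runnable)
    if not total:
        return dict((g, 0.0) for g in weights)
    return dict((g, float(w) / total if g in runnable else 0.0)
                for g, w in weights.items())

w = {"A": 1024, "B": 2048}
print(share_now(w, {"A", "B"}))  # A: 1/3, B: 2/3
print(share_now(w, {"A"}))       # B sleeps: A gets the whole cpu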