On Wed 01-06-22 06:43:27, Vasily Averin wrote:
[...]
> However, it isn't critical for OpenVz. Our kernel does not allow
> changing cgroup.subgroups_limit from inside containers.

What is the semantic of this limit?

> CT-901 /# cat /sys/fs/cgroup/memory/cgroup.subgroups_limit
> 512
> CT-901 /# echo 3333 > /sys/fs/cgroup/memory/cgroup.subgroups_limit
> -bash: echo: write error: Operation not permitted
> CT-901 /# echo 333 > /sys/fs/cgroup/memory/cgroup.subgroups_limit
> -bash: echo: write error: Operation not permitted
>
> I doubt this way can be accepted upstream, however for OpenVz
> something like this is mandatory because it is much better
> than nothing.
>
> The number can be adjusted by the host admin. The current default limit
> looks too small to me, however it is not difficult to increase it
> to a reasonable 10,000.
>
> My experiments show that ~10000 cgroups consume 0.5 GB of memory on a
> 4-CPU VM. On "big irons" it can easily grow to several GB. This is too
> much to leave unaccounted.

Too many cgroups can certainly have a high memory footprint. I guess
this is quite clear. The question is whether trying to limit them by
their memory footprint is really the right way to go. I would be
especially worried about smaller machines, where the smaller per-cgroup
footprint would allow the id space to be depleted faster.

Maybe we need some sort of limit on the number of cgroups in a subtree
so that any potential runaway can be prevented regardless of the
cgroups' memory footprint. One potentially big problem with that is
that cgroups can live quite long after being offlined (e.g. memcg), so
I can imagine such a limit triggering easily.
-- 
Michal Hocko
SUSE Labs
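[A quick back-of-the-envelope check of the figures quoted above (0.5 GB
for ~10000 cgroups); the per-cgroup estimate is derived here, not stated
in the original mail:]

```python
# Rough per-cgroup footprint implied by the reported numbers:
# 0.5 GB of memory spread over ~10000 cgroups.
total_bytes = 0.5 * 1024**3   # 0.5 GB
num_cgroups = 10_000

per_cgroup_kb = total_bytes / num_cgroups / 1024
print(f"~{per_cgroup_kb:.0f} KB per cgroup")  # ~52 KB per cgroup
```

At roughly 50 KB per cgroup, a machine with tens of gigabytes of RAM
could host hundreds of thousands of cgroups before the memory footprint
alone became a problem, which is why a memory-based limit alone does not
protect the 16-bit-ish id spaces on big machines.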
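[For comparison, cgroup v2 already exposes per-subtree count limits of
the kind suggested above, via the cgroup.max.descendants and
cgroup.max.depth control files. A sketch, assuming a cgroup v2 mount at
/sys/fs/cgroup and root privileges; the "ct901" cgroup name is made up
for illustration:]

```shell
# Create a delegated subtree and cap how many descendant cgroups it
# may contain, regardless of their memory footprint.
mkdir /sys/fs/cgroup/ct901
echo 512 > /sys/fs/cgroup/ct901/cgroup.max.descendants
cat /sys/fs/cgroup/ct901/cgroup.max.descendants
# Once 512 descendants exist, further mkdir calls anywhere in the
# subtree fail with EAGAIN.
```

Note that offlined-but-not-freed cgroups (the memcg case mentioned
above) still count as live for such limits until they are actually
released, which is exactly the premature-trigger concern raised in the
mail.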