Serge E. Hallyn wrote:
> Quoting Daniel Lezcano (daniel.lezcano@xxxxxxx):
>> Krzysztof Taraszka wrote:
>>> Okay. I made a few tests and these two ways work:
>>>
>>> First way:
>>> =======
>>> lxc, smack enabled, policy loaded, cgroup not labeled.
>>>
>>> a) start container
>>> b) mount cgroup inside container
>>> c) mount --bind /cgroup/foo/memory.meminfo /proc/meminfo
>>> d) secure /cgroup on the host (ie: attr -S -s SMACK64 -V host /cgroup).
>>>
>>> This step can be done inside the lxc tools ;)
>>>
>>> Second way:
>>> ==========
>>> lxc, smack enabled, policy loaded, cgroup not labeled.
>>>
>>> a) do not label the whole /cgroup directory (DO NOT DO: attr -S -s SMACK64 -V
>>> host /cgroup). Label dedicated files only (for example: /cgroup/cpuset.cpus,
>>> /cgroup/vs1/cpuset.cpus, etc.). Do not label the /cgroup/vs1 directory. Label
>>> only /cgroup/vs1/memory.meminfo with the vs1 label. Label all other files with
>>> the host label so they cannot be read.
>>> b) start container
>>> c) mount cgroup inside container
>>> d) mount --bind /cgroup/foo/memory.meminfo /proc/meminfo
>>>
>>> Steps b, c and d can be done inside the lxc tools. Step a can't; it is based
>>> on the admin policy.
>>>
>>> I think the first solution is more automatic and can be done by the lxc
>>> tools (maybe a command-line switch?). I can prepare a patch for that.
>>>
>> I do not know smack; what does smack do here? Will this solution prevent the
>> container from overwriting /proc/meminfo by remounting /proc?
>>
> Right, in the first way he is labeling the whole cgroupfs with a label
> which prevents the container from mounting it. In the second way,
> the specific files are labeled.
>
Ah, got it! :)

Kamezawa-san's idea of using a fuse proc may be a good one in this case, since it would let us handle all of the container-specific information in /proc at once. For example, besides /proc/meminfo there is /proc/cpuinfo.
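The "first way" above can be sketched as a small script. This is only a sketch under the thread's assumptions: a SMACK-enabled kernel with the policy loaded, the proposed memory.meminfo cgroup file available, and the thread's example names ("foo" for the container, /cgroup as the mount point, "host" as the label). The root/existence guard is added here so the privileged steps are only attempted where they can actually succeed.

```shell
#!/bin/sh
# Hypothetical sketch of the "first way" from the thread.  Steps a and b
# (starting the container and mounting cgroupfs inside it) are omitted,
# as they depend on the container setup.

CG=/cgroup
NAME=foo

if [ "$(id -u)" -eq 0 ] && [ -f "$CG/$NAME/memory.meminfo" ]; then
    # c) shadow /proc/meminfo with the per-container cgroup view
    mount --bind "$CG/$NAME/memory.meminfo" /proc/meminfo
    # d) label the whole cgroupfs so the container cannot (re)mount it
    attr -S -s SMACK64 -V host "$CG"
    status="applied"
else
    status="skipped: needs root and $CG/$NAME/memory.meminfo"
fi
echo "$status"
```

The point of step d is exactly what Serge describes: the "host" label on the cgroupfs keeps the container from mounting it again and undoing the bind mount.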
If you restrict a container to a subset of the CPUs with cpuset and then look at /proc/cpuinfo, you still see all the CPUs. That is not a big problem until a compute application reads this file, forks one process per CPU, and sets the affinity of each process to its own CPU ... AFAIR, this is what HPC applications do.
_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/containers
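The mismatch described above is easy to demonstrate: /proc/cpuinfo always lists every online CPU, while `nproc` from GNU coreutils honors the task's CPU affinity (sched_getaffinity), so the two disagree inside a restricted cpuset. This sketch assumes the x86-style /proc/cpuinfo format with one "processor" line per CPU.

```shell
#!/bin/sh
# /proc/cpuinfo is not cpuset-aware: it lists every online CPU, while
# nproc reports only the CPUs the current task may actually run on.
# Inside a cpuset restricted to, say, 2 of 8 CPUs, all_cpus stays 8
# while usable_cpus drops to 2 -- exactly what misleads an HPC app that
# forks one worker per /proc/cpuinfo entry.
all_cpus=$(grep -c '^processor' /proc/cpuinfo)
usable_cpus=$(nproc)
echo "cpuinfo sees $all_cpus CPUs, affinity allows $usable_cpus"
```

Running the same two commands under `taskset -c 0` shows the divergence on any multi-CPU machine: `nproc` drops to 1 while the /proc/cpuinfo count is unchanged.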