Krzysztof Taraszka wrote:
> 2009/8/24 Daniel Lezcano <daniel.lezcano@xxxxxxx>
>
>> Krzysztof Taraszka wrote:
>>
>>> 2009/8/23 Daniel Lezcano <daniel.lezcano@xxxxxxx>
>>>
>>> (...)
>>>
>>>> With the lxc tools I did:
>>>>
>>>>   lxc-execute -n foo /bin/bash
>>>>   echo 268435456 > /cgroup/foo/memory.limit_in_bytes
>>>>   mount --bind /cgroup/foo/memory.meminfo /proc/meminfo
>>>>   for i in $(seq 1 100); do sleep 3600 & done
>>>>
>>> (...)
>>>
>>>> :)
>>>>
>>> hmmm... I think that access to the cgroup inside the container is very
>>> risky, because I am able to manage, for example, the memory resources
>>> (what if I am not the host owner and... I can give myself all available
>>> memory resources via an insecurely mounted /cgroup inside the
>>> container...).
>>> I think that /proc/meminfo should be passed to the container in another
>>> way, but that is a topic for another thread.
>>>
>> It is not a problem. I did it this way because it is easy to test, but in
>> a real use case the memory limit is set up by the lxc configuration file
>> and the cgroup directory is no longer accessible from the container.
>>
>
> So... there will be another, more secure method for giving the container a
> /proc/meminfo that reflects its limits, right?

Same method. The lxc tools can be configured with an fstab file to add more
mount points; furthermore, if memory.meminfo is available, I will add code to
the lxc tools to mount it automatically onto /proc/meminfo.
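
As an illustration, the limit from the example above could live in the
container configuration instead of being echoed into the cgroup by hand.
A minimal sketch, assuming a container named foo and the lxc.cgroup.* /
lxc.mount keys of the current lxc tools (the file path is hypothetical):

  # /etc/lxc/foo/config (hypothetical path)
  lxc.utsname = foo
  # same 256 MB limit that was echoed into memory.limit_in_bytes above
  lxc.cgroup.memory.limit_in_bytes = 268435456
  # additional mount points are read from a separate fstab file
  lxc.mount = /etc/lxc/foo/fstab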
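
The fstab file referenced there would then carry the bind mount that was done
by hand above. A sketch, assuming the cgroup hierarchy is mounted on /cgroup
on the host and that the target resolves to the container's /proc/meminfo:

  # /etc/lxc/foo/fstab (hypothetical path)
  /cgroup/foo/memory.meminfo /proc/meminfo none bind 0 0

With that in place, the container sees a /proc/meminfo generated from its own
memory cgroup rather than the host's values, and /cgroup itself never has to
be exposed inside the container.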