Re: name=systemd cgroup mounts/hierarchy

Michal,

> What exact problem do you see?
I see the kubelet crashing with the error: «Failed to start ContainerManager failed to initialize top level QOS containers: root container [kubepods] doesn't exist»
details: https://github.com/kubernetes/kubernetes/issues/95488


> What was 'self'?
‘self’ was the kubelet process, running on a bare-metal node as a systemd service.
I can see the same two mounts of the named systemd hierarchy from a shell on the same node, simply by `$ cat /proc/self/mountinfo`
I think kubelet is running in the «main» mount namespace, which has the weird named systemd mount.
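For reference, the duplicate mounts can be spotted by filtering mountinfo for the `name=systemd` super option. A small sketch with sample lines (the two entries below are illustrative, not copied from the real node):

```shell
# A named cgroup v1 hierarchy appears in /proc/self/mountinfo with a
# "name=systemd" super option; more than one such line means the same
# hierarchy is mounted in more than one place.
# Sample mountinfo content (illustrative):
mountinfo='25 18 0:21 / /sys/fs/cgroup/systemd rw shared:9 - cgroup cgroup rw,name=systemd
700 25 0:21 /kubepods /sys/fs/cgroup/systemd/kubepods rw shared:9 - cgroup cgroup rw,name=systemd'

# Count mounts of the named systemd hierarchy; a count above 1 is the
# situation discussed in this thread. On a live node you would run:
#   grep -c 'name=systemd' /proc/self/mountinfo
count=$(printf '%s\n' "$mountinfo" | grep -c 'name=systemd')
echo "$count"
```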
 
> bind mount of cgroup subtree into a container done by a container runtime
Does it mean there was a container which somehow had access to the main mount namespace and mounted the named systemd hierarchy inside itself? (Probably it was running systemd inside.)

I would like to reproduce such a weird mount to understand the full situation and make sure I can avoid it in the future.
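One way to reproduce a second mount of the named hierarchy is to bind-mount the existing one to another path, which is roughly what a container runtime does when wiring cgroups into a container. A minimal sketch, assuming a cgroup v1 host with the `name=systemd` hierarchy present; `/tmp/systemd-copy` is an arbitrary path chosen for the demo, and the work happens in a private mount namespace so the host mount table is untouched:

```shell
#!/bin/sh
# Sketch: create a second mount of the name=systemd hierarchy.
# Requires cgroup v1 with the named systemd hierarchy and permission to
# create user namespaces; prints a note and exits cleanly otherwise.
unshare --mount --map-root-user sh -c '
  mkdir -p /tmp/systemd-copy   # arbitrary demo path, not from the thread
  if mount --bind /sys/fs/cgroup/systemd /tmp/systemd-copy 2>/dev/null; then
    # Two mountinfo entries now reference the same named hierarchy:
    grep "name=systemd" /proc/self/mountinfo
  else
    echo "name=systemd hierarchy not mountable here (cgroup v2 host?)"
  fi
' 2>/dev/null || echo "user namespaces unavailable on this kernel"
status="demo finished"
echo "$status"
```

A runtime can equally mount the named hierarchy a second time directly with `mount -t cgroup -o none,name=systemd cgroup <dir>`, which produces the same kind of duplicate entry.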

Friday, November 13, 2020 2:34 PM +03:00 from Michal Koutný <mkoutny@xxxxxxxx>:
 
Hello.

On Thu, Nov 12, 2020 at 08:05:34PM +0300, Andrei Enshin <b1os@xxxxx> wrote:
> There are a few nodes that, after the k8s update, started to have
> (maybe it was there before) a problem with the following mount:
What exact problem do you see?

> It was taken from /proc/self/mountinfo
What was 'self'?

> May I ask, does systemd mount some hierarchies like this on the fly,
> and if yes, what is the logic behind it?
systemd mounts the cgroup hierarchies at boot. What you see is likely a
bind mount of a cgroup subtree into a container, done by a container
runtime.

Michal
 
 

---

Best Regards,
Andrei Enshin

 
_______________________________________________
systemd-devel mailing list
systemd-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
