On 7/11/23 01:05, Robert Vasek wrote:
Dear autofs community,

We run an instance of the automount daemon inside a container (as part of a storage plugin in Kubernetes). The autofs mount root is shared between different containers and must survive restarts of the daemon.
That sounds unusual; please tell me more about how this works. My thought was always that there are two ways one would use autofs in a container. One is mapping an indirect mount root (from the init mount namespace) into the container as a volume, thereby using the daemon present in the init namespace. Yes, this has had an expire problem for a long time, but that will change (soon, I hope). The second is to run an instance of the daemon completely independently within the container. But this sounds like a combination of the two, which is something I hadn't considered.
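(For readers unfamiliar with the first pattern, here is a minimal sketch of what mapping an indirect mount root into a container amounts to at the mount(2) level. The map path /misc and the container rootfs path are illustrative assumptions; a real container runtime would set this up itself, typically as a recursive bind with suitable mount propagation.)

```c
/*
 * Sketch only: bind an autofs indirect mount root from the init
 * mount namespace into a container's filesystem tree. Both paths
 * below are hypothetical. Must run as root.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/*
	 * Lookups under the bind target still trigger the automount
	 * daemon running in the init namespace, so the container gets
	 * on-demand mounts without running a daemon of its own.
	 */
	if (mount("/misc", "/var/lib/containers/rootfs/misc",
		  NULL, MS_BIND, NULL) == -1) {
		perror("mount");
		return 1;
	}
	return 0;
}
```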
The problem is that when the daemon exits, it tries to clean up all its mounts, including the autofs mount root, so there is nothing to reconnect to. At the moment we're getting around the issue by sending the daemon a SIGKILL when its container exits, which skips the mount cleanup and leaves the mount root available for reconnect when the container comes back.
Yes, it does. My preferred configure setup is to leave mounts that are in use mounted at exit, but that's not what you need ... While the SIGKILL usage won't change, I agree it would be better to be able to tell automount to simply leave everything mounted at exit. You might need to send a HUP signal at container start in case of map updates, but indirect mounts should detect map changes anyway.
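(As an aside, a minimal sketch of the signalling discussed above, assuming the daemon's pid is available from a pidfile whose path is hypothetical: SIGKILL at container stop so the daemon exits without unmounting anything, and SIGHUP at container start to prompt a map re-read.)

```c
/*
 * Sketch only: drive automount over container lifecycle events.
 * "stop"  -> SIGKILL: exit without mount cleanup, so the autofs
 *            mount root survives for reconnect on restart.
 * "start" -> SIGHUP:  ask the reconnected daemon to re-read maps.
 * The pidfile path is a hypothetical choice, not an autofs default.
 */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

#define AUTOMOUNT_PIDFILE "/run/automount.pid"	/* hypothetical */

static pid_t automount_pid(void)
{
	FILE *f = fopen(AUTOMOUNT_PIDFILE, "r");
	long pid = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &pid) != 1)
		pid = -1;
	fclose(f);
	return (pid_t)pid;
}

int main(int argc, char **argv)
{
	pid_t pid = automount_pid();

	if (pid <= 0) {
		fprintf(stderr, "cannot read pid from %s\n",
			AUTOMOUNT_PIDFILE);
		return 1;
	}

	if (argc > 1 && strcmp(argv[1], "stop") == 0)
		return kill(pid, SIGKILL) ? 1 : 0;

	/* Default ("start"): prompt a map re-read. */
	return kill(pid, SIGHUP) ? 1 : 0;
}
```

(In a Kubernetes setting something along these lines would hang off the container's start and stop lifecycle hooks, standing in for the raw SIGKILL workaround described above.)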
While this works nicely for the moment, we don't want to rely on a signal whose handling may change in the future, and I didn't see anything in the options that would explicitly skip mount cleanup at exit. Would you accept a patch that adds a dedicated command-line flag for this?
By all means, send it over. I'm not sure how this will fit in with the configure options for mount handling at exit ... we'll see what we get. ;)

Ian