Hi Ian,

> That sounds unusual, please tell me more about how this works?

Essentially, you have two roles for containers:

* mount provider: mounts a volume in a path; provided by a storage vendor
* consumer: makes use of the mounted volume; user application

There are also two directories on the host:

* /staging: the provider mounts the volume once into /staging/{volume_id}
* /publish: the mounts from /staging/{volume_id} are bind-mounted into
  /publish/{consumer_id}/{volume_id}

The provider container(s) get access to /staging and /publish as shared
bind-mounts, while each consumer gets access only to its own
/publish/{consumer_id} as a slave bind-mount. That is how the pieces are
connected together.

Here's a more concrete example:

* the provider we're developing has the map definitions, runs the
  automount daemon and exposes an indirect map mount in /staging/x
* when a new consumer is scheduled on the node, the provider receives a
  request to expose volume x for container y, so it runs
  `mkdir /publish/y/x && mount --rbind --make-slave /staging/x /publish/y/x`
* the container runtime then makes sure /publish/y/x is exposed to the
  consumer container

All this is of course a bit simplified, but it should be enough to
illustrate the setup (a rough command-level sketch follows below).
automount's reconnect feature is critical here, as the provider may be
restarted (due to a crash, an upgrade, etc.).
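To make that more tangible, here is a rough, hand-wavy sketch of the
host-side commands as I understand the propagation setup. The directory
layout and the publish step come from what I described above; the
volume/consumer IDs, the automount invocation and the map contents are
made up purely for illustration:

  # one-time host setup: turn the two roots into mount points and mark
  # them shared, so mounts made by the provider propagate out of it
  mkdir -p /staging /publish
  mount --bind /staging /staging
  mount --make-rshared /staging
  mount --bind /publish /publish
  mount --make-rshared /publish

  # inside the provider container (which sees /staging and /publish with
  # shared propagation), automount manages an indirect map on /staging/x,
  # e.g. something along the lines of:
  #   auto.master:  /staging/x  /etc/auto.x  --timeout=300
  #   /etc/auto.x:  *  -fstype=nfs  server.example.com:/export/&
  automount -f -v

  # when consumer y is scheduled, the provider publishes volume x for it
  # as a slave recursive bind mount
  mkdir -p /publish/y/x
  mount --rbind --make-slave /staging/x /publish/y/x

  # the container runtime then exposes /publish/y/x (and only that)
  # inside consumer y's mount namespace

The exact flags (shared vs. rshared, where exactly the slave propagation
is applied) may differ in our real deployment; the point is just that the
volume is mounted once under /staging and each consumer only ever sees
its own slave copy under /publish/{consumer_id}.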
If there are any more questions about this, I'm happy to give more
details!

> By all means, send it over.

Cool, I should have something ready in the coming days, thanks!

PS: more details on how this works can be found here (in general,
unrelated to autofs):

* https://github.com/kubernetes/design-proposals-archive/blob/main/storage/container-storage-interface.md
* https://github.com/container-storage-interface/spec/blob/master/spec.md

Cheers,
Robert

On Tue, 7 Nov 2023 at 3:15, Ian Kent <raven@xxxxxxxxxx> wrote:
>
> On 7/11/23 01:05, Robert Vasek wrote:
> > Dear autofs community,
> >
> > We run an instance of the automount daemon inside a container (a part
> > of a storage plugin in Kubernetes). The autofs mount root is shared
> > between different containers, and must survive restarting the daemon.
>
> That sounds unusual, please tell me more about how this works?
>
> My thought was always that there are two ways one would use autofs in
> a container.
>
> One is mapping an indirect mount root (from the init mount namespace)
> as a volume into the container, thereby using the daemon present in the
> init namespace. Yes, this has had an expire problem for a long time but
> that will change (soon I hope).
>
> The second way is to run an instance of the daemon completely
> independently within the container.
>
> But this sounds like a combination of both of these, which is something
> I hadn't considered.
>
> > The problem is that when the daemon exits, it tries to clean up all
> > its mounts -- including the autofs root, so there is nothing to
> > reconnect to. At the moment, we're getting around the issue by sending
> > it a SIGKILL upon the daemon container exit, which skips the mount
> > cleanup, leaving it available for reconnect when the container comes
> > back.
>
> Yes, it does.
>
> My preferred configure setup is to leave mounts in use mounted at
> exit but that's not what you need ...
>
> While the SIGKILL usage won't change I agree it would be better
> to be able to tell automount to just leave everything mounted at
> exit. You might need to send a HUP signal at container start in
> case of map updates but indirect mounts should detect changes
> anyway.
>
> > While this works nicely for the moment, we don't want to rely on some
> > random signal which may be handled differently in the future, and I
> > didn't see anything in the options that would explicitly skip mount
> > clean up at exit. Would you accept a patch that adds a dedicated
> > command line flag for this?
>
> By all means, send it over.
>
> I'm not sure how this will fit in with the configure options for
> mount handling at exit ... we'll see what we get, ;)
>
> Ian