Re: do_mount_autofs_direct: failed to create mount directory ...

On Mon, 2021-04-12 at 12:53 -0700, Mike Marion wrote:
> On Sat, Apr 10, 2021 at 09:46:47AM +0800, Ian Kent wrote:
> 
> > > I'd love to see re-reads parse the map, save the new paths, parse
> > > the removals, then add the new paths after removing/umounting
> > > removed paths.
> > 
> > It sounds simple to do but I think you would be surprised with the
> > sort of difficulties I would encounter.
> 
> Yeah, that's pretty much why I never sent the email I wrote up
> before; I started to realize it was far more complicated, as you say.
> 
> > But, if that were done, what would be the policy if /prj/foo was in
> > use: lazily umount the /prj/foo mounts, ignore all changes at or
> > below /prj/foo until it's no longer in use, or something else?
> 
> Yep, that's one of the issues I thought of too.  It's already an
> issue we have with the current logic as well.  Usually we end up just
> tagging the hosts for a reboot once any compute jobs on them are done;
> it's just easier than fixing them by hand.
> 
> > I would be tempted to lazy umount things at or below /prj/foo;
> > after all, they would be using stale paths and will eventually end
> > up in tears anyway, particularly if processes have open file handles
> > on paths within the mount.
> > 
> > Don't get me wrong, this does sound sensible and is something that
> > needs to be fixed; there are just those cases that cause me pain
> > time and time again that get in the road.
> > 
> > The other problem is I might use features that are as yet
> > unreleased (but in the current source tree), so that would
> > complicate matters.  OTOH I might not need the new features and,
> > other than the in-use policy, it might be straightforward ...
> 
> It'd be great if it could be implemented at some point.

The lazy umount approach is a bit of a trap because, in trivial cases
like testing, it appears to work much better than you would expect.

What worries me is that you could have a process with open files (or
a working directory) on the mount that's gone away: the mount then
gets lazily umounted, the file handles remain, but all new accesses
go to different file systems that (probably) contain different data.
So the process could be processing inconsistent data coming from
different mounts it thinks are the same.

Once any references have been released the mount will then go away
naturally.
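
For illustration only, here's a minimal userspace sketch of that
behaviour (the /prj/foo paths are hypothetical and this isn't
automount code): a lazy detach via umount2(MNT_DETACH) keeps an
already-open descriptor working while a fresh open of the same path
resolves against whatever is mounted there afterwards.  It needs
CAP_SYS_ADMIN to actually perform the umount.

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mount.h>
	#include <unistd.h>

	int main(void)
	{
		const char *path = "/prj/foo/data.txt"; /* hypothetical file on the mount */
		char buf[64];
		ssize_t n;

		int fd = open(path, O_RDONLY);  /* holds a reference into the old mount */
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* Detach the mount lazily; it stays alive while fd is open. */
		if (umount2("/prj/foo", MNT_DETACH) < 0) {
			perror("umount2");
			return 1;
		}

		/* Still reads from the detached (old) file system via the handle. */
		n = read(fd, buf, sizeof(buf) - 1);
		if (n >= 0) {
			buf[n] = '\0';
			printf("via old handle: %s\n", buf);
		}

		/*
		 * A fresh open of the same path now resolves against whatever
		 * is (or is not) mounted there, possibly different data.
		 */
		int fd2 = open(path, O_RDONLY);
		if (fd2 < 0)
			perror("open after MNT_DETACH");
		else
			close(fd2);

		close(fd);  /* last reference dropped; the detached mount can go away */
		return 0;
	}

The kernel keeps the detached mount pinned by the open descriptor,
which is why it appears to work fine until the last reference is
released.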

I don't know if the NFS server will return an error to those existing
file handle accesses once the export is removed ... I suspect not ...

Ian




