Re: "rmdir_path: lstat of <path> failed" issues, despite updated autofs RPM


On Mon, 2015-11-09 at 12:42 -0800, Greg Earle wrote:
> > On Nov 8, 2015, at 3:47 PM, Ian Kent <raven@xxxxxxxxxx> wrote:
> > 
> > > The keys were never in a map.  That is what is so weird.  I went
> > > to our Solaris NIS master and there is no reference to "objects"
> > > or "refs" in /etc/auto_home or /etc/auto_*.  I can't imagine what
> > > could be running that is trying to access those two phantom
> > > "/home" paths.
> > 
> > Yeah, that function is called from a few places.
> > 
> > At map read time, at autofs fs mount time, on a HUP signal or when
> > a lookup makes autofs think the map has been modified, and at
> > expire when attempting to remove a path after failed mount attempts
> > for some fs types, notably NFS.
> 
> I checked and of course it was still doing it:
> 
> Nov  9 12:30:05 mipldiv automount[31419]: rmdir_path: lstat of /home/objects failed
> 
> So I SIGHUP'ed the daemon and immediately got another one:
> 
> Nov  9 12:35:36 mipldiv automount[31419]: rmdir_path: lstat of /home/objects failed

So it sounds like it's occurring in the map entry cache prune function.

The puzzle is why the lstat() is failing, since that directory does
exist according to what we have above.
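
For reference, the pattern behind that message looks something like
the sketch below (a minimal illustration, not the actual autofs
source). The main point is that the lstat() branch fires on any
lstat() failure, not just a missing path, so things like EACCES or
ESTALE would log the same line:

/*
 * Minimal sketch of the sort of code that emits the message, not the
 * actual autofs source.  Note the branch fires on *any* lstat()
 * failure: ENOENT, but also EACCES, ESTALE and so on.
 */
#include <sys/stat.h>
#include <syslog.h>
#include <unistd.h>

int rmdir_path_sketch(const char *path)
{
	struct stat st;

	if (lstat(path, &st) == -1) {
		syslog(LOG_ERR, "rmdir_path: lstat of %s failed", path);
		return -1;
	}

	if (!S_ISDIR(st.st_mode))
		return -1;	/* not a directory, nothing to rmdir */

	return rmdir(path);
}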

There's also the question of why it continues to occur.

I looked briefly at the RHEL code (actually rev 184 was 5.11) for the
prune function, which is driven by what's in the map entry cache, and
it looks like the map entry cache entry is removed just before the
rmdir_path() call. So the message shouldn't keep coming back if there
are no further accesses.

I probably didn't look closely enough though.
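
To illustrate the ordering I mean, here's a rough sketch; the list,
the entry layout and entry_is_stale() are all invented for
illustration, and the real map entry cache is more involved:

/*
 * Hedged sketch of the prune ordering described above; the cache
 * structure and entry_is_stale() are invented for illustration.
 * The entry is unlinked from the cache *before* rmdir_path() is
 * called, so a failed rmdir shouldn't leave anything behind for
 * the next prune pass to trip over again.
 */
#include <stdlib.h>

int rmdir_path_sketch(const char *path);	/* as sketched above */

struct cache_entry {
	char *key;			/* e.g. a /home/<name> path */
	int stale;
	struct cache_entry *next;
};

static int entry_is_stale(struct cache_entry *me)
{
	return me->stale;		/* stand-in for the real check */
}

static void prune_cache_sketch(struct cache_entry **head)
{
	struct cache_entry **pp = head;

	while (*pp) {
		struct cache_entry *me = *pp;

		if (entry_is_stale(me)) {
			*pp = me->next;			/* unlink first... */
			rmdir_path_sketch(me->key);	/* ...then rmdir */
			free(me->key);
			free(me);
			continue;
		}
		pp = &me->next;
	}
}

If the ordering really is as above, the message coming straight back
after a HUP would point at something re-adding the entry (such as the
map re-read) rather than a leftover cache entry.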

> 
> I can't stop & restart the autofs service because it's a production
> server for a flight project.

Even though you should be able to perform a restart without disrupting
autofs users, it's not something I'd recommend on a production machine.

There is a small window during a restart when autofs won't respond to
requests, and any requests that arrive within that window could hang
forever. I'm not sure how to mitigate that either.

But that probably wouldn't be enough anyway.

If there are directories that are possibly broken in some way then, to
clear them up, the autofs managed mount point must be umounted.

But if any mounts are busy at shutdown they are left mounted (and the
autofs mount itself obviously must be left mounted too); autofs then
re-connects to those mounts when it starts again.
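
The busy-mount part is just how umount(2) behaves; roughly (a sketch
of the mechanism, not the daemon's actual shutdown code):

/*
 * Why busy mounts get left behind at shutdown: umount(2) fails with
 * EBUSY while anything still holds the mount, so cleanup code can
 * only skip it and re-connect after restart.  A sketch of the
 * mechanism, not the actual autofs shutdown path.
 */
#include <sys/mount.h>
#include <errno.h>
#include <syslog.h>

static int try_umount(const char *mp)
{
	if (umount(mp) == -1) {
		if (errno == EBUSY) {
			syslog(LOG_INFO, "%s busy, leaving mounted", mp);
			return 1;	/* re-connect to it on restart */
		}
		return -1;		/* some other failure */
	}
	return 0;			/* unmounted cleanly */
}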

That complexity is the reason I usually just recommend scheduling a
reboot. Besides, if there is some sort of brokenness within the
mounted autofs file system, there's no knowing what side effects
occurred when it happened.

Sorry, I'm not really much help with this.
Ian
