Re: Linux NFS and cached properties

On Thu, 26 Jul 2012 18:36:07 -0400 "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
wrote:

> On Tue, Jul 24, 2012 at 05:28:02PM +0000, ZUIDAM, Hans wrote:
> > Hi Bruce,
> > 
> > Thanks for the clarification.
> > 
> > (I'm repeating a lot of my original mail because of the Cc: list.)
> > 
> > > J. Bruce Fields
> > > I think that's right, though I'm curious how you're managing to hit
> > > that case reliably every time.  Or is this an intermittent failure?
> > It's an intermittent failure, but with the procedure shown below it is
> > fairly easy to reproduce.    The actual problem we see in our product
> > is because of the way external storage media are handled in user-land.
> > 
> >         192.168.1.10# mount -t xfs /dev/sdcr/sda1 /mnt
> >         192.168.1.10# exportfs 192.168.1.11:/mnt
> > 
> >         192.168.1.11# mount 192.168.1.10:/mnt /mnt
> >         192.168.1.11# umount /mnt
> > 
> >         192.168.1.10# exportfs -u 192.168.1.11:/mnt
> >         192.168.1.10# umount /mnt
> >         umount: can't umount /media/recdisk: Device or resource busy
> > 
> > What I actually do is the mount/unmount on the client via ssh.  That
> > is a good way to trigger the problem.
> > 
> > We see that during the un-export the NFS caches are not flushed
> > properly which is why the final unmount fails.
> > 
> > In net/sunrpc/cache.c the cache times (last_refresh, expiry_time,
> > flush_time) are measured in seconds.  If I understand the code correctly,
> > an NFS un-export is done by setting flush_time to the current time and
> > then calling cache_flush().  If, in that same second, last_refresh is
> > set to the current time, then the cached item is not flushed.  This
> > subsequently causes the un-mount to fail because there is still a
> > reference to the mount point.
> > 
> > > J. Bruce Fields
> > > I ran across that recently while reviewing the code to fix a related
> > > problem.  I'm not sure what the best fix would be.
> > >
> > > Previously raised here:
> > >
> > >       http://marc.info/?l=linux-nfs&m=133514319408283&w=2
> > 
> > The description in your mail does indeed look the same as the problem
> > that we see.
> > 
> > From reading the code in net/sunrpc/cache.c I get the impression that it is
> > not really possible to reliably flush the caches for an un-export such
> > that, after flushing, they will not accept entries for the un-exported
> > IP/mount-point combination.
> 
> Right.  So, possible ideas, from that previous message:
> 
> 	- As Neil suggests, modify exportfs to wait a second between
> 	  updating etab and flushing the cache.  At that point any
> 	  entries still using the old information are at least a second
> 	  old.  That may be adequate for your case, but if someone out
> 	  there is sensitive to the time required to unexport then that
> 	  will annoy them.  It also leaves the small possibility of
> 	  races where an in-progress rpc may still be using an export at
> 	  the time you try to flush.
> 	- Implement some new interface that you can use to flush the
> 	  cache and that doesn't return until in-progress rpc's
> 	  complete.  Since it waits for rpc's it's not purely a "cache"
> 	  layer interface any more.  So maybe something like
> 	  /proc/fs/nfsd/flush_exports.
> 	- As a workaround requiring no code changes: unexport, then shut
> 	  down the server entirely and restart it.  Clients will see
> 	  that as a reboot recovery event and recover automatically, but
> 	  applications may see delays while that happens.  Kind of a big
> 	  hammer, but if unexporting while other exports are in use is
> 	  rare maybe it would be adequate for your case.

That's a shame...
I had originally intended "rpc.nfsd 0" to simply stop all threads and nothing
else.  Then you would be able to:
   rpc.nfsd 0
   exportfs -f
   umount
   rpc.nfsd 16

and have a nice fast race-free unmount.
But commit e096bbc6488d3e49d476bf986d33752709361277 'fixed' that :-(

I wonder if it can be resurrected ... maybe not worth the effort.


The idea of a new interface to synchronise with all threads has potential and
doesn't need to be at the nfsd level - it could be in sunrpc.  Maybe it could
be built into the current 'flush' interface.
1/ iterate through all non-sleeping threads, setting a flag and increasing a
counter.
2/ when a thread completes its current request, if test_and_clear of the flag
succeeds, it does atomic_dec_and_test on the counter and, on reaching zero,
wakes up some wait_queue_head.
3/ the 'flush'ing thread waits on the wait_queue_head for the counter to
reach 0.

If you don't hate it I could possibly even provide some code.

NeilBrown


