On Tue, Jul 24, 2012 at 05:28:02PM +0000, ZUIDAM, Hans wrote:
> Hi Bruce,
>
> Thanks for the clarification.
>
> (I'm repeating a lot of my original mail because of the Cc: list.)
>
> > J. Bruce Fields
> > I think that's right, though I'm curious how you're managing to hit
> > that case reliably every time.  Or is this an intermittent failure?
> It's an intermittent failure, but with the procedure shown below it is
> fairly easy to reproduce.  The actual problem we see in our product
> is because of the way external storage media are handled in user-land.
>
> 192.168.1.10# mount -t xfs /dev/sdcr/sda1 /mnt
> 192.168.1.10# exportfs 192.168.1.11:/mnt
>
> 192.168.1.11# mount 192.168.1.10:/mnt /mnt
> 192.168.1.11# umount /mnt
>
> 192.168.1.10# exportfs -u 192.168.1.11:/mnt
> 192.168.1.10# umount /mnt
> umount: can't umount /media/recdisk: Device or resource busy
>
> What I actually do is run the mount/unmount on the client via ssh.
> That is a good way to trigger the problem.
>
> We see that during the un-export the NFS caches are not flushed
> properly, which is why the final unmount fails.
>
> In net/sunrpc/cache.c the cache times (last_refresh, expiry_time,
> flush_time) are measured in seconds.  If I understand the code
> correctly, an NFS un-export is done by setting flush_time to the
> current time; cache_flush() is then called.  If last_refresh is set to
> the current time within that same second, the cached item is not
> flushed.  This subsequently causes the unmount to fail because there
> is still a reference to the mount point.
>
> > J. Bruce Fields
> > I ran across that recently while reviewing the code to fix a related
> > problem.  I'm not sure what the best fix would be.
> >
> > Previously raised here:
> >
> > http://marc.info/?l=linux-nfs&m=133514319408283&w=2
> The description in your mail does indeed look the same as the problem
> that we see.
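The second-granularity race described above can be sketched with a tiny
simulation.  This is illustrative shell arithmetic, not the kernel code;
the variable names mirror the fields in net/sunrpc/cache.c, and the
strict greater-than comparison is my reading of the behaviour Hans
reports, not a quote of the source:

```shell
# Simulation of the race, not actual kernel code.  Times are whole
# seconds, and (per the report above) an entry is treated as expired
# only when flush_time is strictly newer than its last_refresh.
flush_time=100      # un-export sets flush_time to "now" (second 100)
last_refresh=100    # an in-flight RPC refreshes the entry in the same second

if [ "$flush_time" -gt "$last_refresh" ]; then
    state=flushed
else
    state=survives   # taken here: the entry keeps its reference, umount fails
fi
echo "cache entry: $state"
```

With one-second resolution there is no way to order "flush" before
"refresh" when both land in the same second, which is exactly the
intermittent failure.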
> From reading the code in net/sunrpc/cache.c I get the impression that
> it is not really possible to reliably flush the caches for an
> un-exportfs such that, after flushing, they will not accept entries for
> the un-exported IP/mount point combination.

Right.  So, possible ideas, from that previous message:

- As Neil suggests, modify exportfs to wait a second between updating
  etab and flushing the cache.  At that point any entries still using
  the old information are at least a second old.  That may be adequate
  for your case, but anyone sensitive to the time required to unexport
  will be annoyed by it.  It also leaves a small window for races where
  an in-progress rpc may still be using an export at the time you try
  to flush.

- Implement some new interface that flushes the cache and does not
  return until in-progress rpc's complete.  Since it waits for rpc's,
  it's not purely a "cache"-layer interface any more.  So maybe
  something like /proc/fs/nfsd/flush_exports.

- As a workaround requiring no code changes: unexport, then shut down
  the server entirely and restart it.  Clients will see that as a
  reboot recovery event and recover automatically, but applications may
  see delays while that happens.  It's a big hammer, but if unexporting
  while other exports are in use is rare, it may be adequate for your
  case.

--b.
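For concreteness, the first idea (wait a second between updating etab
and flushing) might look like the following on the server.  This is a
hedged sketch printed as a dry run: exportfs -u, exportfs -f, and sleep
are standard nfs-utils/shell usage, but the exact sequencing is an
assumption, and as noted it still leaves a small race window:

```shell
# Dry-run sketch of the "wait a second before flushing" approach,
# assumed to run on the server (192.168.1.10) with nfs-utils installed.
# run() only prints each step, so the sketch is safe to execute anywhere.
run() { echo "+ $*"; }

run exportfs -u 192.168.1.11:/mnt   # remove the entry from etab
run sleep 1                         # let a full second elapse past flush_time
run exportfs -f                     # flush the kernel's export caches
run umount /mnt                     # the local unmount should now succeed
```

Dropping the run prefix gives the real command sequence; exportfs -f
rewrites the flush files under /proc/net/rpc/ so the kernel discards
stale export-cache entries.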