On Fri, 24 Jul 2015 15:48:17 -0400 "J. Bruce Fields" <bfields@xxxxxxxxxxxx> wrote:

> On Fri, Jul 24, 2015 at 09:46:57AM +1000, NeilBrown wrote:
> > Does that seem like a reasonable approach from your understanding of
> > the problem?
>
> So something like that could give us a way to prevent asking mountd
> about mounts that it can't see.
>
> Except when things change: it's possible a mount that would pass this
> test at the time we create the request is no longer there by the time
> we get mountd's reply.
>
> You can tell people not to do that. It still bugs me to have the
> possibility of an unanswerable request.

I can see three general ways to cope with the fact that things could
change between creating a request and receiving the reply:
 - lock to prevent the change
 - refcounts to provide a stable reference to the thing of interest
 - detect the change and retry.

These correspond roughly to spinlock, kref, and seqlock (a minimal
seqlock-style reader loop is sketched below).

Trying to prevent changes in the filesystem over an upcall-and-reply is
out of the question.

A refcount could be implemented as a file descriptor: when nfsd finds a
mountpoint at the end of a 'lookup', it creates a file descriptor for
the target object and passes that to mountd; mountd does what it does
and sends the reply back with the same file descriptor. I think that
is needlessly complex.

Detect-change-and-retry is, I think, the best option. The cache
already has a retry mechanism, and it can often detect a change
implicitly if it gets told about some filesystem object that it doesn't
really care about. The only weakness is that it can't currently detect
that its question can no longer be answered (a rough sketch of the
retry shape is also below).

I agree that an unanswerable request seems ugly. But sometimes the
best way to handle races is to let ugly things happen temporarily.

I should probably double-check that the cache will retry the upcall in
a reasonable time frame - I only have a vague recollection of how that
works...

NeilBrown
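
For what it's worth, this is the general shape of the seqlock-style
"detect change and retry" I mean above. It is purely an illustration:
the struct and field names are made up, and only read_seqbegin() /
read_seqretry() (with write_seqlock()/write_sequnlock() on the update
side) are the real kernel seqlock API.

/* Hypothetical example - not nfsd code. */
#include <linux/types.h>
#include <linux/seqlock.h>

struct mnt_state {
	seqlock_t	lock;
	u64		generation;	/* bumped whenever the mount table changes */
};

static u64 mnt_state_read_generation(struct mnt_state *s)
{
	unsigned int seq;
	u64 gen;

	do {
		seq = read_seqbegin(&s->lock);	/* snapshot the sequence counter */
		gen = s->generation;		/* read the protected data */
	} while (read_seqretry(&s->lock, seq));	/* a writer got in: redo the read */

	return gen;
}

Readers never block here; they just repeat the read if the sequence
counter moved, which is the detect-and-retry behaviour we want for the
upcall case.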
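
And a very rough sketch of detect-change-and-retry applied to the
upcall itself, assuming the object we asked about can be fingerprinted
cheaply (a mount id, say). Every helper named here is hypothetical -
this is not the sunrpc cache interface, just the shape of the idea.

/* Hypothetical sketch - none of these helpers exist. */
#include <linux/types.h>

struct mount_query {
	u64	mnt_id;		/* identifies the mount we are asking mountd about */
	/* ... whatever else the upcall carries ... */
};

static int resolve_mount(struct mount_query *q)
{
	for (;;) {
		u64 before = mount_fingerprint(q);	/* hypothetical */
		int err;

		/* hypothetical: send the upcall to mountd and wait for the reply */
		err = upcall_and_wait(q);
		if (err)
			return err;

		if (mount_fingerprint(q) == before)
			return 0;	/* nothing changed while we waited */

		/*
		 * The mount changed under us, so the reply may describe
		 * something that is no longer there - ask again.
		 */
	}
}

A real version would want a retry limit, and it still lacks the "can
this question be answered at all any more" check that is missing today.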