On Fri, 2 Aug 2013 16:42:00 +0200 Miklos Szeredi <miklos@xxxxxxxxxx> wrote:

> On Fri, Aug 2, 2013 at 2:17 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > On Tue, 30 Jul 2013 18:16:55 +0200 Miklos Szeredi <miklos@xxxxxxxxxx> wrote:
> >
> >> The other problem is that, unlike NFS, fuse doesn't currently
> >> reconnect these subtrees when it finds them at a different point in
> >> the tree. d_drop on it just makes things worse because at that point
> >> that subtree will not be accessible anymore (while the fuse fs is
> >> mounted, that is). This could be fixed pretty easily by using the
> >> d_materialise_*() helpers.
> >>
> >
> > Yes, but...
> >
> > This works on NFS since we have an expectation that we can identify an
> > inode again when we see it. Given some of the strange userland
> > filesystems that FUSE supports, does that expectation hold there? If
> > not then you might still end up with disconnected subtrees.
>
> We'll end up with disconnected subtrees in NFS as well. That state
> can remain indefinitely. It will either be reconnected when we come
> across the inode "by chance" or dissolved when it is no longer
> referenced and the dentries reclaimed.
>
> And as long as there are no mounts under the disconnected subtree,
> there's no big problem.
>
> If some strange filesystem doesn't support identifying a disconnected
> subtree it will just not be reconnected. But that can happen with NFS
> as well, if the new location is never accessed, so it's not something
> new.
>

Ok, makes sense. So to summarize... the main issue you had is the case
where a mount races in just before you shrink the subtree?

If so, then I guess the patch you proposed earlier in this thread
should take care of that (even if the error code returned isn't
ideal). With that patch in place, would Anand's patch then be
reasonable, or do you think other changes are needed there to make
that safe?
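
Just so I'm sure I follow the d_materialise_*() suggestion: is
something along these lines what you have in mind? Completely
untested sketch -- the function name is made up and the real lookup
against the userspace filesystem (fuse_lookup_name() and friends) is
elided, it's only meant to show where d_materialise_unique() would
slot in:

#include <linux/fs.h>
#include <linux/dcache.h>
#include <linux/err.h>

/* Untested illustration only, not a real fs/fuse/dir.c change. */
static struct dentry *fuse_lookup_reconnect_sketch(struct dentry *entry)
{
	struct inode *inode;
	struct dentry *alias;

	/*
	 * Stand-in for the real lookup, which may hand back an inode
	 * we've already seen under a different path (i.e. one that
	 * still has dentries, possibly a whole disconnected subtree).
	 */
	inode = NULL;

	/*
	 * Unlike d_splice_alias(), d_materialise_unique() will take an
	 * existing alias of a directory inode -- including a
	 * DCACHE_DISCONNECTED one -- and move it under the new parent,
	 * so a subtree rediscovered at a new location gets reconnected
	 * instead of remaining unreachable.
	 */
	alias = d_materialise_unique(entry, inode);
	if (IS_ERR(alias))
		return alias;	/* e.g. the move would create a loop */

	return alias;	/* NULL means 'entry' itself was instantiated */
}

--
Jeff Layton <jlayton@xxxxxxxxxx>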