On Thu, Nov 29, 2012 at 05:54:02PM -0800, Patrick McLean wrote:
> > Very interesting.  Do you have anything mounted on the corresponding
> > directories on the server?  The picture looks like you are getting empty
> > fhandles in readdir+ responses for exactly the same directories that
> > happen to be mountpoints on the client.  In any case, we shouldn't do
> > that blind d_drop() - empty fhandles can happen.  The only remaining
> > question is why they happen on that set of entries.  From my reading of
> > encode_entryplus_baggage() it looks like we have compose_entry_fh()
> > failing for those entries and those entries alone.  One possible cause
> > would be d_mountpoint(dchild) being true on the server.  If it is true,
> > we can declare the case closed; if not, I really wonder what's going on.
>
> Those directories do have the server's own copies of the said directories
> bind mounted at the moment in a separate mount namespace.
>
> Unmounting those directories on the server does appear to stop the
> WARN_ON from triggering.

OK, that settles it.  The WARN_ON() and printks in the area can be dropped;
the right fix is below.  However, there is a similar place in cifs that
also needs to be dealt with, and I really, really wonder why the hell we
do d_drop() in nfs_lookup_revalidate().  It's not relevant in this bug,
but I would like to understand what's wrong with simply returning 0 from
->d_revalidate() and letting the caller (in fs/namei.c) take care of
unhashing, etc. itself.  That would make have_submounts() in there
pointless as well - we could just return 0 and let d_invalidate() take
care of the checks...  Trond?

diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -450,7 +450,8 @@ void nfs_prime_dcache(struct dentry *parent, struct nfs_entry *entry)
 			nfs_refresh_inode(dentry->d_inode, entry->fattr);
 			goto out;
 		} else {
-			d_drop(dentry);
+			if (d_invalidate(dentry) != 0)
+				goto out;
 			dput(dentry);
 		}
 	}
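
For readers reconstructing the failure from the description above, here is a
rough sketch of the server-side behaviour being discussed.  It is a paraphrase
of the effect of encode_entryplus_baggage() calling compose_entry_fh(), not
the literal fs/nfsd source; the helper name is made up for illustration:

	/*
	 * Paraphrase: when composing a file handle for a child fails - for
	 * instance because the child is a mountpoint on the server - the
	 * READDIRPLUS entry is still sent, just with no file handle attached.
	 * That is the "empty fhandle" the client then trips over in
	 * nfs_prime_dcache().
	 */
	static bool entry_gets_fhandle(struct dentry *dchild)	/* hypothetical */
	{
		if (d_mountpoint(dchild))	/* bind mount on the server side */
			return false;		/* encode entry without an fh */
		return true;			/* otherwise encode fh + attrs */
	}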
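
As for why the fix checks the return value instead of unhashing
unconditionally - a short note on the difference the patch relies on,
assuming the d_invalidate() semantics of kernels of this era (it still
returns an int at this point):

	/*
	 * d_drop() unhashes the dentry unconditionally; d_invalidate()
	 * refuses and returns nonzero when the dentry is still in use as a
	 * directory or is a mountpoint on the client.  So an empty fhandle
	 * in a readdir+ reply can no longer yank a dentry out from under a
	 * local mount; we simply skip priming the dcache for that entry.
	 */
	if (d_invalidate(dentry) != 0)		/* busy dir or mountpoint */
		goto out;			/* leave the dentry alone */
	dput(dentry);				/* safe to drop our reference */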
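
And to make the question to Trond concrete, a minimal sketch (not a patch) of
the alternative being suggested for nfs_lookup_revalidate(); it assumes, as
the lookup code of this era does, that a 0 return from ->d_revalidate() makes
the caller in fs/namei.c run d_invalidate() on the dentry itself:

	/*
	 * Sketch of the "just return 0" variant: in the failure path of
	 * nfs_lookup_revalidate(), skip the have_submounts()/d_drop() dance
	 * and simply report the dentry as no longer valid.
	 */
	return 0;	/* the caller then calls d_invalidate(), which already
			 * refuses to unhash busy directories and mountpoints */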