> -	hlist_add_fake(&inode->i_hash);
> +	hlist_nulls_add_fake(&inode->i_hash);

Please add a preparatory inode_fake_hash/inode_mark_hashed or similar
helper to isolate filesystems from the implementation details of the
hash list.

> +	/*
> +	 * reset the inode number so during RCU traversals we do not match this
> +	 * inode in any lookups until it is fully re-initialised again during
> +	 * allocation.
> +	 */
> +	inode->i_ino = 0;

There is no hard rule that i_ino 0 is an invalid inode number.  It can
happen quite easily for inodes using the generic last_ino allocator,
and I would not be surprised if there are filesystems using it as part
of the on-disk layout either.

> +			rcu_read_unlock();
> +			if (locked)
> +				spin_unlock(&inode_hash_lock);
> 			__wait_on_freeing_inode(inode);
> +			if (locked)
> +				spin_lock(&inode_hash_lock);

I can't say I like the locked argument, but I don't see an easy way
around it.  Can you at least keep the unlocking/relocking inside
__wait_on_freeing_inode so that it's centralized in a single place for
both find_inode paths?  While at it, moving __wait_on_freeing_inode
above ifind would make changes in this area a lot easier to read, so
maybe you can throw in a patch for that, too?

>  static struct inode *ifind(struct super_block *sb,
> -		struct hlist_head *head, int (*test)(struct inode *, void *),
> +		struct hlist_nulls_head *head, int chain,
> +		int (*test)(struct inode *, void *),
>  		void *data, const int wait)
>  {
>  	struct inode *inode;
> 
> -	spin_lock(&inode_hash_lock);
> -	inode = find_inode(sb, head, test, data);
> +	inode = find_inode(sb, head, chain, test, data, false);
>  	if (inode) {
> -		spin_unlock(&inode_hash_lock);
>  		if (likely(wait))
>  			wait_on_inode(inode);
>  		return inode;
>  	}
> -	spin_unlock(&inode_hash_lock);
>  	return NULL;
>  }

This is turning into a rather pointless helper.  I'd suggest just
killing ifind/ifind_fast and open-coding them in the callers, possibly
as a preparatory patch.
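
To illustrate the helper suggestion above, here is a minimal sketch of
what I have in mind - untested, with inode_fake_hash as a placeholder
name and the hlist_nulls conversion from this patch assumed:

	/*
	 * Mark the inode as hashed without actually inserting it into
	 * the inode hash table, so that filesystems never have to know
	 * about the hlist_nulls implementation of i_hash.
	 */
	static inline void inode_fake_hash(struct inode *inode)
	{
		hlist_nulls_add_fake(&inode->i_hash);
	}

Filesystems would then call inode_fake_hash(inode) instead of poking
at inode->i_hash directly.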
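
For the __wait_on_freeing_inode centralization, something like the
sketch below is what I'm asking for - completely untested, assuming
the caller still enters with inode->i_lock held as in mainline, and
that it restarts its RCU hash walk after waking up:

	static void __wait_on_freeing_inode(struct inode *inode, bool locked)
	{
		wait_queue_head_t *wq;
		DEFINE_WAIT_BIT(wait, &inode->i_state, __I_NEW);

		wq = bit_waitqueue(&inode->i_state, __I_NEW);
		prepare_to_wait(wq, &wait.wait, TASK_UNINTERRUPTIBLE);
		/* drop everything the caller holds before sleeping */
		spin_unlock(&inode->i_lock);
		rcu_read_unlock();
		if (locked)
			spin_unlock(&inode_hash_lock);
		schedule();
		finish_wait(wq, &wait.wait);
		if (locked)
			spin_lock(&inode_hash_lock);
		/* re-enter the RCU read side so the caller can restart */
		rcu_read_lock();
	}

Both find_inode paths then just do __wait_on_freeing_inode(inode,
locked) and goto repeat, with no lock juggling at the call sites.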
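
And to make the last point concrete, killing ifind would leave the
callers looking roughly like this - using ilookup5 as the example,
taking the find_inode signature from this patch, and assuming
find_inode handles the RCU read lock internally, which is what the
hunks above suggest; ilookup and ilookup5_nowait would follow the same
pattern minus the wait:

	struct inode *ilookup5(struct super_block *sb, unsigned long hashval,
			int (*test)(struct inode *, void *), void *data)
	{
		int chain = hash(sb, hashval);
		struct hlist_nulls_head *head = inode_hashtable + chain;
		struct inode *inode;

		/* lockless lookup, no inode_hash_lock taken here */
		inode = find_inode(sb, head, chain, test, data, false);
		if (inode)
			wait_on_inode(inode);
		return inode;
	}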