On Fri, Jul 29, 2011 at 04:59:51PM +1000, Dave Chinner wrote:
> On Fri, Jul 29, 2011 at 09:59:18AM +0400, Cyrill Gorcunov wrote:
> > On Fri, Jul 29, 2011 at 01:25:03PM +1000, Dave Chinner wrote:
> > ...
> > >
> > > The VFS shrinker code is now already called on a per-sb basis. Each
> > > sb has its own shrinker context that deals with dentries, inodes
> > > and anything a filesystem wants to have shrunk in the call. That
> > > solves the original issue I had with your "limit the dentry cache
> > > size" patch series, in that it didn't shrink or limit the other VFS
> > > caches that were the ones that were really consuming all your
> > > memory...
> >
> > Thanks for the comments, Dave! Still, the read-only lock without
> > increasing the sequence number might be useful, no? (patch 1)
>
> I'll defer to Al on that one - the intricacies of the rename locking
> are way over my head.

I'm not sure that's safe. Note that one use of rename_lock is that we
allow hash lookup to race with d_move(), which can move an object from
one hash chain to another. A hash lookup may therefore end up jumping
from one chain to another and getting a false negative. That's why
__d_lookup() is not safe without a read_seqretry loop (or taking the
seqlock for write, of course).

But look at what happens if we go to per-sb locks: a d_move() derailing
the hash lookup might happen on *any* filesystem, since they all share
the same hash table. So just checking that we hadn't done any renames
on our own filesystem is not enough to make sure we hadn't hit a false
negative.

Unless we go for making the hash table itself per-superblock (and I
really doubt that's a good idea), I don't see any obvious way to avoid
that kind of race. IOW, how would you implement a safe d_lookup()?
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html