On Sat, Apr 25, 2009 at 04:01:43AM -0400, Christoph Hellwig wrote:
> On Sat, Apr 25, 2009 at 05:18:29AM +0100, Al Viro wrote:
> > However, the files_lock part 2 looks very dubious - if nothing else, I
> > would expect that you'll get *more* cross-CPU traffic that way, since
> > the CPU where the final fput() runs will correlate only weakly (if at
> > all) with the one where open() had been done.  So you are getting more
> > cachelines bouncing.  I want to see the numbers for this one, and on
> > different kinds of loads, but as it is I'm very sceptical.  BTW, could
> > you try to collect stats along the lines of "CPU #i has done N_{i,j}
> > removals from the sb list for files that had been in list #j"?
> >
> > Splitting files_lock on a per-sb basis might be an interesting
> > variant, too.
>
> We should just kill files_lock and s_files completely.  The remaining
> users are the may-remount-r/o checks, and with counters in place not
> only on the vfsmount but also on the superblock we can kill
> fs_may_remount_ro in its current form.  The only interesting bit left
> after that is mark_files_ro, which is so buggy that I'd prefer to kill
> it, including the underlying functionality.

Maybe...  What Eric proposed is essentially a reuse of s_list for a
per-inode list of struct file.  Presumably with something like i_lock
for protection.  So that's not a conflict.
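Something along these lines, I imagine.  Not Eric's actual patch, just a
sketch of how I read the idea: the i_files / f_inode_link field names
are made up for illustration, and inode->i_lock is assumed to be the
protection.

	#include <linux/fs.h>
	#include <linux/list.h>
	#include <linux/spinlock.h>

	/*
	 * Hypothetical additions, for illustration only:
	 *	struct inode { ...; struct hlist_head i_files; ... };
	 *	struct file  { ...; struct hlist_node f_inode_link; ... };
	 */

	/* hang the file off the inode instead of sb->s_files */
	static void file_attach_inode(struct file *file, struct inode *inode)
	{
		spin_lock(&inode->i_lock);
		hlist_add_head(&file->f_inode_link, &inode->i_files);
		spin_unlock(&inode->i_lock);
	}

	/* removal on the final fput() path; no global lock taken */
	static void file_detach_inode(struct file *file, struct inode *inode)
	{
		spin_lock(&inode->i_lock);
		hlist_del_init(&file->f_inode_link);
		spin_unlock(&inode->i_lock);
	}

If it ends up looking like that, the final fput() only takes a lock on
an inode it is touching anyway, which I'd expect to keep the cacheline
traffic rather more local than the global files_lock does.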