On Sat, 9 Jun 2012 02:31:27 +0300 "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx> wrote:

> On Fri, Jun 08, 2012 at 03:31:20PM -0700, Andrew Morton wrote:
> > On Fri, 8 Jun 2012 23:27:34 +0100
> > Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > > On Fri, Jun 08, 2012 at 03:25:50PM -0700, Andrew Morton wrote:
> > >
> > > > A neater implementation might be to add a kmem_cache* argument to
> > > > unregister_filesystem().  If that is non-NULL, unregister_filesystem()
> > > > does the rcu_barrier() and destroys the cache.  That way we get to
> > > > delete (rather than add) a bunch of code from all filesystems, and new
> > > > and out-of-tree filesystems cannot forget to perform the rcu_barrier().
> > >
> > > There's often enough more than one cache, so that one is a no-go.
> >
> > kmem_cache** ;)
> >
> > Which filesystems have multiple inode caches?
>
> Multiple inode caches?  No.
> Multiple caches with call_rcu() free?  See btrfs or gfs2.

OK.  But for those non-inode caches, the rcu treatment is private to the
filesystem, so it is appropriate that the filesystem call rcu_barrier()
for them.  In the case of the inode caches, however, the rcu treatment
is a vfs thing, so it is the vfs which should perform the rcu_barrier().

Those non-inode caches are a red herring - they have nothing to do with
the issue we're discussing.

So how about open-coding the rcu_barrier() in btrfs and gfs2 for the
non-inode caches (which is the appropriate place), and handing the inode
cache over to the vfs for treatment (which is also the appropriate
place)?

The downside is that btrfs and gfs2 will do an extra rcu_barrier() at
umount time.  Shrug.  If they really want to super-optimise that, they
can skip the private rcu_barrier() call and assume that the vfs will be
doing it.  Not a good idea, IMO.
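
To make that division of labour concrete, here's a minimal sketch of
what the proposed interface could look like.  This is not the existing
kernel API: the two-argument unregister helper, its name, and all the
myfs_* identifiers are hypothetical; only rcu_barrier(),
kmem_cache_destroy() and the one-argument unregister_filesystem() are
real.

	#include <linux/fs.h>
	#include <linux/module.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	/*
	 * Hypothetical two-argument form of unregister_filesystem().
	 * If a kmem_cache is passed, the vfs takes responsibility for
	 * the rcu_barrier()/kmem_cache_destroy() pair, so individual
	 * filesystems can no longer forget the barrier.
	 */
	int unregister_filesystem_cache(struct file_system_type *fs,
					struct kmem_cache *inode_cache)
	{
		int err = unregister_filesystem(fs);	/* the real, one-arg API */

		if (inode_cache) {
			/*
			 * Inodes are freed via call_rcu(); wait for all
			 * in-flight callbacks before destroying the cache.
			 */
			rcu_barrier();
			kmem_cache_destroy(inode_cache);
		}
		return err;
	}

	/* A filesystem's exit path under this scheme (names made up): */
	static struct file_system_type myfs_fs_type;
	static struct kmem_cache *myfs_inode_cachep;
	static struct kmem_cache *myfs_private_cachep;

	static void __exit myfs_exit(void)
	{
		/*
		 * Private, non-inode cache whose objects are also freed
		 * via call_rcu() (the btrfs/gfs2 case): the filesystem
		 * open-codes its own barrier, as suggested above.
		 */
		rcu_barrier();
		kmem_cache_destroy(myfs_private_cachep);

		/* Inode cache: handed to the vfs for treatment. */
		unregister_filesystem_cache(&myfs_fs_type, myfs_inode_cachep);
	}
	module_exit(myfs_exit);

The NULL check keeps the helper usable by filesystems that have no
inode cache of their own, so conversion could be done piecemeal.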