Re: [PATCH 17/17] RCU'd vfsmounts

On Thu, Oct 03, 2013 at 01:19:16PM -0700, Linus Torvalds wrote:

> Hmm. The CPU2 mntput can only happen under RCU readlock, right? After
> the RCU grace period _and_ if the umount is going ahead, nothing
> should have a mnt pointer, right?

umount -l doesn't care.

> So I'm wondering if you couldn't just have a synchronize_rcu() in that
> umount path, after clearing mnt_ns. At that point you _know_ you're
> the only one that should have access to the mnt.

We have it there.  See namespace_unlock().  And you are right about the
locking rules for umount_tree(), except that the caller is responsible
for dropping those locks, with the (potentially final) mntput() happening
after both are dropped (as part of namespace_unlock(), done after
synchronize_rcu()).
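To make the ordering concrete, a rough sketch of the teardown sequence
described above; detach_from_namespace() is a made-up placeholder for the
work umount_tree() does under the locks, not the actual fs/namespace.c code:

/*
 * Hedged sketch, assuming the caller enters with namespace_sem and the
 * mount hash lock held, as described above.
 */
static void umount_teardown_sketch(struct mount *mnt)
{
	detach_from_namespace(mnt);	/* hypothetical: clear ->mnt_ns, unhash */

	/* caller drops both locks here */

	synchronize_rcu();		/* wait out current rcu_read_lock() sections */
	mntput(&mnt->mnt);		/* potentially the final reference drop */
}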

The problem is this:
	initially A = 1, B = 1

	CPU1:
		A = 0
		<full barrier>
		synchronize_rcu()
		read B

	CPU2:
		rcu_read_lock()
		B = 0
		read A
Are we guaranteed that we won't get both of them seeing ones, in a situation
where that rcu_read_lock() comes too late to be noticed by synchronize_rcu()?
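For what it's worth, the same question written out as a store-buffering-style
litmus test; A and B stand in for the actual state involved, so this is a
minimal sketch of the scenario above, not kernel code:

int A = 1, B = 1;
int r1, r2;

void cpu1(void)			/* umount side */
{
	WRITE_ONCE(A, 0);
	smp_mb();		/* the "full barrier" above */
	synchronize_rcu();
	r1 = READ_ONCE(B);
}

void cpu2(void)			/* lockless side, inside an RCU read-side section */
{
	rcu_read_lock();
	WRITE_ONCE(B, 0);
	r2 = READ_ONCE(A);
	rcu_read_unlock();
}

/* The outcome being asked about: can this end with r1 == 1 && r2 == 1? */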