On Tue, Oct 23, 2012 at 06:36:15PM +0100, Nix wrote:
> On 23 Oct 2012, nix@xxxxxxxxxxxxx uttered the following:
>
> > On 23 Oct 2012, Trond Myklebust spake thusly:
> >> On Tue, 2012-10-23 at 12:46 -0400, J. Bruce Fields wrote:
> >>> Looks like there's some confusion about whether nsm_client_get() returns
> >>> NULL or an error?
> >>
> >> nsm_client_get() looks extremely racy in the case where ln->nsm_users ==
> >> 0. Since we never recheck the value of ln->nsm_users after taking
> >> nsm_create_mutex, what is stopping 2 different threads from both setting
> >> ln->nsm_clnt and re-initialising ln->nsm_users?
> >
> > Yep. At the worst possible time:
> >
> >         spin_lock(&ln->nsm_clnt_lock);
> >         if (ln->nsm_users) {
> >                 if (--ln->nsm_users)
> >                         ln->nsm_clnt = NULL;
> > (1)             shutdown = !ln->nsm_users;
> >         }
> >         spin_unlock(&ln->nsm_clnt_lock);
> >
> > If a thread reinitializes nsm_users at point (1), after the assignment,
> > we could well end up with ln->nsm_clnt NULL and shutdown false. A bit
> > later, nsm_mon_unmon gets called with a NULL clnt, and boom.
>
> Possible fix if so, utterly untested so far (will test when I can face
> yet another reboot and fs-corruption-recovery-hell cycle, in a few
> hours), may ruin performance, violate locking hierarchies, and consume
> kittens:

Right, mutexes can't be taken while holding spinlocks.

Keep the kittens well away from the computer.

--b.

>
> diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
> index e4fb3ba..da91cdf 100644
> --- a/fs/lockd/mon.c
> +++ b/fs/lockd/mon.c
> @@ -98,7 +98,6 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
>  		spin_unlock(&ln->nsm_clnt_lock);
>  		goto out;
>  	}
> -	spin_unlock(&ln->nsm_clnt_lock);
>
>  	mutex_lock(&nsm_create_mutex);
>  	clnt = nsm_create(net);
> @@ -108,6 +107,7 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
>  		ln->nsm_users = 1;
>  	}
>  	mutex_unlock(&nsm_create_mutex);
> +	spin_unlock(&ln->nsm_clnt_lock);
>  out:
>  	return clnt;
>  }
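
For illustration only (this is not the fix that eventually went in, and it
only reuses the names already quoted from fs/lockd/mon.c: lockd_net,
nsm_clnt_lock, nsm_users, nsm_clnt, nsm_create), one common way to avoid
both problems is to do the sleeping nsm_create() with no locks held at all,
then retake the spinlock, recheck nsm_users, and throw the spare client away
if another thread got there first:

/*
 * Sketch only: never sleep while holding nsm_clnt_lock, and keep every
 * read and write of nsm_users/nsm_clnt under that spinlock.
 */
static struct rpc_clnt *nsm_client_get(struct net *net)
{
	struct lockd_net *ln = net_generic(net, lockd_net_id);
	struct rpc_clnt *clnt, *new;

	/* Fast path: client already exists, just bump the refcount. */
	spin_lock(&ln->nsm_clnt_lock);
	if (ln->nsm_users) {
		ln->nsm_users++;
		clnt = ln->nsm_clnt;
		spin_unlock(&ln->nsm_clnt_lock);
		return clnt;
	}
	spin_unlock(&ln->nsm_clnt_lock);

	/* Slow path: nsm_create() may sleep, so no spinlock held here. */
	new = nsm_create(net);
	if (IS_ERR(new))
		return new;

	spin_lock(&ln->nsm_clnt_lock);
	if (ln->nsm_users) {
		/* Another thread set up the client while we slept. */
		ln->nsm_users++;
		clnt = ln->nsm_clnt;
		spin_unlock(&ln->nsm_clnt_lock);
		rpc_shutdown_client(new);	/* drop the unneeded spare */
		return clnt;
	}
	ln->nsm_clnt = new;
	ln->nsm_users = 1;
	spin_unlock(&ln->nsm_clnt_lock);
	return new;
}

With that shape, neither the double-initialisation race nor the
mutex-under-spinlock problem can occur; the cost is that two racing threads
may both call nsm_create(), with the loser discarding its client.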