On Dec. 17, 2009, 0:33 +0200, Peter Staubach <staubach@xxxxxxxxxx> wrote:
> Benny Halevy wrote:
>> Signed-off-by: Benny Halevy <bhalevy@xxxxxxxxxxx>
>> ---
>>  fs/nfsd/state.h |    8 ++++++++
>>  1 files changed, 8 insertions(+), 0 deletions(-)
>>
>> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
>> index fefeae2..7e67eca 100644
>> --- a/fs/nfsd/state.h
>> +++ b/fs/nfsd/state.h
>> @@ -186,6 +186,13 @@ struct nfsd4_sessionid {
>>
>>  #define HEXDIR_LEN 33 /* hex version of 16 byte md5 of cl_name plus '\0' */
>>
>> +/* nfs4_client states */
>> +enum nfs4_client_state {
>> +	CL_STATE_NORMAL,
>> +	CL_STATE_RENEW,
>> +	CL_STATE_EXPIRED,
>> +};
>> +
>>  /*
>>   * struct nfs4_client - one per client.  Clientids live here.
>>   *	o Each nfs4_client is hashed by clientid.
>> @@ -214,6 +221,7 @@ struct nfs4_client {
>>  	nfs4_verifier		cl_confirm;	/* generated by server */
>>  	struct nfs4_cb_conn	cl_cb_conn;	/* callback info */
>>  	atomic_t		cl_count;	/* ref count */
>> +	atomic_t		cl_state;	/* renew / expiry state */
>
> The use of an atomic here seems complex and makes the
> implementation seem fragile to me.  Is the atomic_cmpxchg()
> really going to save much time over just using a spinlock
> and normal loads and stores to retrieve and set the value
> of cl_state?  A spinlock would make it much more obvious
> how things were supposed to work, to me, at least.
>

This can be done, of course, though I'd still consider defining the
equivalent of cmpxchg as a static inline helper, since it does exactly
what I want it to do.

As for efficiency, the cmpxchg is at least as efficient as an
uncontended fine-grained lock, with no chance of spinning. An atomic_t
also has a smaller memory footprint than a fine-grained lock (one per
nfs4_client). Using a coarse-grained lock is possible, but it would
needlessly increase the chances of contention.

Benny

> Thanx...
>
>     ps
>
>>  	u32			cl_firststate;	/* recovery dir creation */
>>
>>  	/* for nfs41 */
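
For readers following the discussion: a minimal sketch of the kind of
static inline helper Benny describes, assuming the cl_state field and
CL_STATE_* values from the patch above. The helper name
cl_state_transition is hypothetical, not from the posted patch:

	/*
	 * Hypothetical sketch, not part of the posted patch: wrap
	 * atomic_cmpxchg() so each state change is one atomic
	 * transition.  Returns nonzero iff the client actually moved
	 * from @oldstate to @newstate, so concurrent renew/expire
	 * paths resolve to exactly one winner without taking a lock.
	 */
	static inline int
	cl_state_transition(struct nfs4_client *clp, int oldstate,
			    int newstate)
	{
		return atomic_cmpxchg(&clp->cl_state, oldstate,
				      newstate) == oldstate;
	}

	/* e.g., expire the client only if no renew is in flight: */
	if (cl_state_transition(clp, CL_STATE_NORMAL, CL_STATE_EXPIRED))
		expire_client(clp);

The helper carries the same cost as an uncontended lock acquisition (a
single atomic read-modify-write), which is the efficiency point made in
the reply above.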