On 07/13/2013 12:58 PM, Masami Hiramatsu wrote:
> Hi,
>
> (2013/07/09 10:09), Waiman Long wrote:
>> +/**
>> + * lockref_put_or_lock - decrements count unless count <= 1 before decrement
>> + * @lockcnt: pointer to lockref structure
>> + * Return: 1 if count updated successfully or 0 if count <= 1 and lock taken
>> + *
>> + * The only difference between lockref_put_or_lock and lockref_put is that
>> + * the former function will hold the lock on return while the latter one
>> + * will free it on return.
>> + */
>> +static __always_inline int lockref_put_or_locked(struct lockref *lockcnt)
>
> Here is a function name typo. _locked should be _lock.
> And also, I think we should add a note here telling that this function does
> *not* guarantee lockcnt->refcnt == 0 or 1 until unlocked if this returns 0.

Thanks for pointing this out. I will fix the typo and add an additional note
to the comments.

>> +{
>> +	spin_lock(&lockcnt->lock);
>> +	if (likely(lockcnt->refcnt > 1)) {
>> +		lockcnt->refcnt--;
>> +		spin_unlock(&lockcnt->lock);
>> +		return 1;
>> +	}
>> +	return 0;
>> +}
>
> Using this implementation guarantees lockcnt->refcnt == 0 or 1 until
> unlocked if this returns 0.
>
> However, the one below does not look like it guarantees that. Since
> lockref_add_unless and the spinlock are not done atomically, there is a
> chance for someone to increment it right before locking.
>
> Or did I miss something?

For both functions, the reference count won't be decremented to 0; the caller
has to handle that case by taking the lock and doing whatever it needs to do.
When refcnt > 1, the decrement is done atomically, either by cmpxchg or with
the spinlock held (see the first sketch appended below). The reason for
having these 2 functions is to save an extra lock/unlock sequence when this
feature is disabled. I will add comments to clarify that.

>> +/**
>> + * lockref_put_or_lock - Decrements count unless the count is <= 1
>> + *			  otherwise, the lock will be taken
>> + * @lockcnt: pointer to struct lockref structure
>> + * Return: 1 if count updated successfully or 0 if count <= 1 and lock taken
>> + */
>> +int
>> +lockref_put_or_lock(struct lockref *lockcnt)
>> +{
>> +	if (lockref_add_unless(lockcnt, -1, 1))
>> +		return 1;
>> +	spin_lock(&lockcnt->lock);
>> +	return 0;
>> +}
>
> BTW, it looks like your dcache patch knows this and keeps a double check
> for the case of lockcnt->refcnt > 1 in dput().

There is a slight chance that the refcnt may be changed after the lockless
decrement fails but before the lock is acquired. So it is prudent to double
check before decrementing it to zero (see the second sketch appended below).

Regards,
Longman
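
For illustration, here is a minimal sketch of what the cmpxchg fast path of
lockref_add_unless() might look like. This is a hypothetical simplification,
not the code from the actual patch: it operates on refcnt alone and ignores
the packing of the lock and the count that a real lockless implementation has
to handle. It shows why the count can still change between a failed lockless
attempt and a subsequent spin_lock(), which is the race discussed above:

#include <linux/spinlock.h>
#include <linux/atomic.h>

/* Assumed layout of the structure under discussion. */
struct lockref {
	spinlock_t	lock;
	int		refcnt;
};

/*
 * Hypothetical sketch: add @value to @lockcnt->refcnt unless the
 * current count is <= @threshold.  Returns 1 if the count was
 * updated locklessly, 0 if the caller must fall back to the lock.
 */
static inline int lockref_add_unless(struct lockref *lockcnt, int value,
				     int threshold)
{
	int old = lockcnt->refcnt;

	while (old > threshold) {
		int prev = cmpxchg(&lockcnt->refcnt, old, old + value);

		if (prev == old)
			return 1;	/* updated without taking the lock */
		old = prev;		/* lost a race, retry with fresh value */
	}
	return 0;			/* count <= threshold */
}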
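
And a sketch of the caller-side double-check pattern described for dput().
The function name put_ref() and the teardown step are hypothetical; the point
is that when lockref_put_or_lock() returns 0, the lock is held, but another
CPU may have taken a reference in the window before the lock was acquired, so
refcnt must be re-checked under the lock:

void put_ref(struct lockref *lockcnt)
{
	if (lockref_put_or_lock(lockcnt))
		return;			/* fast path: count was > 1 */

	/*
	 * Slow path: the spinlock is now held, but the count may have
	 * been incremented after the lockless attempt failed, so check
	 * it again before dropping it to zero.
	 */
	if (lockcnt->refcnt > 1) {
		lockcnt->refcnt--;
		spin_unlock(&lockcnt->lock);
		return;
	}

	lockcnt->refcnt = 0;
	/* ... tear down the object while still holding the lock ... */
	spin_unlock(&lockcnt->lock);
}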