And below is a patch that allows speeding up the fast path a bit.  The
code in tcm_qla2xxx_put_session() would turn into this:

	local_irq_save(flags);
	if (kref_put_and_lock(&se_sess->sess_kref, &ha->hardware_lock,
			target_release_session))
		spin_unlock(&ha->hardware_lock);
	local_irq_restore(flags);

I don't propose we do this yet; correct code is more important than fast
code at this point.  But we should be able to reclaim most of the
performance impact - if it is actually measurable.

Jörn

--
The grand essentials of happiness are: something to do, something to
love, and something to hope for.
-- Allan K. Chalmers

[PATCH] kref: add kref_put_and_lock()

Similar to kref_put(), but will only take a lock (and a performance hit)
if the refcount drops to zero.

Signed-off-by: Joern Engel <joern@xxxxxxxxx>
---
 include/linux/kref.h |  3 +++
 lib/kref.c           | 23 +++++++++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index d4a62ab..9581493 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -16,6 +16,7 @@
 #define _KREF_H_
 
 #include <linux/types.h>
+#include <linux/spinlock.h>
 
 struct kref {
 	atomic_t refcount;
@@ -24,6 +25,8 @@ struct kref {
 void kref_init(struct kref *kref);
 void kref_get(struct kref *kref);
 int kref_put(struct kref *kref, void (*release) (struct kref *kref));
+int kref_put_and_lock(struct kref *kref, spinlock_t *lock,
+		void (*release)(struct kref *kref));
 int kref_sub(struct kref *kref, unsigned int count,
 	     void (*release) (struct kref *kref));
diff --git a/lib/kref.c b/lib/kref.c
index 3efb882..07a82f3 100644
--- a/lib/kref.c
+++ b/lib/kref.c
@@ -62,6 +62,29 @@ int kref_put(struct kref *kref, void (*release)(struct kref *kref))
 	return 0;
 }
 
+/**
+ * kref_put_and_lock - decrement refcount for object with locking
+ * @kref: object
+ * @lock: the lock to acquire when dropping the last refcount
+ * @release: pointer to the function that will clean up the object when the
+ *	     last reference to the object is released.
+ *
+ * Same as kref_put(), except that it will acquire the lock before dropping the
+ * last refcount.  If the last refcount is dropped, the lock will be held on
+ * return and the return value will be 1.
+ */
+int kref_put_and_lock(struct kref *kref, spinlock_t *lock,
+		void (*release)(struct kref *kref))
+{
+	WARN_ON(release == NULL);
+	WARN_ON(release == (void (*)(struct kref *))kfree);
+
+	if (atomic_dec_and_lock(&kref->refcount, lock)) {
+		release(kref);
+		return 1;
+	}
+	return 0;
+}
 
 /**
  * kref_sub - subtract a number of refcounts for object.
--
1.7.10
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html