Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path

----- On Mar 16, 2022, at 6:45 PM, Namhyung Kim namhyung@xxxxxxxxxx wrote:

> Adding the lock contention tracepoints in various lock function slow
> paths.  Note that each arch can define spinlock differently; I have
> only added it to the generic qspinlock for now.
> 
> Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>
> ---
> kernel/locking/mutex.c        |  3 +++
> kernel/locking/percpu-rwsem.c |  3 +++
> kernel/locking/qrwlock.c      |  9 +++++++++
> kernel/locking/qspinlock.c    |  5 +++++
> kernel/locking/rtmutex.c      | 11 +++++++++++
> kernel/locking/rwbase_rt.c    |  3 +++
> kernel/locking/rwsem.c        |  9 +++++++++
> kernel/locking/semaphore.c    | 14 +++++++++++++-
> 8 files changed, 56 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index ee2fd7614a93..c88deda77cf2 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -644,6 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
> 	}
> 
> 	set_current_state(state);
> +	trace_contention_begin(lock, 0);

This should be LCB_F_SPIN rather than the hardcoded 0.
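
I.e., something along these lines (assuming the LCB_F_* flags
introduced in patch 1/2 of this series):

	trace_contention_begin(lock, LCB_F_SPIN);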

> 	for (;;) {
> 		bool first;
> 
> @@ -710,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
> skip_wait:
> 	/* got the lock - cleanup and rejoice! */
> 	lock_acquired(&lock->dep_map, ip);
> +	trace_contention_end(lock, 0);
> 
> 	if (ww_ctx)
> 		ww_mutex_lock_acquired(ww, ww_ctx);
> @@ -721,6 +723,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
> err:
> 	__set_current_state(TASK_RUNNING);
> 	__mutex_remove_waiter(lock, &waiter);
> +	trace_contention_end(lock, ret);
> err_early_kill:
> 	raw_spin_unlock(&lock->wait_lock);
> 	debug_mutex_free_waiter(&waiter);
> diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> index c9fdae94e098..833043613af6 100644
> --- a/kernel/locking/percpu-rwsem.c
> +++ b/kernel/locking/percpu-rwsem.c
> @@ -9,6 +9,7 @@
> #include <linux/sched/task.h>
> #include <linux/sched/debug.h>
> #include <linux/errno.h>
> +#include <trace/events/lock.h>
> 
> int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
> 			const char *name, struct lock_class_key *key)
> @@ -154,6 +155,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore *sem, bool reader)
> 	}
> 	spin_unlock_irq(&sem->waiters.lock);
> 
> +	trace_contention_begin(sem, LCB_F_PERCPU | (reader ? LCB_F_READ : LCB_F_WRITE));
> 	while (wait) {
> 		set_current_state(TASK_UNINTERRUPTIBLE);
> 		if (!smp_load_acquire(&wq_entry.private))
> @@ -161,6 +163,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore *sem, bool reader)
> 		schedule();
> 	}
> 	__set_current_state(TASK_RUNNING);
> +	trace_contention_end(sem, 0);

So for the reader-writer locks and percpu rwsems, the "trace contention end"
will always have ret=0. Likewise for qspinlock, qrwlock, and rtlock. It seems
a waste of trace buffer space to always reserve room for a return value that
is always 0.

Sorry if I missed prior discussions of that topic, but why introduce this
single "trace contention begin/end" muxer tracepoint with flags rather than
per-locking-type tracepoints? Each per-locking-type tracepoint could be tuned
to carry only the fields that are needed for that locking type (rough sketch
below).
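
For illustration, a rough and untested sketch of what a per-locking-type
event without the unused return value could look like. The event and field
names here are made up for illustration, not taken from the patch:

TRACE_EVENT(contention_begin_rwsem,

	TP_PROTO(struct rw_semaphore *lock, bool reader),

	TP_ARGS(lock, reader),

	TP_STRUCT__entry(
		__field(void *, lock_addr)
		__field(bool, reader)
	),

	TP_fast_assign(
		__entry->lock_addr = lock;
		__entry->reader = reader;
	),

	TP_printk("%p %s", __entry->lock_addr,
		  __entry->reader ? "read" : "write")
);

That way each locking type only pays for the fields it actually uses.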

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


