Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path

On Thu, 17 Mar 2022 09:45:28 -0400 (EDT)
Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx> wrote:

> > *sem, bool reader)
> > 		schedule();
> > 	}
> > 	__set_current_state(TASK_RUNNING);
> > +	trace_contention_end(sem, 0);  
> 
> So for the reader-writer locks, and percpu rwlocks, the "trace contention end" will always
> have ret=0. Likewise for qspinlock, qrwlock, and rtlock. It seems to be a waste of trace
> buffer space to always have space for a return value that is always 0.
> 
> Sorry if I missed prior discussions of that topic, but why introduce this single
> "trace contention begin/end" muxer tracepoint with flags rather than per-locking-type
> tracepoints? The per-locking-type tracepoints could be tuned to only have the fields
> that are needed for each locking type.

A per-locking-type tracepoint will also add a bigger footprint.

One extra byte is not an issue. These are just the tracepoints. You can still
attach your own specific LTTng trace events that ignore the zero parameter,
and multiplex into specific types of trace events on your end.

I prefer the current approach as it keeps the tracing footprint down.
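For illustration only (a sketch, not part of this series): assuming the
prototypes proposed here, trace_contention_begin(lock, flags) and
trace_contention_end(lock, ret), and assuming the tracepoints end up exported
to modules, an external consumer could register its own probes, drop the
always-zero ret, and demultiplex on the flags recorded at contention_begin.
The LCB_F_* names below are taken from the proposed header and may differ in
the final version.

#include <linux/module.h>
#include <linux/tracepoint.h>
#include <trace/events/lock.h>

/* Demultiplex on the flags into lock-type specific handling. */
static void probe_contention_begin(void *ignore, void *lock, unsigned int flags)
{
	if (flags & LCB_F_READ)
		pr_debug("read-lock contention on %p\n", lock);
	else if (flags & LCB_F_WRITE)
		pr_debug("write-lock contention on %p\n", lock);
	else
		pr_debug("lock contention on %p\n", lock);
}

/*
 * For rwsem, percpu-rwsem, qspinlock, qrwlock and rtlock, ret is always 0;
 * a consumer that does not care about it can simply not record it.
 */
static void probe_contention_end(void *ignore, void *lock, int ret)
{
	pr_debug("contention on %p ended\n", lock);
}

static int __init contention_probe_init(void)
{
	register_trace_contention_begin(probe_contention_begin, NULL);
	register_trace_contention_end(probe_contention_end, NULL);
	return 0;
}

static void __exit contention_probe_exit(void)
{
	unregister_trace_contention_begin(probe_contention_begin, NULL);
	unregister_trace_contention_end(probe_contention_end, NULL);
	tracepoint_synchronize_unregister();
}

module_init(contention_probe_init);
module_exit(contention_probe_exit);
MODULE_LICENSE("GPL");

That keeps the per-locking-type specialization in the consumer instead of
bloating the in-kernel tracepoints.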

-- Steve


