Re: Measuring lock contention

On Thu, Apr 13, 2017 at 2:02 PM, Mohamad Gebai <mgebai@xxxxxxxx> wrote:
>
> On 04/13/2017 01:20 PM, Mark Nelson wrote:
>>
>> Nice!  I will give it a try and see how it goes. Specifically, I want to
>> compare it to what I ended up working on yesterday.  After the meeting I
>> ended up doing major surgery on an existing gdb-based wallclock profiler and
>> modified it to a) work, b) be thread aware, and c) print inverse call-graphs.
>> The code is still pretty rough, but you can see it here:
>>
>> https://github.com/markhpc/gdbprof
>
> Very neat, thank you for this. Please let me know what happens; I'm
> interested to see which tool ends up working best for this use case. Also,
> could you share some information about what you're trying to find out?
>
> If I'm not mistaken, there's work being done to attach a process's call
> stack as a context to LTTng events. That way we could have this
> information when a process blocks in sys_futex.
>
> PS: found it; it's still an RFC -
> https://lists.lttng.org/pipermail/lttng-dev/2017-March/026985.html
>
>

You can also use perf's syscall tracepoints to capture the
syscalls:sys_enter_futex event; that way you only see the contended
mutexes, since uncontended locks never hit the futex syscall. The nice
thing about it is that all the normal perf tools apply, so you can see
source annotation and frequency (hot spots), and also examine the call
graphs of those hot spots.
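
To make that concrete, here is a rough, self-contained sketch (not Ceph
code; the file name contended.c and the thread/iteration counts are just
made up for illustration): a few threads hammering one pthread_mutex so
that the contention shows up as syscalls:sys_enter_futex events. The perf
invocations in the comment are the usual record/report pair; exact options
and required privileges (syscall tracepoints usually want root or a
relaxed perf_event_paranoid) vary a bit between kernels.

/*
 * contended.c - deliberately contended pthread_mutex for demonstration.
 *
 * Build and trace (roughly; adjust to taste):
 *   gcc -g -O0 -pthread contended.c -o contended
 *   perf record -e syscalls:sys_enter_futex -g -- ./contended
 *   perf report    # call graphs of the futex hot spots
 */
#include <pthread.h>

#define NTHREADS 4
#define ITERS    1000000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static volatile unsigned long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        /* Uncontended locks stay in userspace; only contended ones
         * fall back to sys_futex, which is what perf records. */
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}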

-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@xxxxxxxxx


