Re: why index (collectionIndex) need a lock?

I didn't know about mutrace, thanks for that reference!

On Tue, Sep 30, 2014 at 8:13 PM, Milosz Tanski <milosz@xxxxxxxxx> wrote:
> On Tue, Sep 30, 2014 at 7:36 PM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
>> On Tue, Sep 30, 2014 at 10:42 AM, Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote:
>>> Also, I don't think this lock has a big impact on performance since it is already sharded to the index level. I tried a reader/writer implementation of this lock (the logic would be somewhat similar to your state concept) and did not see any benefit.
>>
>> If there is interest in identifying locks that are introducing latency,
>> it might be useful to add some tracking features to Mutex and RWLock. A
>> simple thing would be to just record the maximum wait time per lock and
>> dump it via the admin socket.
>
> Noah,
>
> You're better off running some kind of synthetic test using mutrace
> (you can't use tcmalloc/jemalloc with it) or measuring futex syscalls
> via a perf tracepoint. Generally, adding this kind of tracking into the
> locks themselves ends up being even more expensive.
>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>
>
> --
> Milosz Tanski
> CTO
> 16 East 34th Street, 15th floor
> New York, NY 10016
>
> p: 646-253-9055
> e: milosz@xxxxxxxxx