Re: [PATCH-tip v5 17/21] TP-futex: Group readers together in wait queue

On 02/03/2017 01:23 PM, valdis.kletnieks@xxxxxx wrote:
> On Fri, 03 Feb 2017 13:03:50 -0500, Waiman Long said:
>
>> On a 2-socket 36-core E5-2699 v3 system (HT off) running a 4.10-based kernel:
>>                           WW futex       TP futex         Glibc
>>                           --------       --------         -----
>> Total locking ops        35,707,234     58,645,434     10,930,422
>> Per-thread avg/sec           99,149        162,887         30,362
>> Per-thread min/sec           93,190         38,641         29,872
>> Per-thread max/sec          104,213        225,983         30,708
> Do we understand where the 38K number came from?  I'm a bit concerned that the
> min-to-max has such a large dispersion compared to all the other numbers.  Was
> that a worst-case issue, and is the worst case something likely to happen in
> production, or does it require special effort to trigger?
>
Because the lock isn't fair, and because of where the lock happens to be
placed relative to each CPU, some CPUs have a higher likelihood of
getting the lock than others. This shows up as the different per-thread
locking rates reported by the micro-benchmark. The micro-benchmark is
included in this patch set, so you can play around with it if you want.
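As an illustration of the effect (a user-space sketch only, not the
TP-futex code; all names below are made up for the example): with a
plain test-and-set lock, the CPU that last held the lock cacheline, and
its socket neighbors, can usually reacquire it before a remote waiter
even sees the line go free, so per-thread acquisition counts diverge in
much the same way as the min/max rates in the table above.

/*
 * Unfair test-and-set lock demo: per-thread acquisition counts
 * diverge because cache-hot CPUs win most handoff races.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 8
#define NSECS    2

static atomic_int lock;                 /* 0 = free, 1 = held */
static atomic_int stop;
static long counts[NTHREADS];

static void *worker(void *arg)
{
        long me = (long)arg;

        while (!atomic_load_explicit(&stop, memory_order_relaxed)) {
                /* Unfair acquire: barge in, no queue or ticket. */
                while (atomic_exchange_explicit(&lock, 1,
                                                memory_order_acquire))
                        ;               /* spin; cache-hot CPUs win */
                counts[me]++;           /* critical section */
                atomic_store_explicit(&lock, 0, memory_order_release);
        }
        return NULL;
}

int main(void)
{
        pthread_t tids[NTHREADS];
        long i;

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tids[i], NULL, worker, (void *)i);
        sleep(NSECS);
        atomic_store(&stop, 1);
        for (i = 0; i < NTHREADS; i++) {
                pthread_join(tids[i], NULL);
                printf("thread %ld: %ld acquisitions\n", i, counts[i]);
        }
        return 0;
}

A fair (queued) lock would hand the lock over in FIFO order and flatten
that spread, at the cost of overall throughput.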

This patch set does guarantee some minimum performance level, but it
can't guarantee fairness for all the lock waiters.

Regards,
Longman




