>>> This is particularly relevant in high contention situations when new
>>> threads keep arriving on the same socket as the lock holder.
>>
>> In this case, the lock will stay on the same NUMA node/socket for
>> 2^numa_spinlock_threshold times, which is the worst-case scenario if we
>> consider long-term fairness. And if we have multiple nodes, it will take
>> up to 2^numa_spinlock_threshold * (nr_nodes - 1) + nr_cpus_per_node
>> lock transitions until any given thread acquires the lock
>> (assuming 2^numa_spinlock_threshold > nr_cpus_per_node).
>
> You're right that the latest version of the patch handles long-term
> fairness deterministically.
>
> As I understand it, the n-th thread in the main queue is guaranteed to
> acquire the lock after N lock handovers, where N is bounded by
>
>   n - 1 + 2^numa_spinlock_threshold * (nr_nodes - 1)
>
> I'm not sure what role the variable nr_cpus_per_node plays in your
> analysis.

Yeah, that's a minor point, but let me try to clarify.

The "n-th thread in the main queue" is (at most) the nr_cpus_per_node-th
thread for some node k. So when node k gets the preference, that thread
will get the lock after at most nr_cpus_per_node - 1 lock transitions.
Since we are after an upper bound, your analysis is also correct; mine is
just a bit tighter.

Makes sense?

Regards,
-- Alex
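
P.S. To make the arithmetic concrete, here is a toy user-space sketch of
the two bounds. Nothing below comes from the patch itself; the threshold,
the node/CPU counts, and the queue position n are made-up example values:

#include <stdio.h>

/* Illustrative parameters, not taken from the patch. */
#define NUMA_SPINLOCK_THRESHOLD 16  /* lock stays local for up to 2^16 handovers */
#define NR_NODES                4
#define NR_CPUS_PER_NODE        28

int main(void)
{
        unsigned long local_cap = 1UL << NUMA_SPINLOCK_THRESHOLD;
        unsigned long n = 100;  /* example position in the main queue */

        /*
         * Bound from the reply above: the n-th thread waits for the
         * n - 1 threads ahead of it, plus up to 2^threshold handovers
         * on each of the other nr_nodes - 1 nodes.
         */
        unsigned long bound_queue = (n - 1) + local_cap * (NR_NODES - 1);

        /*
         * Tighter bound: the n-th thread is at most the
         * nr_cpus_per_node-th thread of its own node, so once its node
         * gets the preference it waits at most nr_cpus_per_node - 1
         * more handovers (this assumes 2^threshold > nr_cpus_per_node).
         */
        unsigned long bound_node =
                (NR_CPUS_PER_NODE - 1) + local_cap * (NR_NODES - 1);

        printf("queue-position bound: %lu handovers\n", bound_queue);
        printf("per-node bound:       %lu handovers\n", bound_node);
        return 0;
}

For any n > nr_cpus_per_node the per-node bound is the smaller of the
two, which is the sense in which it is tighter.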