Re: [PATCH v2 1/4] locking/qspinlock: Handle > 4 slowpath nesting levels

On Wed, Jan 23, 2019 at 03:11:19PM -0500, Waiman Long wrote:
> On 01/23/2019 04:34 AM, Will Deacon wrote:
> > On Tue, Jan 22, 2019 at 10:49:08PM -0500, Waiman Long wrote:

> >> @@ -412,6 +412,21 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >>  	idx = node->count++;
> >>  	tail = encode_tail(smp_processor_id(), idx);

> >> +	if (unlikely(idx >= MAX_NODES)) {
> >> +		while (!queued_spin_trylock(lock))
> >> +			cpu_relax();
> >> +		goto release;
> >> +	}

> So the additional code checks the idx value and branches to the end of the
> function when the condition is true. There isn't too much overhead here.

So something horrible we could do (and I'm not at all advocating we do
this) is invert node->count. That is, start at 3 and decrement, and
detect sign flips.

That avoids the additional compare. It would require we change the
structure layout though; otherwise we keep hitting that second cacheline
by default, which would suck.


