On 02/13/2015 09:02 PM, Oleg Nesterov wrote:
On 02/13, Raghavendra K T wrote:
@@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
struct __raw_tickets tmp = READ_ONCE(lock->tickets);
- return tmp.tail != tmp.head;
+ return tmp.tail != (tmp.head & ~TICKET_SLOWPATH_FLAG);
}
Well, this can probably use __tickets_equal() too. But this is cosmetic.
That looks good. Added.
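For reference, the function would then read something like this (a sketch; __tickets_equal() is assumed to be the helper introduced earlier in this series that compares two ticket values while ignoring TICKET_SLOWPATH_FLAG):

static inline int __tickets_equal(__ticket_t one, __ticket_t two)
{
	/* Compare tickets with the slowpath flag masked out. */
	return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
}

static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
	struct __raw_tickets tmp = READ_ONCE(lock->tickets);

	/* Locked iff head and tail differ once the flag is ignored. */
	return !__tickets_equal(tmp.tail, tmp.head);
}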
It seems that arch_spin_is_contended() should be fixed with this change,
(__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC
can be true because of TICKET_SLOWPATH_FLAG in .head, even if it is actually
unlocked.
Done.
Hmm! That was because I was still under the impression that the slowpath
bit lives in the tail. You are right, this situation could make the
difference spuriously large and report false contention.
And the "(__ticket_t)" typecast looks unnecessary, it only adds more
confusuin, but this is cosmetic too.
Done.
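Folding both fixes in, the contended check could look like this (a sketch based on the points above, masking the flag out of head and dropping the cast; not necessarily the exact v5 hunk):

static inline int arch_spin_is_contended(arch_spinlock_t *lock)
{
	struct __raw_tickets tmp = READ_ONCE(lock->tickets);

	/*
	 * Mask out TICKET_SLOWPATH_FLAG so a waiter parked in the
	 * slowpath is not misread as contention on a free lock.
	 */
	tmp.head &= ~TICKET_SLOWPATH_FLAG;
	return (tmp.tail - tmp.head) > TICKET_LOCK_INC;
}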
@@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
* check again make sure it didn't become free while
* we weren't looking.
*/
- if (ACCESS_ONCE(lock->tickets.head) == want) {
+ head = READ_ONCE(lock->tickets.head);
+ if (__tickets_equal(head, want)) {
add_stats(TAKEN_SLOW_PICKUP, 1);
goto out;
This is off-topic, but with or without this change perhaps it makes sense
to add smp_mb__after_atomic(). It is a nop on x86, just to make this code
more understandable for those (for me ;) who can never remember even the
x86 rules.
Hope you meant it for add_stats. Yes, smp_mb__after_atomic() would be a
harmless barrier() on x86. I did not add it in V5 as you suggested, but
it made me look at the slowpath_enter code and I added an explicit
barrier() there :).
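To make the ordering concrete, the slowpath entry being discussed is roughly the following (a sketch; the barrier() placement reflects the description above, and the comment wording is mine, not the v5 patch text):

	/*
	 * Mark entry to the slowpath before the pickup test so we
	 * cannot deadlock with an unlocker.
	 */
	__ticket_enter_slowpath(lock);

	/*
	 * set_bit() is a locked op and thus a full barrier on x86; the
	 * explicit compiler barrier documents that the flag store must
	 * not be reordered with the re-check of the ticket head below.
	 */
	barrier();

	/*
	 * check again make sure it didn't become free while
	 * we weren't looking.
	 */
	head = READ_ONCE(lock->tickets.head);
	if (__tickets_equal(head, want)) {
		add_stats(TAKEN_SLOW_PICKUP, 1);
		goto out;
	}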