On 03/19/2015 08:25 AM, Peter Zijlstra wrote:
On Thu, Mar 19, 2015 at 11:12:42AM +0100, Peter Zijlstra wrote:
So I was now thinking of hashing the lock pointer; let me go and quickly
put something together.
A little something like so; ideally we'd allocate the hashtable since
NR_CPUS is kinda bloated, but it shows the idea I think.
And while this has loops in it (the rehashing thing), their forward
progress does not depend on other CPUs.
And I suspect that for the typical lock contention scenarios it's
unlikely we ever really get into long rehashing chains.
---
include/linux/lfsr.h | 49 ++++++++++++
kernel/locking/qspinlock_paravirt.h | 143 ++++++++++++++++++++++++++++++++----
2 files changed, 178 insertions(+), 14 deletions(-)
--- /dev/null
+
+static int pv_hash_find(struct qspinlock *lock)
+{
+	u64 hash = hash_ptr(lock, PV_LOCK_HASH_BITS);
+	struct pv_hash_bucket *hb, *end;
+	int cpu = -1;
+
+	if (!hash)
+		hash = 1;
+
+	hb = &__pv_lock_hash[hash_align(hash)];
+	for (;;) {
+		for (end = hb + PV_HB_PER_LINE; hb < end; hb++) {
+			struct qspinlock *l = READ_ONCE(hb->lock);
+
+			/*
+			 * If we hit an unused bucket, there is no match.
+			 */
+			if (!l)
+				goto done;
After more careful reading, I think the assumption that hitting an
unused bucket means there is no match does not hold. Consider the
following scenario:
1. cpu 0 puts lock1 into hb[0]
2. cpu 1 puts lock2 into hb[1]
3. cpu 2 clears hb[0]
4. cpu 3 looks for lock2 and doesn't find it
I was thinking about putting a USED flag in the buckets, but then all
of them will eventually end up marked as used. If we put the entries
into a hashed linked list instead, we have to deal with the complicated
synchronization issues of linked list updates.
At this point, I am thinking of going back to your previous idea of
passing the queue head information down the queue. I am now convinced
that the unlock call site patching should work, so I will incorporate
that in my next update.
Please let me know if you think my reasoning is not correct.
Thanks,
Longman