On 08/01/2013 03:10 PM, Peter Zijlstra wrote:
> On Wed, Jul 31, 2013 at 10:37:10PM -0400, Waiman Long wrote:
> OK, so over-all I rather like the thing. It might be good to include a
> link to some MCS lock description, sadly Wikipedia doesn't have an
> article on the concept :/
>
> http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
>
> That seems like a nice (short-ish) write-up of the general algorithm.
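For anyone reading along without the paper, the core MCS idea is that each
CPU spins on its own queue node instead of on the lock word itself. Below is
a minimal user-space sketch of that general algorithm using GCC atomic
builtins; the names and details are mine, not code from the patch or from
the paper:

struct mcs_node {
	struct mcs_node *next;
	int		 locked;	/* 1 while we still have to wait */
};

struct mcs_lock {
	struct mcs_node *tail;		/* last waiter, or NULL if unlocked */
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	node->next   = NULL;
	node->locked = 1;

	/* Atomically put ourselves at the tail of the queue. */
	prev = __atomic_exchange_n(&lock->tail, node, __ATOMIC_ACQ_REL);
	if (!prev)
		return;			/* queue was empty: lock is ours */

	/* Link in behind the previous waiter and spin on our own node. */
	__atomic_store_n(&prev->next, node, __ATOMIC_RELEASE);
	while (__atomic_load_n(&node->locked, __ATOMIC_ACQUIRE))
		;			/* spin on a local cache line only */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = __atomic_load_n(&node->next, __ATOMIC_ACQUIRE);

	if (!next) {
		struct mcs_node *expect = node;

		/* No successor visible: try to mark the lock free again. */
		if (__atomic_compare_exchange_n(&lock->tail, &expect, NULL, 0,
						__ATOMIC_ACQ_REL,
						__ATOMIC_ACQUIRE))
			return;

		/* A new waiter raced with us; wait for it to link itself. */
		while (!(next = __atomic_load_n(&node->next, __ATOMIC_ACQUIRE)))
			;
	}
	/* Pass the lock on to the next waiter. */
	__atomic_store_n(&next->locked, 0, __ATOMIC_RELEASE);
}

The point is that each waiter spins only on its own node->locked, so
contention does not bounce the main lock word between caches.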
>> +typedef struct qspinlock {
>> +	union {
>> +		struct {
>> +			u8  locked;	/* Bit lock */
>> +			u8  reserved;
>> +			u16 qcode;	/* Wait queue code */
>> +		};
>> +		u32 qlock;
>> +	};
>> +} arch_spinlock_t;
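As I read the layout, the point of the union is that the slow path can work
on the whole 32-bit word while the unlock only has to touch the byte lock.
A rough illustration with a hypothetical helper (little-endian assumed, not
code from the patch):

/*
 * Illustration only: on little-endian the byte lock is the low byte of
 * qlock, so a single 32-bit cmpxchg can check "unlocked and nobody
 * queued" (qlock == 0) and take the lock in one step, while the unlock
 * path clears just the one byte.
 */
static inline int queue_spin_trylock_sketch(struct qspinlock *lock)
{
	return cmpxchg(&lock->qlock, 0, 1) == 0;
}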
>> +static __always_inline void queue_spin_unlock(struct qspinlock *lock)
>> +{
>> +	barrier();
>> +	ACCESS_ONCE(lock->locked) = 0;
>
> It's always good to add comments with barriers..
>
>> +	smp_wmb();
>> +}
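For reference, one possible commented form; the comments are only my reading
of what each barrier is there for, not text from the patch:

static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
	/*
	 * Compiler barrier: keep the critical-section accesses from being
	 * reordered (by the compiler) past the store that drops the lock.
	 */
	barrier();
	ACCESS_ONCE(lock->locked) = 0;
	/*
	 * Order the byte-lock release before any later stores issued by
	 * this CPU.
	 */
	smp_wmb();
}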
>> +/*
>> + * The queue node structure
>> + */
>> +struct qnode {
>> +	struct qnode *next;
>> +	u8	 wait;		/* Waiting flag */
>> +	u8	 used;		/* Used flag */
>> +#ifdef CONFIG_DEBUG_SPINLOCK
>> +	u16	 cpu_nr;	/* CPU number */
>> +	void	*lock;		/* Lock address */
>> +#endif
>> +};
>> +
>> +/*
>> + * The 16-bit wait queue code is divided into the following 2 fields:
>> + *	Bits 0-1 : queue node index
>> + *	Bits 2-15: cpu number + 1
>> + *
>> + * The current implementation will allow a maximum of (1<<14)-1 = 16383 CPUs.
>
> I haven't yet read far enough to figure out why you need the -1 thing,
> but effectively you're restricted to 15k due to this.
It is exactly 16k-1, not 15k. The CPU code is the CPU number + 1, so codes
1 to 16k-1 stand for CPUs 0 to 16k-2; code 0 is reserved to indicate that
no CPU is queued, which is where the -1 comes from.
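Spelled out in code (the helper names are made up, just to make the encoding
concrete):

/*
 * Illustration of the 16-bit qcode encoding described above:
 *	bits 0-1  : per-cpu queue node index (0..3)
 *	bits 2-15 : cpu number + 1, 0 = no CPU queued
 */
static inline u16 encode_qcode(unsigned int cpu_nr, unsigned int node_idx)
{
	return ((cpu_nr + 1) << 2) | (node_idx & 3);
}

static inline unsigned int qcode_to_cpu(u16 qcode)
{
	return (qcode >> 2) - 1;	/* valid only when (qcode >> 2) != 0 */
}

/* 14 bits for cpu+1, code 0 reserved => at most (1 << 14) - 1 = 16383 CPUs. */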