> > > +	/*
> > > +	 * Set up the new cpu code to be exchanged
> > > +	 */
> > > +	my_qcode = SET_QCODE(cpu_nr, qn_idx);
> > > +
> >
> > If we get interrupted here before we have a chance to set the used flag,
> > the interrupt handler could pick up the same qnode if it tries to
> > acquire the queued spinlock. Then we could overwrite the qcode we have
> > set here.
> >
> > Perhaps an exchange operation for the used flag to prevent this race
> > condition?
>
> I don't get why we need the used thing at all; something like:
>
> struct qna {
> 	int cnt;
> 	struct qnode nodes[4];
> };
>
> DEFINE_PER_CPU(struct qna, qna);
>
> struct qnode *get_qnode(void)
> {
> 	struct qna *q = this_cpu_ptr(&qna);
>
> 	return &q->nodes[q->cnt++]; /* RMW */
> }
>
> void put_qnode(struct qnode *qnode)
> {
> 	struct qna *q = this_cpu_ptr(&qna);
>
> 	q->cnt--;
> }
>
> Should do fine, right?
>
> If we interrupt the RMW above, the interrupted context hasn't yet used
> the queue, and once we return it's free again, so all should be well
> even on load-store archs.

Agreed. This approach is more efficient and avoids the overhead of
searching for an unused node and setting the used flag.

Tim

> The nodes array might as well be 3, because NMIs should never contend on
> a spinlock, so all we're left with is task, softirq and hardirq context.
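For illustration, here is a minimal userspace sketch of the get/put
scheme above. _Thread_local stands in for DEFINE_PER_CPU, and the
MAX_NESTING name and the asserts are made up for the example; it shows
the nesting discipline, not the actual kernel code:

#include <assert.h>
#include <stdio.h>

struct qnode {
	struct qnode *next;
	int locked;
};

/*
 * Three slots per "CPU": task, softirq and hardirq context can each
 * hold at most one qnode, and NMIs never contend on a spinlock.
 */
#define MAX_NESTING 3

struct qna {
	int cnt;
	struct qnode nodes[MAX_NESTING];
};

/* Userspace stand-in for DEFINE_PER_CPU(struct qna, qna). */
static _Thread_local struct qna qna;

static struct qnode *get_qnode(void)
{
	/*
	 * In the kernel this increment must be a single local RMW so an
	 * interrupt arriving mid-allocation sees either the old or the
	 * new count, never a torn one.
	 */
	assert(qna.cnt < MAX_NESTING);
	return &qna.nodes[qna.cnt++];
}

static void put_qnode(void)
{
	/* Strict LIFO: an interrupt finishes before the code it preempted. */
	qna.cnt--;
}

int main(void)
{
	struct qnode *task = get_qnode();	/* task context */
	struct qnode *hirq = get_qnode();	/* nested "interrupt" */

	assert(task == &qna.nodes[0]);
	assert(hirq == &qna.nodes[1]);

	put_qnode();	/* interrupt returns */
	put_qnode();	/* task releases */
	assert(qna.cnt == 0);

	printf("nesting allocation unwound cleanly\n");
	return 0;
}

The LIFO put is safe precisely because an interrupt always completes
before the context it preempted resumes, so the counter can only
unwind in reverse order of allocation.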