On Thu, Aug 25, 2016 at 02:49:59PM +0300, Joonas Lahtinen wrote:
> > +	 * we move the task_list from this the next ready fence to the tail of
> > +	 * the original fence's task_list (and so added to the list to be
> > +	 * woken).
> > +	 */
> > +	smp_mb__before_spinlock();
> > +	if (!list_empty_careful(&x->task_list)) {
> 
> if (list_empty_careful()
> 	return;

It's just broken. I added it recently after reading

void finish_wait(wait_queue_head_t *q, wait_queue_t *wait)
{
	unsigned long flags;

	__set_current_state(TASK_RUNNING);
	/*
	 * We can check for list emptiness outside the lock
	 * IFF:
	 *  - we use the "careful" check that verifies both
	 *    the next and prev pointers, so that there cannot
	 *    be any half-pending updates in progress on other
	 *    CPU's that we haven't seen yet (and that might
	 *    still change the stack area.
	 * and
	 *  - all other users take the lock (ie we can only
	 *    have _one_ other CPU that looks at or modifies
	 *    the list).
	 */
	if (!list_empty_careful(&wait->task_list)) {
		spin_lock_irqsave(&q->lock, flags);
		list_del_init(&wait->task_list);
		spin_unlock_irqrestore(&q->lock, flags);
	}
}

and convinced myself that it was also safe to apply here. Turns out
that spinlock is very hard to avoid.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
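
For contrast, here is a minimal sketch of the unconditionally-locked form of
the same cleanup, in line with the conclusion above that the spinlock is hard
to avoid. The helper name is hypothetical and this is not the actual
i915_sw_fence fix; it only assumes the 2016-era wait_queue_t / task_list
naming:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/*
 * Hypothetical helper, for illustration only: does the same job as the
 * tail of finish_wait() quoted above, but without the lockless
 * list_empty_careful() fast path.  The wait-queue lock is taken
 * unconditionally, so no barrier reasoning (smp_mb__before_spinlock())
 * is needed and concurrent list updates cannot be missed.
 */
static void example_remove_wait_locked(wait_queue_head_t *q,
				       wait_queue_t *wait)
{
	unsigned long flags;

	spin_lock_irqsave(&q->lock, flags);
	if (!list_empty(&wait->task_list))
		list_del_init(&wait->task_list);
	spin_unlock_irqrestore(&q->lock, flags);
}

The trade-off is an uncontended lock/unlock even when task_list is already
empty, which is exactly the cost the list_empty_careful() fast path in
finish_wait() is there to avoid.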