On Fri, Sep 16, 2011 at 3:38 AM, Nicholas A. Bellinger
<nab@xxxxxxxxxxxxxxx> wrote:
> From: Roland Dreier <roland@xxxxxxxxxxxxxxx>
>
> When work is scheduled with schedule_work(), the work can end up
> running on multiple CPUs at the same time -- this happens if
> the work is already running on one CPU and schedule_work() is called
> on another CPU.  This leads to list corruption with target_qf_do_work(),
> which is roughly doing:
>
>         spin_lock(...);
>         list_for_each_entry_safe(...) {
>                 list_del(...);
>                 spin_unlock(...);
>
>                 // do stuff
>
>                 spin_lock(...);
>         }
>
> With multiple CPUs running this code, one CPU can end up deleting the
> list entry that the other CPU is about to work on.
>
> Fix this by splicing the list entries onto a local list and then
> operating on that in the work function.

Umm. It sounds like what you really want is just a single-threaded
workqueue.

Wouldn't it be better to do the alloc_workqueue() with WQ_UNBOUND and
a max limit of a single thread? There's a helper function for it:
alloc_ordered_workqueue().

I dunno. Maybe there's a reason why you actually do want concurrent
workqueues, but your description makes it sound like this would be
better resolved by simply using an ordered one.

                  Linus
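
For illustration, here is a minimal sketch of the splice-onto-a-local-list
fix the quoted patch describes. The struct and field names (my_dev,
qf_lock, qf_list, qf_work, my_item) are invented for the example and are
not the actual target-core identifiers:

	#include <linux/kernel.h>
	#include <linux/list.h>
	#include <linux/spinlock.h>
	#include <linux/workqueue.h>

	struct my_item {
		struct list_head node;
	};

	struct my_dev {
		spinlock_t		qf_lock;
		struct list_head	qf_list;
		struct work_struct	qf_work;
	};

	static void my_qf_do_work(struct work_struct *work)
	{
		struct my_dev *dev = container_of(work, struct my_dev,
						  qf_work);
		struct my_item *item, *tmp;
		LIST_HEAD(local_list);

		/* Take ownership of every pending entry in one step. */
		spin_lock(&dev->qf_lock);
		list_splice_init(&dev->qf_list, &local_list);
		spin_unlock(&dev->qf_lock);

		/*
		 * local_list lives on this CPU's stack, so even if the
		 * work function is also running on another CPU, each
		 * instance only walks the entries it spliced off itself.
		 */
		list_for_each_entry_safe(item, tmp, &local_list, node) {
			list_del(&item->node);
			/* do stuff */
		}
	}

Because list_splice_init() empties the shared list while the lock is
held, a second concurrent invocation finds an empty list, and the two
instances can never delete each other's entries.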
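
And a sketch of the alternative Linus suggests: allocating an ordered
workqueue, so that at most one instance of the work item runs at a time.
The names qf_wq, qf_init, and "my_qf" are assumptions for the example:

	#include <linux/workqueue.h>

	static struct workqueue_struct *qf_wq;

	static int __init qf_init(void)
	{
		/*
		 * alloc_ordered_workqueue() creates an unbound
		 * workqueue with max_active of 1, i.e. work items
		 * execute one at a time, in queueing order.
		 */
		qf_wq = alloc_ordered_workqueue("my_qf", 0);
		if (!qf_wq)
			return -ENOMEM;
		return 0;
	}

Work would then be queued with queue_work(qf_wq, &dev->qf_work) instead
of schedule_work(&dev->qf_work), and the original lock/unlock-around-
the-body loop could no longer race with itself.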