On 12/15/2010 08:33 PM, James Bottomley wrote:
> A single flush won't quite work.  The target is a parent of the
> device, and the release methods of both have
> execute_in_process_context() requirements.  What can happen here is
> that the last put of the device will release the target (from the
> function).  If both are moved to workqueues, a single flush could
> cause the execution of the device work, which then queues up target
> work (and makes it still pending).  A double flush will solve this
> (because I think our nesting level doesn't go beyond 2) but it's a
> bit ugly ...

Yeap, that's an interesting point actually.  In the patch I just sent
there is no explicit flush; it's implied by destroy_workqueue(), and it
has been bothering me a bit that destroy_workqueue() could exit with
works still pending if executing the current one queues more.  I was
pondering making destroy_workqueue() actually drain all the scheduled
works, and maybe trigger a warning if it seems to loop for too long.

But, anyway, I don't think that's gonna happen here.  If the last put
hasn't been executed, the module reference count wouldn't be zero, so
module unload can't initiate, right?

> execute_in_process_context() doesn't have this problem because the
> first call automatically executes the second inline (because it now
> has context).

Yes, it wouldn't have that problem, but it becomes subtle to high
heavens.  I don't think the queue-destroyed-with-pending-works problem
exists here, because of the module refcounts, but I could be mistaken.
Either way, I'll fix destroy_workqueue() so that it actually drains the
workqueue before destruction, which seems like the right thing to do
anyway; then SCSI doesn't have to worry about double flushing or
whatnot.

How does that sound?

Thanks.

--
tejun