On Tue, May 31, 2011 at 04:18:14AM -0700, Nicholas A. Bellinger wrote:
> I very much agree with these points, and moving to workqueues for TCM
> v4.1 (mainline v3.1) for these reasons makes alot of sense to me. I
> need to get a better idea of how this will actually look, but am eager
> to move ahead..

I think the first step would be to split the se_task execution out of
the current thread into a workqueue.  That would also help with making
the file backend scale much better.  If it wasn't for head of queue
semantics this would be almost trivial - just embed a work_struct in
every task and queue it up to the workqueue, which handles the
concurrency.  I'm not entirely sure how to handle head of queue
semantics correctly; maybe just adding a pointer to pending head of
queue tasks to the se_dev and executing those first might work.  A
rough sketch of that approach is appended at the end of this mail.

Btw, I can't fully make sense of the current task execution, so here's
a few questions:

 - why do we have separate __transport_execute_tasks and
   transport_execute_tasks, or rather why is it safe to use
   __transport_execute_tasks from transport_processing_thread without
   the checks for shutting down?
 - what is the point of the transport_generic_process_write wrapper?
 - why does transport_generic_new_cmd call transport_execute_tasks for
   reads, but for writes we leave it to the ->write_pending method,
   which only calls transport_execute_tasks for some of the fabrics?
   Note that with the current code transport_generic_new_cmd might not
   get called from the fabric thread and thus stall the caller - it
   might make sense to switch to calling transport_generic_handle_data
   for that case.
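Here's the sketch mentioned above.  All the names in it (example_dev,
example_task, example_do_task, task_wq) are made up for illustration
and don't match the real se_dev/se_task definitions, the locking is
simplified (a real version would need the irq-safe spinlock variants
depending on the submission context), and a task that gets executed
early via the head of queue list must not be freed until its own work
item has also run:

#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct example_dev {
	struct workqueue_struct	*task_wq;	/* per-device execution workqueue */
	spinlock_t		 lock;
	struct list_head	 hoq_list;	/* pending head of queue tasks */
};

struct example_task {
	struct example_dev	*dev;
	struct work_struct	 work;		/* embedded, one work item per task */
	struct list_head	 hoq_entry;
	bool			 head_of_queue;
};

/*
 * Placeholder for whatever actually executes the task against the
 * backend.
 */
static void example_do_task(struct example_task *task)
{
}

static void example_task_work(struct work_struct *work)
{
	struct example_task *task =
		container_of(work, struct example_task, work);
	struct example_dev *dev = task->dev;

	/*
	 * Head of queue tasks were parked on dev->hoq_list at submit time;
	 * whichever worker runs next pulls them off and executes them before
	 * its own task.  list_del_init() under the lock makes sure each one
	 * is executed exactly once, either here or by its own work item.
	 */
	for (;;) {
		struct example_task *hoq = NULL;

		spin_lock(&dev->lock);
		if (!list_empty(&dev->hoq_list)) {
			hoq = list_first_entry(&dev->hoq_list,
					       struct example_task, hoq_entry);
			list_del_init(&hoq->hoq_entry);
		}
		spin_unlock(&dev->lock);

		if (!hoq)
			break;
		example_do_task(hoq);
	}

	/*
	 * A head of queue task is executed through the loop above (by this
	 * or an earlier worker), so only run ordinary tasks here.
	 */
	if (!task->head_of_queue)
		example_do_task(task);
}

static void example_queue_task(struct example_dev *dev,
			       struct example_task *task)
{
	INIT_WORK(&task->work, example_task_work);

	if (task->head_of_queue) {
		/* park it so the next worker to run picks it up first */
		spin_lock(&dev->lock);
		list_add_tail(&task->hoq_entry, &dev->hoq_list);
		spin_unlock(&dev->lock);
	}

	/* the workqueue provides the concurrency */
	queue_work(dev->task_wq, &task->work);
}

Ordinary tasks simply run from their own work item, while head of
queue tasks are parked on the per-device list and picked up by
whichever work item runs next, so they jump ahead of everything that
is already queued.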