On Fri, Mar 25, 2011 at 03:34:39PM -0700, Dan Williams wrote:
> Ok, so that's why I went all the way to recommending a switch to
> delayed_workqueue.  We can bring down the core and ensure all events
> have seen a cancellation.  None of these containing objects are
> dynamically destroyed outside of driver exit, nor would they care
> about the switch from timer callback to process context.  That final
> flush is the only elusive bit that is covered by flush_workqueue.

This seems like a really nasty hack to me.  Right now there are 9
callers of isci_timer_create; of those:

 5 operate on struct scic_sds_controller
 2 operate on struct scic_sds_phy
 1 operates on struct scic_sds_port
 1 operates on struct isci_request

Of these, at least the request has completely different lifetime rules
than the scic_sds_controller, so both the existing code and the
flush_workqueue variant would be incorrect.

I'd suggest that as a first step you remove the dynamic allocation of
the timers, replace it with a structure embedded into the containing
structure, and replace isci_timer_list_destroy with calls to
del_timer_sync when the containing structure is freed.  After that you
can trivially remove the wrappers and just open-code the Linux timer
calls, which should lead to much simpler and easier to understand code.

If the process context of delayed work items provides a significant
benefit to your execution model you can convert the timers to delayed
work items now, and remove the irq-safe locking.  Given that the isci
code still does some non-trivial work from its interrupt handlers and
tasklets, I'm not sure it's going to buy you much, though.

It might be worth trying to replace the tasklets with threaded
interrupts to simplify the locking model, but it will need careful
benchmarking to evaluate the performance impact.
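A minimal sketch of the first-step conversion suggested above: embed a
struct timer_list in the containing object, set it up once at construct
time, and tear it down with del_timer_sync before the object is freed.
The field and function names here (sata_timer, phy_construct, the 250 ms
timeout) are illustrative placeholders, not the actual isci code; only
the setup_timer/mod_timer/del_timer_sync calls are the real kernel API
of that era.

```c
#include <linux/timer.h>
#include <linux/jiffies.h>

struct scic_sds_phy {
	/* ... existing phy state ... */
	struct timer_list sata_timer;	/* was: dynamically allocated wrapper */
};

/* Timer callback; still runs in softirq context, same as before. */
static void phy_sata_timeout(unsigned long data)
{
	struct scic_sds_phy *phy = (struct scic_sds_phy *)data;
	/* handle the timeout against phy ... */
}

static void phy_construct(struct scic_sds_phy *phy)
{
	/* Bind the embedded timer to its callback once, at init. */
	setup_timer(&phy->sata_timer, phy_sata_timeout,
		    (unsigned long)phy);
}

static void phy_start_timeout(struct scic_sds_phy *phy)
{
	/* Arm (or re-arm) the timer; no allocation involved. */
	mod_timer(&phy->sata_timer, jiffies + msecs_to_jiffies(250));
}

static void phy_destruct(struct scic_sds_phy *phy)
{
	/*
	 * Synchronously cancel: once this returns, the callback is
	 * guaranteed not to be running, so the containing structure
	 * can safely be freed.  This replaces isci_timer_list_destroy
	 * and respects each object's own lifetime.
	 */
	del_timer_sync(&phy->sata_timer);
}
```

Because the cancellation happens in each object's own teardown path,
the request timer gets its different lifetime handled correctly instead
of being lumped in with the controller-wide flush.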