On Wed, Jul 28, 2010 at 09:48:31AM +0200, Tejun Heo wrote:
> On 07/27/2010 09:19 PM, Michael S. Tsirkin wrote:
> >> Thinking a bit more about it, it kind of sucks that queueing to
> >> another worker from worker->func() breaks flush.  Maybe the right
> >> thing to do there is using atomic_t for done_seq?
> >
> > I don't believe it will help: we might have:
> >
> > worker1 runs work
> > work requeues itself, queued index = 1
> > worker1 reads queued index = 1
> > worker2 runs work
> > work requeues itself, queued index = 2
> > worker2 runs work
> > worker2 reads queued index = 2
> > worker2 writes done index = 2
> > worker1 writes done index = 1
> >
> > As you see, the done index moved backwards.
>
> Yeah, I think the flushing logic should be moved to the worker.
> Are you interested in doing it w/ your change?
>
> Thanks.

I'm not sure how flush_work can work correctly under these conditions.
E.g. in workqueue.c, flush seems to rely on keeping a pointer to the
current workqueue in the work.  But what prevents the workqueue from
being destroyed while the work is not running?  Is this currently
broken if you use multiple workqueues for the same work?  If yes, I
propose we do what my patch did: have flush_work take a worker pointer
and only flush on that worker.

> --
> tejun
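
To make the proposal concrete, here is a minimal userspace sketch of the
"flush takes a worker" idea.  It is not the vhost or workqueue API:
struct worker, queue_work() and flush_worker() are made-up names, and the
single-slot pending queue is only there to keep the example short.  The
point is that queued_seq/done_seq live in the worker, so a work item that
bounces between two workers can never move a shared done index backwards,
and flushing waits only for the worker you name.

/* Hypothetical userspace model of "flush against a specific worker":
 * sequence counters live in the worker, not in the work item, so a work
 * bouncing between workers cannot move a shared done index backwards. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct work {
	void (*fn)(struct work *);
};

struct worker {
	pthread_mutex_t lock;
	pthread_cond_t  cond;       /* kicked on queue and on completion */
	struct work    *pending;    /* one-slot queue keeps the sketch short */
	unsigned long   queued_seq; /* bumped when a work is queued here */
	unsigned long   done_seq;   /* bumped when that work finished here */
	bool            stop;
};

static void *worker_thread(void *arg)
{
	struct worker *w = arg;

	pthread_mutex_lock(&w->lock);
	for (;;) {
		while (!w->pending && !w->stop)
			pthread_cond_wait(&w->cond, &w->lock);
		if (w->stop)
			break;
		struct work *work = w->pending;
		w->pending = NULL;
		pthread_mutex_unlock(&w->lock);
		work->fn(work);             /* run without the lock held */
		pthread_mutex_lock(&w->lock);
		w->done_seq++;              /* this worker's own progress */
		pthread_cond_broadcast(&w->cond);
	}
	pthread_mutex_unlock(&w->lock);
	return NULL;
}

static void queue_work(struct worker *w, struct work *work)
{
	pthread_mutex_lock(&w->lock);
	if (!w->pending) {                  /* already queued: a no-op */
		w->pending = work;
		w->queued_seq++;
		pthread_cond_broadcast(&w->cond);
	}
	pthread_mutex_unlock(&w->lock);
}

/* Flush names the worker: wait until everything queued on *this* worker
 * so far has run; another worker never touches these counters. */
static void flush_worker(struct worker *w)
{
	pthread_mutex_lock(&w->lock);
	unsigned long seq = w->queued_seq;
	while (w->done_seq < seq)
		pthread_cond_wait(&w->cond, &w->lock);
	pthread_mutex_unlock(&w->lock);
}

static void hello(struct work *work)
{
	(void)work;
	printf("work ran\n");
}

int main(void)
{
	struct worker w = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.cond = PTHREAD_COND_INITIALIZER,
	};
	struct work work = { .fn = hello };
	pthread_t tid;

	pthread_create(&tid, NULL, worker_thread, &w);
	queue_work(&w, &work);
	flush_worker(&w);           /* returns only after hello() ran */

	pthread_mutex_lock(&w.lock);
	w.stop = true;
	pthread_cond_broadcast(&w.cond);
	pthread_mutex_unlock(&w.lock);
	pthread_join(tid, NULL);
	return 0;
}

With this layout the teardown question above also has a clear answer:
flush the worker, stop its thread, and only then free it; nothing reaches
the worker through the work item after the flush returns.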