On Sun, Jul 25, 2010 at 09:41:22AM +0200, Tejun Heo wrote:
> Hello,
>
> On 07/24/2010 09:14 PM, Michael S. Tsirkin wrote:
> >> I've created kthread_worker in wq#for-next tree and already converted
> >> ivtv to use it. Once this lands in mainline, I think converting vhost
> >> to use it would be better choice. kthread worker code uses basically
> >> the same logic used in the vhost_workqueue code but is better
> >> organized and documented. So, I think it would be better to stick
> >> with the original implementation, as otherwise we're likely to just
> >> decrease test coverage without much gain.
> >>
> >> http://git.kernel.org/?p=linux/kernel/git/tj/wq.git;a=commitdiff;h=b56c0d8937e665a27d90517ee7a746d0aa05af46;hp=53c5f5ba42c194cb13dd3083ed425f2c5b1ec439
> >
> > Sure, if we keep using workqueue. But I'd like to investigate this
> > direction a bit more because there's discussion to switching from
> > kthread to regular threads altogether.
>
> Hmmm? It doesn't have much to do with workqueue. kthread_worker is a
> simple wrapper around kthread. It now assumes kthread but changing it
> to be useable with any thread shouldn't be too hard. Wouldn't that be
> better?

Yes, of course, when common code becomes available we should switch to
that.

> >> I don't think doing this before executing the function is correct,
> >
> > Well, before I execute the function work is NULL, so this is skipped.
> > Correct?
> >
> >> so you'll have to release the lock, execute the function, regrab the
> >> lock and then do the flush processing.
> >>
> >> Thanks.
> >
> > It's done in the loop, so I thought we can reuse the locking
> > done for the sake of processing the next work item.
> > Makes sense?
>
> Yeap, right. I think it would make much more sense to use common code
> when it becomes available but if you think the posted change is
> necessary till then, please feel free to go ahead.
>
> Thanks.
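For readers following along, the locking pattern being discussed can be sketched in userspace. This is a minimal pthread analogue, not the vhost or kthread_worker code itself; all names (`worker`, `queue_work`, `flush_work`, etc.) are illustrative. The point it demonstrates is the one from the thread: the worker drops its lock around the work function and regrabs it at the top of the loop, so marking the previous item complete (the "flush processing") reuses the same lock acquisition that fetches the next item.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical userspace sketch of the worker-loop pattern under
 * discussion; this is NOT the kernel kthread_worker API. */

struct work {
    void (*fn)(struct work *);
    struct work *next;
    bool done;              /* set by the worker, read by flushers */
};

struct worker {
    pthread_mutex_t lock;
    pthread_cond_t cond;    /* signals both new work and completion */
    struct work *head, *tail;
    bool stop;
};

static void worker_init(struct worker *w)
{
    pthread_mutex_init(&w->lock, NULL);
    pthread_cond_init(&w->cond, NULL);
    w->head = w->tail = NULL;
    w->stop = false;
}

static void queue_work(struct worker *w, struct work *work)
{
    pthread_mutex_lock(&w->lock);
    work->next = NULL;
    work->done = false;
    if (w->tail)
        w->tail->next = work;
    else
        w->head = work;
    w->tail = work;
    pthread_cond_broadcast(&w->cond);
    pthread_mutex_unlock(&w->lock);
}

/* The worker loop: the lock is released around work->fn() and regrabbed
 * on the next iteration, so completion of the previous item is recorded
 * under the same lock acquisition used to fetch the next one. */
static void *worker_fn(void *arg)
{
    struct worker *w = arg;
    struct work *work = NULL;

    pthread_mutex_lock(&w->lock);
    for (;;) {
        if (work) {         /* previous item finished: flush processing */
            work->done = true;
            pthread_cond_broadcast(&w->cond);
        }
        while (!w->head && !w->stop)
            pthread_cond_wait(&w->cond, &w->lock);
        if (!w->head && w->stop)
            break;
        work = w->head;
        w->head = work->next;
        if (!w->head)
            w->tail = NULL;
        pthread_mutex_unlock(&w->lock);
        work->fn(work);     /* run without holding the lock */
        pthread_mutex_lock(&w->lock);
    }
    pthread_mutex_unlock(&w->lock);
    return NULL;
}

/* A flusher simply waits until the worker has marked the item done. */
static void flush_work(struct worker *w, struct work *work)
{
    pthread_mutex_lock(&w->lock);
    while (!work->done)
        pthread_cond_wait(&w->cond, &w->lock);
    pthread_mutex_unlock(&w->lock);
}
```

This also illustrates why the "work is NULL before the first function runs" observation holds: on the first pass through the loop there is no previous item, so the completion step is skipped.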
>
> --
> tejun