On Thu, Jul 22, 2010 at 11:21:40PM +0200, Tejun Heo wrote:
> Hello,
>
> On 07/22/2010 05:58 PM, Michael S. Tsirkin wrote:
> > All the tricky barrier pairing made me uncomfortable. So I came up with
> > this on top (untested): if we do all operations under the spinlock, we
> > can get by without barriers and atomics. And since we need the lock for
> > list operations anyway, this should have no performance impact.
> >
> > What do you think?
>
> I've created kthread_worker in the wq#for-next tree and already converted
> ivtv to use it. Once this lands in mainline, I think converting vhost
> to use it would be a better choice. The kthread worker code uses basically
> the same logic used in the vhost_workqueue code but is better
> organized and documented. So, I think it would be better to stick
> with the original implementation, as otherwise we're likely to just
> decrease test coverage without much gain.
>
> http://git.kernel.org/?p=linux/kernel/git/tj/wq.git;a=commitdiff;h=b56c0d8937e665a27d90517ee7a746d0aa05af46;hp=53c5f5ba42c194cb13dd3083ed425f2c5b1ec439

Sure, if we keep using a workqueue. But I'd like to investigate this
direction a bit more, because there is discussion of switching from
kthreads to regular threads altogether.

> > @@ -151,37 +161,37 @@ static void vhost_vq_reset(struct vhost_dev *dev,
> >  static int vhost_worker(void *data)
> >  {
> >  	struct vhost_dev *dev = data;
> > -	struct vhost_work *work;
> > +	struct vhost_work *work = NULL;
> >
> > -repeat:
> > -	set_current_state(TASK_INTERRUPTIBLE);	/* mb paired w/ kthread_stop */
> > +	for (;;) {
> > +		set_current_state(TASK_INTERRUPTIBLE);	/* mb paired w/ kthread_stop */
> >
> > -	if (kthread_should_stop()) {
> > -		__set_current_state(TASK_RUNNING);
> > -		return 0;
> > -	}
> > +		if (kthread_should_stop()) {
> > +			__set_current_state(TASK_RUNNING);
> > +			return 0;
> > +		}
> >
> > -	work = NULL;
> > -	spin_lock_irq(&dev->work_lock);
> > -	if (!list_empty(&dev->work_list)) {
> > -		work = list_first_entry(&dev->work_list,
> > -					struct vhost_work, node);
> > -		list_del_init(&work->node);
> > -	}
> > -	spin_unlock_irq(&dev->work_lock);
> > +		spin_lock_irq(&dev->work_lock);
> > +		if (work) {
> > +			work->done_seq = work->queue_seq;
> > +			if (work->flushing)
> > +				wake_up_all(&work->done);
>
> I don't think doing this before executing the function is correct,

Well, before I execute the function, work is NULL, so this is skipped.
Correct?

> so you'll have to release the lock, execute the function, regrab the
> lock and then do the flush processing.
>
> Thanks.

It's done in the loop, so I thought we could reuse the locking already
done for picking up the next work item. Makes sense?

> --
> tejun
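
For reference, here is a rough, untested sketch of the queue/flush side
that would pair with the loop above under the same scheme: every access
to queue_seq and done_seq happens under dev->work_lock, so the lock
itself provides all the ordering and no barriers or atomics are needed.
The field names come from the diff; treating dev->worker as the worker
task, and the rest of the struct layout, is my assumption.

static void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->work_lock, flags);
	if (list_empty(&work->node)) {
		/* queue_seq only ever changes under the lock */
		work->queue_seq++;
		list_add_tail(&work->node, &dev->work_list);
		wake_up_process(dev->worker);
	}
	spin_unlock_irqrestore(&dev->work_lock, flags);
}

static bool vhost_work_seq_done(struct vhost_dev *dev,
				struct vhost_work *work, unsigned seq)
{
	bool done;

	/* done_seq is read under the lock, like every other access */
	spin_lock_irq(&dev->work_lock);
	done = (int)(seq - work->done_seq) <= 0;
	spin_unlock_irq(&dev->work_lock);
	return done;
}

static void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
{
	unsigned seq;

	spin_lock_irq(&dev->work_lock);
	/* snapshot: flush everything queued up to this point */
	seq = work->queue_seq;
	work->flushing++;
	spin_unlock_irq(&dev->work_lock);

	wait_event(work->done, vhost_work_seq_done(dev, work, seq));

	spin_lock_irq(&dev->work_lock);
	work->flushing--;
	spin_unlock_irq(&dev->work_lock);
}

The flusher snapshots queue_seq and waits until the worker's done_seq
catches up. Since the worker updates done_seq while holding the same
lock it already takes to pick up the next item, the flush wakeup needs
no extra synchronization, which is the point of the patch.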