On Wed, Aug 14, 2013 at 09:03:28AM +0200, Bart Van Assche wrote:
> If two or more items are queued on dev->work_list before
> vhost_worker() starts processing these then the value of
> work->done_seq will be set to the sequence number of a work item
> that has not yet been processed. Avoid this by letting
> vhost_worker() count the number of items that have already been
> processed.
>
> Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
> Cc: Michael S. Tsirkin <mst@xxxxxxxxxx>
> Cc: Asias He <asias@xxxxxxxxxx>

I'm confused by this explanation.

done_seq is set here:

                if (work) {
                        work->done_seq = seq;
                        if (work->flushing)
                                wake_up_all(&work->done);
                }

and work is set here:

                if (!list_empty(&dev->work_list)) {
                        work = list_first_entry(&dev->work_list,
                                                struct vhost_work, node);
                        list_del_init(&work->node);
                        seq = work->queue_seq;
                }

this work is processed on the next line:

                if (work) {
                        __set_current_state(TASK_RUNNING);
                        work->fn(work);
                        if (need_resched())
                                schedule();
                }

so how do we end up with a sequence of a work item that isn't
processed?

> ---
>  drivers/vhost/vhost.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index e7ffc10..11d668a 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -202,7 +202,7 @@ static int vhost_worker(void *data)
>  {
>         struct vhost_dev *dev = data;
>         struct vhost_work *work;
> -       unsigned seq;
> +       unsigned seq = 0;
>         mm_segment_t oldfs = get_fs();
>
>         set_fs(USER_DS);
> @@ -216,14 +216,13 @@ static int vhost_worker(void *data)
>                         work = list_first_entry(&dev->work_list,
>                                                 struct vhost_work, node);
>                         list_del_init(&work->node);
> -                       seq = work->queue_seq;
>                         spin_unlock_irq(&dev->work_lock);
>
>                         __set_current_state(TASK_RUNNING);
>                         work->fn(work);
>
>                         spin_lock_irq(&dev->work_lock);
> -                       work->done_seq = seq;
> +                       work->done_seq = ++seq;
>                         if (work->flushing)
>                                 wake_up_all(&work->done);
>                 }
> --
> 1.7.10.4
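
For anyone following along, here is a simplified user-space sketch of how the
two counters are meant to pair up end to end. It is not the kernel code:
pthreads stand in for dev->work_lock and the work->done waitqueue, the list
handling is trimmed down, and only the names (queue_seq, done_seq, work_queue,
work_flush) mirror vhost.c for readability. The point it illustrates is that
the flush side waits for the specific queuing it observed on one work item,
which is why the worker records that item's queue_seq:

/*
 * Sketch only, not vhost.c: pthread mutex/condvar in place of the
 * kernel spinlock and waitqueue.
 */
#include <pthread.h>
#include <stdio.h>

struct work {
        struct work *next;
        void (*fn)(struct work *);
        unsigned queue_seq;     /* assigned when the item is queued */
        unsigned done_seq;      /* seq of this item's last completed queuing */
};

static pthread_mutex_t work_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work_done = PTHREAD_COND_INITIALIZER;
static struct work *work_list;  /* LIFO for brevity; vhost uses a FIFO list */
static unsigned queue_seq_ctr;

static void work_queue(struct work *w)          /* cf. vhost_work_queue() */
{
        pthread_mutex_lock(&work_lock);
        w->queue_seq = ++queue_seq_ctr;
        w->next = work_list;
        work_list = w;
        pthread_mutex_unlock(&work_lock);
}

static void *worker(void *arg)                  /* cf. the vhost_worker() loop */
{
        (void)arg;
        for (;;) {
                struct work *w;
                unsigned seq;

                pthread_mutex_lock(&work_lock);
                w = work_list;
                if (!w) {
                        pthread_mutex_unlock(&work_lock);
                        break;                  /* the real worker sleeps here */
                }
                work_list = w->next;
                seq = w->queue_seq;             /* remember which queuing we serve */
                pthread_mutex_unlock(&work_lock);

                w->fn(w);                       /* run the work outside the lock */

                pthread_mutex_lock(&work_lock);
                w->done_seq = seq;              /* per item, tied to its queue_seq */
                pthread_cond_broadcast(&work_done);
                pthread_mutex_unlock(&work_lock);
        }
        return NULL;
}

static void work_flush(struct work *w)          /* cf. vhost_work_flush() */
{
        pthread_mutex_lock(&work_lock);
        unsigned seq = w->queue_seq;
        while ((int)(seq - w->done_seq) > 0)    /* "left > 0": not yet processed */
                pthread_cond_wait(&work_done, &work_lock);
        pthread_mutex_unlock(&work_lock);
}

static void hello(struct work *w)
{
        (void)w;
        printf("work ran\n");
}

int main(void)
{
        struct work w = { .fn = hello };
        pthread_t tid;

        work_queue(&w);
        pthread_create(&tid, NULL, worker, NULL);
        work_flush(&w);                         /* returns only after hello() ran */
        pthread_join(tid, NULL);
        return 0;
}

work_flush() above mirrors, as far as I can tell from the same vhost.c, the
left = seq - work->done_seq check that vhost_work_flush() waits on: with the
counters paired per item, the flush returns exactly when the queuing it
observed for that item has completed.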