Hello,

On 07/29/2010 02:23 PM, Michael S. Tsirkin wrote:
> I saw WARN_ON(!list_empty(&dev->work_list)) trigger
> so our custom flush is not as airtight as need be.

Could be, but it's also possible that something queued work after the
last flush. Is the problem reproducible?

> This patch switches to a simple atomic counter + srcu instead of
> the custom locked queue + flush implementation.
>
> This will slow down the setup ioctls, which should not matter -
> it's slow path anyway. We use the expedited flush to at least
> make sure it has a sane time bound.
>
> Works fine for me. I got reports that with many guests,
> work lock is highly contended, and this patch should in theory
> fix this as well - but I haven't tested this yet.

Hmmm... vhost_poll_flush() becomes synchronize_srcu_expedited(). Can
you please explain how it works? synchronize_srcu_expedited() is an
extremely heavy operation involving scheduling the cpu_stop task on all
CPUs, so I'm not quite sure whether doing it on every flush is a good
idea. Is flush supposed to be a very rare operation?

Having a custom implementation is fine too, but let's try to implement
something generic if at all possible.

Thanks.

--
tejun