Christian Borntraeger wrote:
On Wednesday, 12 December 2007, Dor Laor wrote:
I think the change below handles the race. Otherwise, please detail the
use case.
[...]
@@ -292,6 +292,9 @@ static int virtnet_open(struct net_devic
 		return -ENOMEM;
 	napi_enable(&vi->napi);
+
+	vi->rvq->vq_ops->enable(vi->rvq);
+	vi->svq->vq_ops->enable(vi->svq);
If you change it to:
	if (!vi->rvq->vq_ops->enable(vi->rvq))
		vi->rvq->vq_ops->kick(vi->rvq);
	if (!vi->svq->vq_ops->enable(vi->svq))
		vi->svq->vq_ops->kick(vi->svq);

you solve the race of packets already waiting in the queue without
triggering the irq.
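Spelled out in the open path, that pattern would look roughly like the
sketch below (against the enable op proposed in the patch above, and
assuming enable() returns false when buffers are already pending; the
allocation code is elided):

	static int virtnet_open(struct net_device *dev)
	{
		struct virtnet_info *vi = netdev_priv(dev);

		/* ... receive buffer allocation as before, returning
		 * -ENOMEM if not even one buffer could be posted ... */

		napi_enable(&vi->napi);

		/* If enable() reports buffers already pending, kick to
		 * re-notify the host so it raises the missed interrupt. */
		if (!vi->rvq->vq_ops->enable(vi->rvq))
			vi->rvq->vq_ops->kick(vi->rvq);
		if (!vi->svq->vq_ops->enable(vi->svq))
			vi->svq->vq_ops->kick(vi->svq);

		return 0;
	}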
Hmm, I don't fully understand your point. I think this will work as long as
the host has not consumed all inbound buffers. It will also require that
the host sends an additional packet, no? If no additional packet arrives, the
host has no reason to send an interrupt just because it got a notify
hypercall. A kick inside the guest also does not trigger the poll routine.
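To make the last point concrete: in the guest driver of that era, the only
path that schedules the NAPI poll routine is the receive-queue callback,
which runs from the interrupt handler. Roughly (a sketch, not the exact
driver code):

	static void skb_recv_done(struct virtqueue *rvq)
	{
		struct virtnet_info *vi = rvq->vdev->priv;

		/* Runs only when the host raises an interrupt for the
		 * receive queue.  A kick is a guest->host notification
		 * and never reaches this path. */
		if (netif_rx_schedule_prep(vi->dev, &vi->napi))
			__netif_rx_schedule(vi->dev, &vi->napi);
	}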
It also won't work in the following scenario:
in virtnet_open we allocate buffers and hand them to the host using the
kick callback. The host can now use _all_ buffers for incoming data while
interrupts are still disabled and the guest is not running. (Say the
host bridge sees lots of multicast traffic and the guest does not get
scheduled for a while.) When the guest then resumes and enables interrupts,
nothing happens. Doing a kick does not help either, as the host code will
bail out with "no dma memory for transfer".
Christian
You're right, I got confused somehow.
So in that case, setting the driver status field on open, in addition to
your enable, will do the trick.
On DRIVER_OPEN the host will trigger an interrupt if the queue is not
empty.
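Host-side, that would amount to something like the sketch below.
DRIVER_OPEN, queue_has_used_buffers() and inject_queue_interrupt() are
hypothetical names for illustration; only the idea (raise the
possibly-missed interrupt when the driver declares itself open) is from
this discussion:

	static void handle_status_write(struct my_virt_device *vdev, u8 status)
	{
		vdev->status = status;

		/* The driver is now open: if buffers were consumed while
		 * its interrupts were off, deliver the interrupt it would
		 * otherwise have missed. */
		if (status & DRIVER_OPEN) {
			if (queue_has_used_buffers(vdev->rvq))
				inject_queue_interrupt(vdev, vdev->rvq);
			if (queue_has_used_buffers(vdev->svq))
				inject_queue_interrupt(vdev, vdev->svq);
		}
	}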
Thanks,
Dor