On 2022/2/21 4:01 PM, Eugenio Perez Martin wrote:
On Mon, Feb 21, 2022 at 8:39 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
On 2022/2/18 8:35 PM, Eugenio Perez Martin wrote:
On Tue, Feb 8, 2022 at 4:23 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
On 2022/1/31 11:34 PM, Eugenio Perez Martin wrote:
On Sat, Jan 29, 2022 at 9:06 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
On 2022/1/22 4:27 AM, Eugenio Pérez wrote:
Signed-off-by: Eugenio Pérez <eperezma@xxxxxxxxxx>
---
hw/virtio/vhost-vdpa.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 18de14f0fb..029f98feee 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -687,13 +687,29 @@ static int vhost_vdpa_set_vring_kick(struct vhost_dev *dev,
}
}
-static int vhost_vdpa_set_vring_call(struct vhost_dev *dev,
- struct vhost_vring_file *file)
+static int vhost_vdpa_set_vring_dev_call(struct vhost_dev *dev,
+ struct vhost_vring_file *file)
{
trace_vhost_vdpa_set_vring_call(dev, file->index, file->fd);
return vhost_vdpa_call(dev, VHOST_SET_VRING_CALL, file);
}
+static int vhost_vdpa_set_vring_call(struct vhost_dev *dev,
+ struct vhost_vring_file *file)
+{
+ struct vhost_vdpa *v = dev->opaque;
+
+ if (v->shadow_vqs_enabled) {
+ int vdpa_idx = vhost_vdpa_get_vq_index(dev, file->index);
+ VhostShadowVirtqueue *svq = g_ptr_array_index(v->shadow_vqs, vdpa_idx);
+
+ vhost_svq_set_guest_call_notifier(svq, file->fd);
Two questions here (had similar questions for vring kick):
1) Any reason that we set up the eventfd for vhost-vdpa in
vhost_vdpa_svq_setup() and not here?
I'm not sure what you mean.
The guest->SVQ call and kick fds are set here and at
vhost_vdpa_set_vring_kick. The event notifier handler of the guest ->
SVQ kick_fd is set at vhost_vdpa_set_vring_kick /
vhost_svq_set_svq_kick_fd. The guest -> SVQ call fd has no event
notifier handler since we don't poll it.
On the other hand, the SVQ <-> device connection uses the same fds
from beginning to end, and they will not change with, for example,
call fd masking. That's why it's set up in vhost_vdpa_svq_setup.
Delaying it to vhost_vdpa_set_vring_call would make us add way more
logic there.
More logic in the generic shadow vq code, but less code in the
vhost-vdpa specific code, I think.
E.g. we can move the kick set logic from vhost_vdpa_svq_set_fds() to
here.
But they are different fds. vhost_vdpa_svq_set_fds sets the
SVQ<->device fds. This function sets the SVQ->guest call file
descriptor. Moving the logic of vhost_vdpa_svq_set_fds here would
imply either:
a) Logic to know if we are receiving the first call fd or not.
Any reason for this? I guess you meant multiqueue. If yes, it should
not make much difference since we have idx as a parameter.
With "first call fd" I meant "the first time we receive the call fd",
so we only set them once.
I think this is going to be easier if I prepare a patch doing it your
way and we comment on it.
That would be helpful but if there's no issue with current code (see
below), we can leave it as is and do optimization on top.
That code is not in the series at the moment, because setting it at
vhost_vdpa_dev_start tells the difference for free. It's just adding
code, not moving it.
b) Logic to set *the same* file descriptor to the device again, with
logic to tell whether we have missed calls. That logic is not
implemented for the device->SVQ call file descriptor, because we are
assuming it never changes after vhost_vdpa_svq_set_fds. So this is
again adding code.
At this moment, we have:
vhost_vdpa_svq_set_fds:
set SVQ<->device fds
vhost_vdpa_set_vring_call:
set guest<-SVQ call
vhost_vdpa_set_vring_kick:
set guest->SVQ kick.
If I understood correctly, the alternative would be something like:
vhost_vdpa_set_vring_call:
set guest<-SVQ call
if(!vq->dev_call_set) {
- set SVQ<-device call.
- vq->dev_call_set = true
}
vhost_vdpa_set_vring_kick:
set guest->SVQ kick
if(!vq->dev_kick_set) {
- set SVQ->device kick.
- vq->dev_kick_set = true
}
dev_reset / dev_stop:
for vq in vqs:
vq->dev_kick_set = vq->dev_call_set = false
...
Or have I misunderstood something?
I wonder what happens if MSI-X is masked in the guest. So if I
understand correctly, we don't disable the eventfd on the device side?
If yes, this seems suboptimal.
We cannot disable the device's call fd unless SVQ actively polls it. As
I see it, if the guest masks the call fd, it could be because:
a) it doesn't want to receive more calls because it is processing buffers
b) it is going to burn a CPU polling it.
The masking only affects SVQ->guest call. If we also mask device->SVQ,
we're adding latency in case a), and we're effectively disabling
forwarding in case b).
Right, so we need to leave a comment to explain this; then I'm totally
fine with this approach.
It only works if the guest is effectively not interested in calls
because it is not going to retire used buffers, but in that case it
doesn't hurt to simply maintain the device->SVQ call fd; the eventfds
are going to be silent anyway.
Thanks!
Yes.
Thanks
Thanks
Thanks!
Thanks
2) The call could be disabled by using -1 as the fd; I don't see any
code to deal with that.
Right, I didn't take that into account. vhost-kernel also takes -1 as
the kick_fd to unbind, so SVQ can certainly be reworked to take that
into account.
Thanks!
Thanks
+ return 0;
+ } else {
+ return vhost_vdpa_set_vring_dev_call(dev, file);
+ }
+}
+
/**
* Set shadow virtqueue descriptors to the device
*
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization