On Mon, Dec 19, 2022 at 3:12 PM Yongji Xie <xieyongji@xxxxxxxxxxxxx> wrote:
>
> On Mon, Dec 19, 2022 at 2:06 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> >
> > On Mon, Dec 19, 2022 at 12:39 PM Yongji Xie <xieyongji@xxxxxxxxxxxxx> wrote:
> > >
> > > On Fri, Dec 16, 2022 at 11:58 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > >
> > > > On Mon, Dec 5, 2022 at 4:43 PM Xie Yongji <xieyongji@xxxxxxxxxxxxx> wrote:
> > > > >
> > > > > This introduces set_irq_affinity callback in
> > > > > vdpa_config_ops so that vdpa device driver can
> > > > > get the interrupt affinity hint from the virtio
> > > > > device driver. The interrupt affinity hint would
> > > > > be needed by the interrupt affinity spreading
> > > > > mechanism.
> > > > >
> > > > > Signed-off-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
> > > > > ---
> > > > >  drivers/virtio/virtio_vdpa.c | 4 ++++
> > > > >  include/linux/vdpa.h         | 8 ++++++++
> > > > >  2 files changed, 12 insertions(+)
> > > > >
> > > > > diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
> > > > > index 08084b49e5a1..4731e4616ee0 100644
> > > > > --- a/drivers/virtio/virtio_vdpa.c
> > > > > +++ b/drivers/virtio/virtio_vdpa.c
> > > > > @@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs,
> > > > >         struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev);
> > > > >         struct vdpa_device *vdpa = vd_get_vdpa(vdev);
> > > > >         const struct vdpa_config_ops *ops = vdpa->config;
> > > > > +       struct irq_affinity default_affd = { 0 };
> > > > >         struct vdpa_callback cb;
> > > > >         int i, err, queue_idx = 0;
> > > > >
> > > > > +       if (ops->set_irq_affinity)
> > > > > +               ops->set_irq_affinity(vdpa, desc ? desc : &default_affd);
> > > >
> > > > I wonder if we need to do this in vhost-vDPA.
> > >
> > > I don't get why we need to do this in vhost-vDPA? Should this be done in VM?
> >
> > If I was not wrong, this tries to set affinity on the host instead of
> > the guest. More below.
> >
> Yes, it's host stuff. This is used by the virtio device driver to pass
> the irq affinity hint (tell which irq vectors don't need affinity
> management) to the irq affinity manager. In the VM case, it should
> only be related to the guest's virtio device driver and pci irq
> affinity manager. So I don't get why we need to do this in vhost-vDPA.

It's not necessarily the VM, do we have the same requirement for
userspace (like DPDK) drivers?

Thanks

> > > >
> > > > Or it's better to have a
> > > > default affinity by the vDPA parent
> > > >
> > > I think both are OK. But the default value should always be zero, so I
> > > put it in a common place.
> >
> > I think we should either:
> >
> > 1) document the zero default value in vdpa.c
> > 2) set the zero in both vhost-vdpa and virtio-vdpa, or in the vdpa core
> >
>
> Can we only call it in the virtio-vdpa case? Thus the vdpa device
> driver can know whether it needs to do the automatic irq affinity
> management or not. In the vhost-vdpa case, we actually don't need the
> irq affinity management.
>
> Thanks,
> Yongji
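
The include/linux/vdpa.h hunk itself is not quoted in this reply, but based on
the diffstat and the call site in virtio_vdpa_find_vqs() shown above, the new
op would plausibly be declared roughly as below. This is only a sketch inferred
from the quoted context (the kerneldoc wording, member placement, and the
elided neighbouring ops are assumptions), not the actual hunk from the patch:

/*
 * Sketch only: inferred from the virtio_vdpa.c call site quoted above.
 * The real include/linux/vdpa.h hunk in the patch may differ.
 */
struct vdpa_device;
struct irq_affinity;

struct vdpa_config_ops {
	/* ... existing device/virtqueue ops elided ... */

	/*
	 * @set_irq_affinity: optional hint passed down from the virtio
	 * device driver (struct irq_affinity tells which vectors do or
	 * do not need managed affinity) so the vDPA parent driver can
	 * spread virtqueue interrupts accordingly. Callers must check
	 * for NULL before invoking it, as virtio_vdpa_find_vqs() does.
	 */
	void (*set_irq_affinity)(struct vdpa_device *vdev,
				 struct irq_affinity *desc);
};

With a declaration along these lines, virtio-vdpa would stay the only in-kernel
caller if the suggestion at the end of the thread (skip the call on the
vhost-vdpa path, where no host-side irq affinity management is wanted) is
followed.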