Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors

On Tue, Apr 23, 2024 at 08:42:57AM +0000, Angus Chen wrote:
> Hi mst.
> 
> > -----Original Message-----
> > From: Michael S. Tsirkin <mst@xxxxxxxxxx>
> > Sent: Tuesday, April 23, 2024 4:35 PM
> > To: Gavin Liu <gavin.liu@xxxxxxxxxxxxxxx>
> > Cc: jasowang@xxxxxxxxxx; Angus Chen <angus.chen@xxxxxxxxxxxxxxx>;
> > virtualization@xxxxxxxxxxxxxxx; xuanzhuo@xxxxxxxxxxxxxxxxx;
> > linux-kernel@xxxxxxxxxxxxxxx; Heng Qi <hengqi@xxxxxxxxxxxxxxxxx>
> > Subject: Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
> > 
> > On Tue, Apr 23, 2024 at 01:39:17AM +0000, Gavin Liu wrote:
> > > On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
> > > > From: Yuxue Liu <yuxue.liu@xxxxxxxxxxxxxxx>
> > > >
> > > > When there is a ctlq and it doesn't require interrupt callbacks, the
> > > > original method of calculating vectors wastes hardware MSI or MSI-X
> > > > resources as well as system IRQ resources.
> > > >
> > > > When conducting performance testing using testpmd in the guest OS, it
> > > > was found that the performance was lower compared to directly using
> > > > vfio-pci to pass the device through.
> > > >
> > > > In scenarios where the virtio device in the guest OS does not use
> > > > interrupts, the vdpa driver still configures the hardware's MSI-X
> > > > vectors. Therefore, the hardware still sends interrupts to the host OS.
> > >
> > > > I just have a question on this part. How come hardware sends
> > > > interrupts? Does the guest driver not disable them?
> > >
> > >    1: Assuming the guest OS's virtio device is using PMD mode, QEMU
> > >       sets the call fd to -1
> > >    2: On the host side, the vhost_vdpa program will set
> > >       vp_vdpa->vring[i].cb.callback to invalid (NULL)
> > >    3: Before the modification, the vp_vdpa_request_irq function does not
> > >       check whether vp_vdpa->vring[i].cb.callback is valid. Instead, it
> > >       enables the hardware's MSI-X interrupts based on the number of
> > >       queues of the device
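To make steps 1 and 2 above concrete, here is a minimal sketch of the host-side path. It loosely paraphrases the VHOST_SET_VRING_CALL handling in drivers/vhost/vdpa.c; it is simplified from memory and the exact code differs across kernel versions:

    /* Sketch, not verbatim kernel code: when QEMU passes call fd = -1,
     * no eventfd context is installed, so the parent vdpa driver
     * (vp_vdpa here) is handed a NULL callback for that virtqueue. */
    case VHOST_SET_VRING_CALL:
            if (vq->call_ctx.ctx) {                 /* valid call eventfd */
                    cb.callback = vhost_vdpa_virtqueue_cb;
                    cb.private = vq;
            } else {                                /* call fd was -1 */
                    cb.callback = NULL;
                    cb.private = NULL;
            }
            ops->set_vq_cb(vdpa, idx, &cb);
            break;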
> > >
> > 
> > So MSI-X is enabled, but why would it trigger? The virtio PMD in poll
> > mode presumably suppresses interrupts after all.
> The virtio PMD is in the guest, but on the host side MSI-X is enabled, so
> the device will trigger interrupts normally. I analysed this bug before,
> and I think Gavin is right. Did I make it clear?

Not really. The guest presumably disables interrupts (it's polling), so
why does the device still send them?
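For context on this exchange: per the virtio spec, driver-side interrupt suppression is only a best-effort hint. Even when a polling guest sets VRING_AVAIL_F_NO_INTERRUPT, the device MAY still send notifications (drivers must tolerate spurious ones), and under vDPA the queue's MSI-X vector on the physical device stays programmed, so any such interrupt lands on the host. A rough illustrative sketch of the rule, not actual device logic:

    #include <stdbool.h>
    #include <linux/virtio_ring.h>  /* struct vring_avail, VRING_AVAIL_F_NO_INTERRUPT */

    /* Illustrative only: this flag is advisory, so a device that ignores
     * it is still spec-compliant. The hard "off switch" is to leave the
     * queue without an MSI-X vector (VIRTIO_MSI_NO_VECTOR), which is
     * effectively what the patch does for callback-less queues. */
    static bool guest_hints_no_interrupt(const struct vring_avail *avail)
    {
            return avail->flags & VRING_AVAIL_F_NO_INTERRUPT;
    }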


> > 
> > >
> > >
> > > ----- Original Message -----
> > > From: Michael S. Tsirkin mst@xxxxxxxxxx
> > > Sent: April 22, 2024 20:09
> > > To: Gavin Liu gavin.liu@xxxxxxxxxxxxxxx
> > > Cc: jasowang@xxxxxxxxxx; Angus Chen angus.chen@xxxxxxxxxxxxxxx;
> > virtualization@xxxxxxxxxxxxxxx; xuanzhuo@xxxxxxxxxxxxxxxxx;
> > linux-kernel@xxxxxxxxxxxxxxx; Heng Qi hengqi@xxxxxxxxxxxxxxxxx
> > > Subject: Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
> > >
> > >
> > >
> > >
> > >
> > > On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
> > > > From: Yuxue Liu <yuxue.liu@xxxxxxxxxxxxxxx>
> > > >
> > > > When there is a ctlq and it doesn't require interrupt callbacks, the
> > > > original method of calculating vectors wastes hardware MSI or MSI-X
> > > > resources as well as system IRQ resources.
> > > >
> > > > When conducting performance testing using testpmd in the guest OS, it
> > > > was found that the performance was lower compared to directly using
> > > > vfio-pci to pass the device through.
> > > >
> > > > In scenarios where the virtio device in the guest OS does not use
> > > > interrupts, the vdpa driver still configures the hardware's MSI-X
> > > > vectors. Therefore, the hardware still sends interrupts to the host OS.
> > >
> > > I just have a question on this part. How come hardware sends
> > > interrupts? Does the guest driver not disable them?
> > >
> > > > Because of this unnecessary
> > > > action by the hardware, hardware performance decreases, and it also
> > > > affects the performance of the host OS.
> > > >
> > > > Before modification: (interrupt mode)
> > > >  32:  0   0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> > > >  33:  0   0  0  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> > > >  34:  0   0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-2
> > > >  35:  0   0  0  0 PCI-MSI 32771-edge    vp-vdpa[0000:00:02.0]-config
> > > >
> > > > After modification: (interrupt mode)
> > > >  32:  0  0  1  7   PCI-MSI 32768-edge  vp-vdpa[0000:00:02.0]-0
> > > >  33: 36  0  3  0   PCI-MSI 32769-edge  vp-vdpa[0000:00:02.0]-1
> > > >  34:  0  0  0  0   PCI-MSI 32770-edge  vp-vdpa[0000:00:02.0]-config
> > > >
> > > > Before modification: (virtio PMD mode for guest OS)
> > > >  32:  0   0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> > > >  33:  0   0  0  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> > > >  34:  0   0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-2
> > > >  35:  0   0  0  0 PCI-MSI 32771-edge    vp-vdpa[0000:00:02.0]-config
> > > >
> > > > After modification: (virtio PMD mode for guest OS)
> > > >  32: 0  0  0   0   PCI-MSI 32768-edge   vp-vdpa[0000:00:02.0]-config
> > > >
> > > > To verify the use of the virtio PMD mode in the guest operating
> > > > system, the following patch needs to be applied to QEMU:
> > > > https://lore.kernel.org/all/20240408073311.2049-1-yuxue.liu@jaguarmicro.com
> > > >
> > > > Signed-off-by: Yuxue Liu <yuxue.liu@xxxxxxxxxxxxxxx>
> > > > Acked-by: Jason Wang <jasowang@xxxxxxxxxx>
> > > > Reviewed-by: Heng Qi <hengqi@xxxxxxxxxxxxxxxxx>
> > > > ---
> > > > V5: modify the description of the printout when an exception occurs
> > > > V4: update the title and assign values to uninitialized variables
> > > > V3: delete unused variables and add validation records
> > > > V2: fix when allocating IRQs, scan all queues
> > > >
> > > >  drivers/vdpa/virtio_pci/vp_vdpa.c | 22 ++++++++++++++++------
> > > >  1 file changed, 16 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > index df5f4a3bccb5..8de0224e9ec2 100644
> > > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > > @@ -160,7 +160,13 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > > >  	struct pci_dev *pdev = mdev->pci_dev;
> > > >  	int i, ret, irq;
> > > >  	int queues = vp_vdpa->queues;
> > > > -	int vectors = queues + 1;
> > > > +	int vectors = 1;
> > > > +	int msix_vec = 0;
> > > > +
> > > > +	for (i = 0; i < queues; i++) {
> > > > +		if (vp_vdpa->vring[i].cb.callback)
> > > > +			vectors++;
> > > > +	}
> > > >
> > > >  	ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);
> > > >  	if (ret != vectors) {
> > > > @@ -173,9 +179,12 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > > >  	vp_vdpa->vectors = vectors;
> > > >
> > > >  	for (i = 0; i < queues; i++) {
> > > > +		if (!vp_vdpa->vring[i].cb.callback)
> > > > +			continue;
> > > > +
> > > >  		snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
> > > >  			"vp-vdpa[%s]-%d\n", pci_name(pdev), i);
> > > > -		irq = pci_irq_vector(pdev, i);
> > > > +		irq = pci_irq_vector(pdev, msix_vec);
> > > >  		ret = devm_request_irq(&pdev->dev, irq,
> > > >  				       vp_vdpa_vq_handler,
> > > >  				       0, vp_vdpa->vring[i].msix_name,
> > > > @@ -185,21 +194,22 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > > >  				"vp_vdpa: fail to request irq for vq %d\n", i);
> > > >  			goto err;
> > > >  		}
> > > > -		vp_modern_queue_vector(mdev, i, i);
> > > > +		vp_modern_queue_vector(mdev, i, msix_vec);
> > > >  		vp_vdpa->vring[i].irq = irq;
> > > > +		msix_vec++;
> > > >  	}
> > > >
> > > >  	snprintf(vp_vdpa->msix_name, VP_VDPA_NAME_SIZE, "vp-vdpa[%s]-config\n",
> > > >  		 pci_name(pdev));
> > > > -	irq = pci_irq_vector(pdev, queues);
> > > > +	irq = pci_irq_vector(pdev, msix_vec);
> > > >  	ret = devm_request_irq(&pdev->dev, irq, vp_vdpa_config_handler, 0,
> > > >  			       vp_vdpa->msix_name, vp_vdpa);
> > > >  	if (ret) {
> > > >  		dev_err(&pdev->dev,
> > > > -			"vp_vdpa: fail to request irq for vq %d\n", i);
> > > > +			"vp_vdpa: fail to request irq for config: %d\n", ret);
> > > >  		goto err;
> > > >  	}
> > > > -	vp_modern_config_vector(mdev, queues);
> > > > +	vp_modern_config_vector(mdev, msix_vec);
> > > >  	vp_vdpa->config_irq = irq;
> > > >
> > > >  	return 0;
> > > > --
> > > > 2.43.0
> > >
> 
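A usage note tied to the commit message above: the before/after tables are host-side /proc/interrupts readings, and the PMD case can be reproduced by driving the virtio device with DPDK's testpmd inside the guest. One plausible sequence (testpmd flags vary by DPDK version; the PCI address is the one from the examples):

    # in the guest: poll-mode driver, no interrupts requested
    dpdk-testpmd -l 0-1 -a 0000:00:02.0 -- -i
    testpmd> start

    # on the host: check which vectors remain allocated
    grep vp-vdpa /proc/interrupts

With the patch applied and the guest polling, the host output should shrink to the single "-config" line shown in the commit message.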




