Re: Performance test result between virtio_pci MSI-X disable and enable

On Wednesday 01 December 2010 17:29:44 lidong chen wrote:
> maybe it is because I modified the code in assigned_dev_iomem_map().
> 
> I used RHEL6, where calc_assigned_dev_id is as below:
> 
> static uint32_t calc_assigned_dev_id(uint8_t bus, uint8_t devfn)
> {
>     return (uint32_t)bus << 8 | (uint32_t)devfn;
> }
> 
> but in the patch the call passes three params:
> +                msix_mmio.id = calc_assigned_dev_id(r_dev->h_segnr,
> +                        r_dev->h_busnr, r_dev->h_devfn);

This one should be fine because h_segnr should be 0 here.
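
For reference, the three-parameter version in my patch looks roughly like
this (sketched from memory, not copied from the tree; with h_segnr == 0 it
produces the same ID as your two-parameter RHEL6 version):

static uint32_t calc_assigned_dev_id(uint16_t seg, uint8_t bus, uint8_t devfn)
{
    /* The segment goes into the high 16 bits; with seg == 0 this
     * degenerates to the old (bus << 8 | devfn) encoding. */
    return (uint32_t)seg << 16 | (uint32_t)bus << 8 | (uint32_t)devfn;
}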

But I strongly recommend you use the latest KVM and the latest QEmu; we 
won't know what would happen during the rebase... (my patch may be a little 
old for the latest trees; my kvm base is 
365bb670a44b217870c2ee1065f57bb43b57e166, and my qemu base is 
420fe74769cc67baec6f3d962dc054e2972ca3ae).

Things to check:
1. Whether both devices' MMIO has been registered successfully.
2. Whether you can see the mask bit accesses in the kernel from both 
devices (see the sketch below).
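
For the second point, a temporary printk at the entry of msix_mmio_write() 
in the kernel part would show whether both devices' accesses actually reach 
the in-kernel handler (just a debugging sketch, not part of the patch):

    /* at the top of msix_mmio_write(), before the range check */
    printk(KERN_DEBUG "msix_mmio: addr=0x%llx len=%d base=0x%llx\n",
           (unsigned long long)addr, len,
           (unsigned long long)adev->msix_mmio_base);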

--
regards
Yang, Sheng

> 
> 
> #ifdef KVM_CAP_MSIX_MASK
>             if (cap_mask) {
>                 memset(&msix_mmio, 0, sizeof msix_mmio);
>                 msix_mmio.id = calc_assigned_dev_id(r_dev->h_busnr,
>                                                     r_dev->h_devfn);
>                 msix_mmio.type = KVM_MSIX_TYPE_ASSIGNED_DEV;
>                 msix_mmio.base_addr = e_phys + offset;
>                 msix_mmio.max_entries_nr = r_dev->max_msix_entries_nr;
>                 msix_mmio.flags = KVM_MSIX_MMIO_FLAG_REGISTER;
>                 ret = kvm_update_msix_mmio(kvm_context, &msix_mmio);
>                 if (ret)
>                     fprintf(stderr, "fail to register in-kernel msix_mmio!\n");
>             }
> #endif
> 
> 2010/12/1 Yang, Sheng <sheng.yang@xxxxxxxxx>:
> > On Wednesday 01 December 2010 16:54:16 lidong chen wrote:
> >> yes, I patched qemu as well.
> >> 
> >> And I found that the address of the second VF is not in the MMIO 
> >> range; the first one is fine.
> > 
> > So it looks like something is wrong with the MMIO registration part. 
> > Could you check the registration in assigned_dev_iomem_map() in the 
> > 4th patch for QEmu? I suspect something is wrong with it. I will try 
> > to reproduce it here.
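> > 
> > For example, you could log each registration right after the 
> > kvm_update_msix_mmio() call in assigned_dev_iomem_map() (a debugging 
> > sketch, not part of the patch), to confirm both VFs get a base 
> > registered:
> > 
> >     /* right after ret = kvm_update_msix_mmio(kvm_context, &msix_mmio); */
> >     fprintf(stderr, "msix_mmio register: id=0x%x base=0x%llx ret=%d\n",
> >             msix_mmio.id, (unsigned long long)msix_mmio.base_addr, ret);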
> > 
> > And if you use only one VF, how much is the gain?
> > 
> > --
> > regards
> > Yang, Sheng
> > 
> >> 2010/12/1 Yang, Sheng <sheng.yang@xxxxxxxxx>:
> >> > On Wednesday 01 December 2010 16:41:38 lidong chen wrote:
> >> >> I used SR-IOV, giving each VM 2 VFs.
> >> >> After applying the patch, I found the performance is the same.
> >> >> 
> >> >> The reason is that in the function msix_mmio_write(), the addr is 
> >> >> mostly not in the MMIO range.
> >> > 
> >> > Did you patch qemu as well? You can see that it's impossible for 
> >> > the kernel part to work alone...
> >> > 
> >> > http://www.mail-archive.com/kvm@xxxxxxxxxxxxxxx/msg44368.html
> >> > 
> >> > --
> >> > regards
> >> > Yang, Sheng
> >> > 
> >> >> static int msix_mmio_write(struct kvm_io_device *this, gpa_t addr,
> >> >>                            int len, const void *val)
> >> >> {
> >> >>       struct kvm_assigned_dev_kernel *adev =
> >> >>                       container_of(this, struct kvm_assigned_dev_kernel,
> >> >>                                    msix_mmio_dev);
> >> >>       int idx, r = 0;
> >> >>       unsigned long new_val = *(unsigned long *)val;
> >> >> 
> >> >>       mutex_lock(&adev->kvm->lock);
> >> >>       if (!msix_mmio_in_range(adev, addr, len)) {
> >> >>               // return here.
> >> >>               r = -EOPNOTSUPP;
> >> >>               goto out;
> >> >>       }
> >> >> 
> >> >> I printk'd the values:
> >> >> addr         start        end          len
> >> >> F004C00C     F0044000     F0044030     4
> >> >> 
> >> >> 00:06.0 Ethernet controller: Intel Corporation Unknown device 10ed (rev 01)
> >> >>       Subsystem: Intel Corporation Unknown device 000c
> >> >>       Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
> >> >>       Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
> >> >>       Latency: 0
> >> >>       Region 0: Memory at f0040000 (32-bit, non-prefetchable) [size=16K]
> >> >>       Region 3: Memory at f0044000 (32-bit, non-prefetchable) [size=16K]
> >> >>       Capabilities: [40] MSI-X: Enable+ Mask- TabSize=3
> >> >>               Vector table: BAR=3 offset=00000000
> >> >>               PBA: BAR=3 offset=00002000
> >> >> 
> >> >> 00:07.0 Ethernet controller: Intel Corporation Unknown device 10ed (rev 01)
> >> >>       Subsystem: Intel Corporation Unknown device 000c
> >> >>       Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
> >> >>       Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
> >> >>       Latency: 0
> >> >>       Region 0: Memory at f0048000 (32-bit, non-prefetchable) [size=16K]
> >> >>       Region 3: Memory at f004c000 (32-bit, non-prefetchable) [size=16K]
> >> >>       Capabilities: [40] MSI-X: Enable+ Mask- TabSize=3
> >> >>               Vector table: BAR=3 offset=00000000
> >> >>               PBA: BAR=3 offset=00002000
> >> >> 
> >> >> +static bool msix_mmio_in_range(struct kvm_assigned_dev_kernel *adev,
> >> >> +                           gpa_t addr, int len)
> >> >> +{
> >> >> +     gpa_t start, end;
> >> >> +
> >> >> +     BUG_ON(adev->msix_mmio_base == 0);
> >> >> +     start = adev->msix_mmio_base;
> >> >> +     end = adev->msix_mmio_base + PCI_MSIX_ENTRY_SIZE *
> >> >> +             adev->msix_max_entries_nr;
> >> >> +     if (addr >= start && addr + len <= end)
> >> >> +             return true;
> >> >> +
> >> >> +     return false;
> >> >> +}
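> >> >> 
> >> >> the arithmetic can be double-checked with a small user-space 
> >> >> version of the range test (just a sketch using the printk values 
> >> >> above; PCI_MSIX_ENTRY_SIZE is 16 and TabSize is 3 here, so the 
> >> >> registered range is only 48 bytes, and the first address is just 
> >> >> a sample hit inside the first VF's table):
> >> >> 
> >> >> #include <stdio.h>
> >> >> #include <stdint.h>
> >> >> 
> >> >> #define PCI_MSIX_ENTRY_SIZE 16
> >> >> 
> >> >> /* same check as msix_mmio_in_range(), with hard-coded values */
> >> >> static int in_range(uint64_t base, int entries, uint64_t addr, int len)
> >> >> {
> >> >>       uint64_t start = base;
> >> >>       uint64_t end = base + PCI_MSIX_ENTRY_SIZE * entries;
> >> >> 
> >> >>       return addr >= start && addr + len <= end;
> >> >> }
> >> >> 
> >> >> int main(void)
> >> >> {
> >> >>       /* first VF: BAR3 at f0044000, 3 entries -> [f0044000, f0044030) */
> >> >>       printf("%d\n", in_range(0xF0044000, 3, 0xF0044010, 4)); /* 1 */
> >> >>       /* second VF's write at f004c00c misses that range */
> >> >>       printf("%d\n", in_range(0xF0044000, 3, 0xF004C00C, 4)); /* 0 */
> >> >>       return 0;
> >> >> }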
> >> >> 
> >> >> 2010/11/30 Yang, Sheng <sheng.yang@xxxxxxxxx>:
> >> >> > On Tuesday 30 November 2010 17:10:11 lidong chen wrote:
> >> >> SR-IOV also hits this problem: MSI-X masking wastes a lot of CPU 
> >> >> resources.
> >> >> 
> >> >> I tested KVM with SR-IOV, where the VF driver could not disable 
> >> >> MSI-X, so the host OS wastes a lot of CPU; the CPU usage of the 
> >> >> host OS is 90%.
> >> >> 
> >> >> Then I tested Xen with SR-IOV. There are also a lot of VM exits 
> >> >> caused by MSI-X masking, but the combined CPU usage of Xen and 
> >> >> domain0 is lower than KVM's, at 60%.
> >> >> 
> >> >> Without SR-IOV, the CPU usage of Xen and domain0 is higher than 
> >> >> KVM's.
> >> >> 
> >> >> So I think the problem is that KVM wastes more CPU resources 
> >> >> dealing with MSI-X masking, and we could look at how Xen handles 
> >> >> it.
> >> >> 
> >> >> If this problem is solved, maybe the performance with MSI-X 
> >> >> enabled will be better.
> >> >> > 
> >> >> > Please refer to my posted patches for this issue.
> >> >> > 
> >> >> > http://www.spinics.net/lists/kvm/msg44992.html
> >> >> > 
> >> >> > --
> >> >> > regards
> >> >> > Yang, Sheng
> >> >> > 
> >> >> >> 2010/11/23 Avi Kivity <avi@xxxxxxxxxx>:
> >> >> >> > On 11/23/2010 09:27 AM, lidong chen wrote:
> >> >> >> >> can you tell me something about this problem?
> >> >> >> >> thanks.
> >> >> >> > 
> >> >> >> > Which problem?
> >> >> >> > 
> >> >> >> > --
> >> >> >> > I have a truly marvellous patch that fixes the bug which this
> >> >> >> > signature is too narrow to contain.

