On Mon, Mar 16, 2015 at 3:49 PM, Bandan Das <bsd@xxxxxxxxxx> wrote:
> jacob jacob <opstkusr@xxxxxxxxx> writes:
>
>> On Mon, Mar 16, 2015 at 2:12 PM, Bandan Das <bsd@xxxxxxxxxx> wrote:
>>> jacob jacob <opstkusr@xxxxxxxxx> writes:
>>>
>>>> I also see the following in dmesg in the VM.
>>>>
>>>> [    0.095758] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
>>>> [    0.096006] acpi PNP0A03:00: ACPI _OSC support notification failed,
>>>> disabling PCIe ASPM
>>>> [    0.096915] acpi PNP0A03:00: Unable to request _OSC control (_OSC
>>>> support mask: 0x08)
>>> IIRC, for _OSC control, after the BIOS is done with whatever initialization
>>> it needs to do, it clears a bit so that the OS can take over. The message
>>> you are getting is usually a sign of a bug in the BIOS. But I don't
>>> know if this is related to your problem. Does "dmesg | grep -e DMAR -e IOMMU"
>>> give anything useful ?
>>
>> Do not see anything useful in the output..
>
> Ok, thanks. Can you please post the output as well ?

dmesg | grep -e DMAR -e IOMMU
[    0.000000] ACPI: DMAR 0x00000000BDF8B818 000160 (v01 INTEL  S2600GL 06222004 INTL 20090903)
[    0.000000] Intel-IOMMU: enabled
[    0.168227] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020df
[    0.169529] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[    0.171409] IOAPIC id 2 under DRHD base 0xfbffe000 IOMMU 0
[    0.171865] IOAPIC id 0 under DRHD base 0xebffc000 IOMMU 1
[    0.172319] IOAPIC id 1 under DRHD base 0xebffc000 IOMMU 1
[    3.433119] IOMMU 0 0xfbffe000: using Queued invalidation
[    3.433611] IOMMU 1 0xebffc000: using Queued invalidation
[    3.434170] IOMMU: hardware identity mapping for device 0000:00:00.0
[    3.434664] IOMMU: hardware identity mapping for device 0000:00:01.0
[    3.435175] IOMMU: hardware identity mapping for device 0000:00:01.1
.
.
[    3.500268] IOMMU: Setting RMRR:
[    3.502559] IOMMU: Prepare 0-16MiB unity mapping for LPC

>>>> [    0.097072] acpi PNP0A03:00: fail to add MMCONFIG information,
>>>> can't access extended PCI configuration space under this bridge.
>>>>
>>>> Does this indicate any issue related to PCI passthrough?
>>>>
>>>> Would really appreciate any input on how to debug this further.
>>>
>>> Did you get a chance to try a newer kernel ?
>> Currently am using 3.18.7-200.fc21.x86_64, which is pretty recent.
>> Are you suggesting trying the newer kernel just on the host? (or VM too?)
> Both, preferably 3.19. But it's just a wild guess. I saw i40e-related fixes,
> particularly "i40e: fix un-necessary Tx hangs" in 3.19-rc5. This is not exactly
> what you are seeing, but I was still wondering if it could help.
>
> Meanwhile, I am trying to get hold of a card myself to try and reproduce
> it at my end.

Thx. Please let me know if there is anything else that I could try out.
Since the NIC works just fine on the host, doesn't it rule out any i40e
driver related issue?

>
> Thanks,
> Bandan
>
>>>> On Fri, Mar 13, 2015 at 10:08 AM, jacob jacob <opstkusr@xxxxxxxxx> wrote:
>>>>>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>>>>>> driver. Just to rule out the possibility that there might be some driver fixes that
>>>>>> could help with this, it might be a good idea to try a 3.19 or later upstream
>>>>>> kernel.
>>>>>>
>>>>>
>>>>> I tried with the latest DPDK release too (dpdk-1.8.0) and see the same issue.
>>>>> As mentioned earlier, I do not see any issues at all when running
>>>>> tests using either i40e or dpdk on the host itself.
>>>>> This is the reason why I suspect it is something to do with KVM/libvirt.
>>>>> I see issues both with regular PCI passthrough and VF passthrough. It
>>>>> always points to some issue with packet transmission. Receive
>>>>> seems to work ok.
>>>>>
>>>>>
>>>>> On Thu, Mar 12, 2015 at 8:02 PM, Bandan Das <bsd@xxxxxxxxxx> wrote:
>>>>>> jacob jacob <opstkusr@xxxxxxxxx> writes:
>>>>>>
>>>>>>> On Thu, Mar 12, 2015 at 3:07 PM, Bandan Das <bsd@xxxxxxxxxx> wrote:
>>>>>>>> jacob jacob <opstkusr@xxxxxxxxx> writes:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> Seeing failures when trying to do PCI passthrough of Intel XL710 40G
>>>>>>>>> interface to KVM vm.
>>>>>>>>> 0a:00.1 Ethernet controller: Intel Corporation Ethernet
>>>>>>>>> Controller XL710 for 40GbE QSFP+ (rev 01)
>>>>>>>>
>>>>>>>> You are assigning the PF, right ? Does assigning VFs work, or is it
>>>>>>>> the same behavior ?
>>>>>>>
>>>>>>> Yes. Assigning VFs worked ok. But this had other issues while bringing down VMs.
>>>>>>> Interested in finding out if PCI passthrough of the 40G Intel XL710
>>>>>>> interface is qualified in some specific kernel/kvm release.
>>>>>>
>>>>>> So, it could be the i40e driver then ? Because IIUC, VFs use a separate
>>>>>> driver. Just to rule out the possibility that there might be some driver fixes that
>>>>>> could help with this, it might be a good idea to try a 3.19 or later upstream
>>>>>> kernel.
>>>>>>
>>>>>>>>> From dmesg on host:
>>>>>>>>>
>>>>>>>>>> [80326.559674] kvm: zapping shadow pages for mmio generation wraparound
>>>>>>>>>> [80327.271191] kvm [175994]: vcpu0 unhandled rdmsr: 0x1c9
>>>>>>>>>> [80327.271689] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a6
>>>>>>>>>> [80327.272201] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a7
>>>>>>>>>> [80327.272681] kvm [175994]: vcpu0 unhandled rdmsr: 0x3f6
>>>>>>>>>> [80327.376186] kvm [175994]: vcpu0 unhandled rdmsr: 0x606
>>>>>>>>
>>>>>>>> These are harmless and are related to unimplemented PMU msrs,
>>>>>>>> not VFIO.
>>>>>>>>
>>>>>>>> Bandan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
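
For anyone following the thread: the host-side state Bandan asks about (which driver owns the XL710, and which IOMMU group it sits in) can be checked with a short sketch like the one below. The helper name `check_dev` is ours, and `0000:0a:00.1` is simply the function from the thread; substitute the address your own `lspci -D` reports.

```shell
#!/bin/sh
# check_dev: print the driver bound to a PCI function and the members of
# its IOMMU group. Every device in the same group must be assigned to the
# guest together, so unexpected group members are a common passthrough snag.
check_dev() {
    dev=$1
    sysdev=/sys/bus/pci/devices/$dev
    if [ ! -d "$sysdev" ]; then
        echo "no PCI device $dev on this host"
        return 0
    fi
    if [ -e "$sysdev/driver" ]; then
        # The driver symlink points at e.g. i40e, vfio-pci, or pci-stub.
        echo "driver: $(basename "$(readlink -f "$sysdev/driver")")"
    else
        echo "driver: none bound"
    fi
    echo "iommu group members:"
    ls "$sysdev/iommu_group/devices" 2>/dev/null || true
}

# 0000:0a:00.1 is the XL710 function from this thread; pass your own address.
check_dev "${1:-0000:0a:00.1}"
```

If the group listing fails entirely, the IOMMU is likely disabled (check `intel_iommu=on` on the host kernel command line, which matches the DMAR output posted above).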
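
The VF path discussed above can be exercised through the standard `sriov_numvfs` sysfs attribute. A guarded sketch, again with a hypothetical helper name and the thread's PF address as a placeholder:

```shell
#!/bin/sh
# create_vfs: enable a number of virtual functions on an SR-IOV capable PF
# via the sriov_numvfs sysfs knob, then list the VFs that appeared.
create_vfs() {
    pf=$1
    want=$2
    knob=/sys/bus/pci/devices/$pf/sriov_numvfs
    if [ ! -e "$knob" ]; then
        echo "no SR-IOV knob for $pf (device absent or PF driver not loaded)"
        return 0
    fi
    # The count must be reset to 0 before writing a new nonzero value.
    echo 0 > "$knob"
    echo "$want" > "$knob"
    echo "VFs now on the bus:"
    ls -d /sys/bus/pci/devices/"$pf"/virtfn* 2>/dev/null || true
}

# 0000:0a:00.1 is the XL710 PF from this thread; pass your own address.
create_vfs "${1:-0000:0a:00.1}" "${2:-2}"
```

Each `virtfn*` link resolves to a VF's own PCI address, which can then be bound to vfio-pci (or pci-stub on these older kernels) and assigned to the guest; the VFs use the separate i40evf driver, as noted in the thread.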