InfiniBand Passthrough not working

Hello,
 
I want to pass an InfiniBand card through to a guest and am running into problems. My CPU (Intel Xeon E5-2650) is VT-d capable, and VT-d is enabled in the BIOS. The host runs CentOS 6.4 (kernel 2.6.32-358.11.1.el6.x86_64) and boots with the command line "... intel_iommu=on iommu=pt".
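
As a quick sanity check that both options took effect on the running kernel (standard commands, nothing setup-specific assumed):

root@host# cat /proc/cmdline
root@host# dmesg | grep -i "Intel-IOMMU"

The second command should print the "Intel-IOMMU: enabled" line that also shows up in (1) below.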
 
The guest runs the same OS and kernel as the host, and the InfiniBand card driver is the same on both (the official Mellanox OFED driver pack).
 
This is what I've done:
 
0.) Verify that InfiniBand is working on the host:
root@host# ibv_devinfo
hca_id:    mthca0
    transport:            InfiniBand (0)
    fw_ver:                1.0.800
    node_guid:            ...
    sys_image_guid:            ...
    vendor_id:            0x08f1
    vendor_part_id:            25204
    hw_ver:                0xA0
    board_id:            ...
    phys_port_cnt:            1
        port:    1
            state:            PORT_ACTIVE (4)
            max_mtu:        2048 (4)
            active_mtu:        2048 (4)
            sm_lid:            3
            port_lid:        5
            port_lmc:        0x00
            link_layer:        InfiniBand
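
For what it's worth, the port state can also be read straight from sysfs (path built from the mthca0 device name shown above):

root@host# cat /sys/class/infiniband/mthca0/ports/1/state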
 
1.) DMAR and IOMMU messages on the host:
root@host# dmesg  | grep -e DMAR -e IOMMU
ACPI: DMAR 000000007e27ea30 00160 (v01 A M I   OEMDMAR 00000001 INTL 00000001)
Intel-IOMMU: enabled
dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0462 ecap f020fe
dmar: IOMMU 1: reg_base_addr dfffc000 ver 1:0 cap d2078c106f0462 ecap f020fe
IOMMU 0xfbffe000: using Queued invalidation
IOMMU 0xdfffc000: using Queued invalidation
IOMMU: hardware identity mapping for device 0000:00:00.0
[... (a lot of mapping messages) ...]
IOMMU: hardware identity mapping for device 0000:81:00.0 (this is the IB card)
IOMMU: Setting RMRR:
IOMMU: Prepare 0-16MiB unity mapping for LPC
dmar: DMAR:[DMA Read] Request device [81:00.0] fault addr 107294f000
DMAR:[fault reason 06] PTE Read access is not set
dmar: DMAR:[DMA Read] Request device [81:00.0] fault addr 107294f000
DMAR:[fault reason 06] PTE Read access is not set
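
These two DMA read faults at the end are the part I understand least. If they come from the identity/pass-through mapping, one experiment (just a sketch, using grubby as shipped with CentOS) would be to drop iommu=pt, keep intel_iommu=on, and reboot:

root@host# grubby --update-kernel=$(grubby --default-kernel) --remove-args="iommu=pt"
root@host# reboot
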
2.) Output of lspci on the host:
81:00.0 InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx HCA] (rev a0)
        Subsystem: Mellanox Technologies MT25204 [InfiniHost III Lx HCA]
        Flags: fast devsel, IRQ 114
        Memory at f8a00000 (64-bit, non-prefetchable) [size=1M]
        Memory at 3c1e00000000 (64-bit, prefetchable) [size=8M]
        Capabilities: [40] Power Management version 2
        Capabilities: [48] Vital Product Data
        Capabilities: [90] MSI: Enable- Count=1/32 Maskable- 64bit+
        Capabilities: [84] MSI-X: Enable- Count=32 Masked-
        Capabilities: [60] Express Endpoint, MSI 00
        Kernel driver in use: ib_mthca
        Kernel modules: ib_mthca
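
The numeric IDs of the card can be pulled with lspci -n; they would only be needed if the card had to be pinned to pci-stub at boot via the pci-stub.ids= kernel parameter instead of letting libvirt detach it (not done here, just noting the option):

root@host# lspci -n -s 81:00.0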
 
3.) Adding the device with virsh edit:
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </hostdev>
 
The final <address .../> element with slot='0x07' was added automatically by virsh.
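
After editing, the hostdev section can be double-checked with virsh dumpxml (the domain name "guest1" is only a placeholder):

root@host# virsh dumpxml guest1 | grep -A 4 '<hostdev'

Equivalently, the same XML saved to a file could be attached with "virsh attach-device guest1 ib-hostdev.xml --config".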
 
4.) Detaching the device from the host with nodedev-dettach:
root@host# virsh nodedev-dettach pci_0000_81_00_0
Device pci_0000_81_00_0 detached
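
(For reference, this step can be reverted with virsh nodedev-reattach, which rebinds the device to its host driver:)

root@host# virsh nodedev-reattach pci_0000_81_00_0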
 
lspci on the host now shows the same output as in (2), but the "Kernel driver in use" line changed:
        Kernel driver in use: pci-stub
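
The binding can also be confirmed through sysfs; the driver symlink should now point at pci-stub:

root@host# readlink /sys/bus/pci/devices/0000:81:00.0/driver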
 
5.) InfiniBand now stops working on the host (as expected):
root@host# ibv_devinfo
No IB devices found
 
6.) Start the guest, then run lspci -v in it:
00:07.0 InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx HCA] (rev a0)
    Subsystem: Mellanox Technologies MT25204 [InfiniHost III Lx HCA]
    Physical Slot: 7
    Flags: fast devsel, IRQ 10
    Memory at f2100000 (32-bit, non-prefetchable) [size=1M]
    Memory at f2800000 (32-bit, prefetchable) [size=8M]
    Capabilities: [48] Vital Product Data
    Capabilities: [60] Express Endpoint, MSI 00
    Capabilities: [40] Power Management version 2
    Capabilities: [84] MSI-X: Enable- Count=32 Masked-
    Capabilities: [90] MSI: Enable- Count=1/32 Maskable- 64bit-
    Kernel modules: ib_mthca
 
I noticed a difference: the memory regions are 32-bit in the guest, while the host shows them as 64-bit.
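
Another thing visible above: unlike on the host, the guest output has no "Kernel driver in use:" line, so it looks like ib_mthca never bound to the device. Loading it by hand and checking the kernel log might show why (a sketch; the mthca driver logs its probe errors there):

root@guest# modprobe ib_mthca
root@guest# dmesg | grep -i mthca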
 
7.) InfiniBand is not working on the guest:
root@guest# ibv_devinfo
No IB devices found
root@guest# ibhosts
src/query_smp.c:228; can't open UMAD port ((null):0)
/usr/sbin/ibnetdiscover: iberror: failed: discover failed
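
The "can't open UMAD port" error is probably just a consequence of no device being registered, but to rule out a missing module (and assuming the Mellanox OFED pack installed its usual openibd init script):

root@guest# lsmod | grep -e ib_mthca -e ib_umad
root@guest# service openibd status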
 
Do you have any clue which settings I need to tweak to get InfiniBand working in my virtual machine? If this is not a libvirt problem, I apologize :)
 
Regards,
Sebastian
