Hi,

I have been trying to get PCI passthrough working on my Asus Crosshair IV Formula motherboard. The motherboard supports IOMMU (AMD-Vi), and both IOMMU and SVM are enabled in the BIOS. PCI passthrough works fine with the built-in network device (Marvell 8059 Yukon), but I can't get it working with two PCI RTL-8169 network cards. Both devices are bound to pci-stub. Every time I attempt the assignment I get a message that the device is busy, as shown below:

    PCI region 1 at address 0xf9dff800 has size 0x100, which is not a multiple of 4K.
    You might experience some performance hit due to that.
    Failed to assign device "(null)" : Device or resource busy
    *** The driver 'pci-stub' is occupying your device 0000:01:05.0.

The kernel logs show the following:

    Dec 14 10:19:55 phalsenet kernel: [ 1718.806644] pci-stub 0000:01:05.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20
    Dec 14 10:19:55 phalsenet kernel: [ 1718.836741] pci-stub 0000:01:05.0: restoring config space at offset 0x1 (was 0x2b00400, writing 0x2b00103)
    Dec 14 10:19:55 phalsenet kernel: [ 1718.903118] assign device 0:1:5.0 failed
    Dec 14 10:19:55 phalsenet kernel: [ 1718.903161] pci-stub 0000:01:05.0: PCI INT A disabled

The box is running Gentoo Linux. I have tested with the 2.6.34 and 2.6.36 kernels, using the kvm and kvm-amd modules that shipped with those kernels as well as the latest kvm-kmod sources (2.6.36.1). The qemu-kvm version is 0.13.0.

Here is the lspci -v output for one of the network devices:

    01:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8169 Gigabit Ethernet (rev 10)
            Subsystem: Realtek Semiconductor Co., Ltd. RTL-8169 Gigabit Ethernet
            Flags: 66MHz, medium devsel, IRQ 20
            I/O ports at b400 [size=256]
            Memory at f9dff800 (32-bit, non-prefetchable) [size=256]
            Expansion ROM at f9da0000 [disabled] [size=128K]
            Capabilities: [dc] Power Management version 2
            Kernel driver in use: pci-stub

Any help would be appreciated.

Regards,
Andrew
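
P.S. A few extra details in case they help. That AMD-Vi actually comes up at boot can be double-checked from the kernel log; the exact message wording varies between kernel versions, but something along these lines works:

    dmesg | grep -i -e AMD-Vi -e IOMMU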
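
The cards were bound to pci-stub roughly as follows (a sketch, not the exact commands; 10ec:8169 is the vendor:device ID that lspci -n reports for these cards, and r8169 is the driver that normally claims them; all of this is run as root):

    # make pci-stub claim the RTL-8169 vendor:device ID
    echo "10ec 8169" > /sys/bus/pci/drivers/pci-stub/new_id
    # detach the card from r8169 and hand it to pci-stub
    echo 0000:01:05.0 > /sys/bus/pci/drivers/r8169/unbind
    echo 0000:01:05.0 > /sys/bus/pci/drivers/pci-stub/bind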
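
The guest is then started with device assignment along these lines (again a sketch; the binary name, memory size, and disk image are placeholders, with the rest of the command line trimmed):

    qemu-kvm -m 1024 -device pci-assign,host=01:05.0 disk.img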