On Mon, 19 Mar 2018 22:29:32 -0400
Tsu-Hsiang K Cheng <tcheng8@xxxxxxxxxxxxxx> wrote:

> Hi there,
>
> We were trying to assign the SSD (Crucial M500) to a KVM guest
> under the VFIO framework and with Intel VT-d support. When the
> guest directly read from or wrote to the SSD (O_DIRECT), there were
> many VM exits due to EPT_MISCONFIG. After gathering information from
> the internet and the KVM code, starting from "handle_ept_misconfig",
> it seemed the exits were related to MMIO operations. As far as we
> understood PCI device passthrough, the guest should be able to
> control the PCI device without hypervisor intervention. In our case,
> the KVM hypervisor trapped and emulated the MMIO operations. We were
> wondering if we understood this correctly? What would be the reasons
> for such traps and emulations? It would be great if there

As I'm sure you know, you can't actually assign an SSD with vfio; you
can, however, assign the HBA to which the SSD is attached. The
performance of the assigned HBA is going to depend on the resources
specific to that HBA. For example, on my laptop I have an Intel SATA
controller with the following resources:

	Region 0: I/O ports at 30a8 [size=8]
	Region 1: I/O ports at 30b4 [size=4]
	Region 2: I/O ports at 30a0 [size=8]
	Region 3: I/O ports at 30b0 [size=4]
	Region 4: I/O ports at 3060 [size=32]
	Region 5: Memory at e123c000 (32-bit, non-prefetchable) [size=2K]

All of these are likely to trap through QEMU: I/O port resources
always trap through QEMU, and the MMIO resource is only 2K. We do have
code in the kernel and QEMU that will attempt to claim an additional
2K in the above example, allowing a 4K page to be directly mapped for
the device, but that depends on the host leaving the additional 2K
unused by any other device, and also on the guest resource mapping.
So, without specific details of the HBA being assigned, it's not
entirely unexpected that the resources of a SATA HBA aren't
necessarily well aligned for optimal device assignment performance.
Thanks,

Alex
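
For reference, one way to see whether an assigned device's MMIO BAR is
large enough and aligned well enough to be mapped directly (rather than
trapped and emulated through QEMU) is to read the device's sysfs
"resource" file, which lists start, end, and flags for each region.
Below is a minimal sketch of that check; the device address
0000:00:17.0 is a placeholder, not a detail from this thread, and the
page-size/alignment test is only a rough proxy for what vfio and QEMU
actually decide.

#!/usr/bin/env python3
# Sketch: inspect a PCI device's regions via sysfs and report whether each
# MMIO region is at least one host page in size and page-aligned, which is
# roughly the condition for direct mapping instead of trapped access.
import os

BDF = "0000:00:17.0"          # hypothetical device address; substitute your HBA
PAGE = os.sysconf("SC_PAGE_SIZE")

IORESOURCE_IO = 0x00000100    # flag bits as defined in include/linux/ioport.h
IORESOURCE_MEM = 0x00000200

with open(f"/sys/bus/pci/devices/{BDF}/resource") as f:
    for idx, line in enumerate(f):
        start, end, flags = (int(x, 16) for x in line.split())
        if start == 0 and end == 0:
            continue                       # unimplemented region
        size = end - start + 1
        if flags & IORESOURCE_IO:
            kind = "I/O port (always trapped through QEMU)"
        elif flags & IORESOURCE_MEM:
            direct = size >= PAGE and start % PAGE == 0
            kind = ("MMIO, page-sized and aligned (direct map possible)"
                    if direct else "MMIO, sub-page (likely trapped)")
        else:
            kind = "other"
        print(f"region {idx}: start {start:#010x} size {size:#x} -> {kind}")

For a device like the SATA controller above, with a 2K MMIO BAR, this
kind of check would flag the region as sub-page, which matches the
behavior described: accesses fall back to trap-and-emulate unless the
kernel/QEMU can claim the rest of the page.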