Re: VM Exit and EPT_MISCONFIG

Hi Alex,


This is helpful. Yes, I did mean to assign the HBA, to which the SSD is
attached, to the guest. On my server machine, the Intel SATA controller
has the following resources:

Region 0: I/O ports at 90b0 [size=8]
Region 1: I/O ports at 90a0 [size=4]
Region 2: I/O ports at 9090 [size=8]
Region 3: I/O ports at 9080 [size=4]
Region 4: I/O ports at 9000 [size=32]
Region 5: Memory at dfd04000 (32-bit, non-prefetchable) [size=2K]

As you mentioned, accesses to the I/O ports were trapped through QEMU,
and the 2K MMIO region is smaller than a 4K page. It would be great if
you could point us to the QEMU and kernel code for more insight. Thank
you.
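
In the meantime, to check our understanding, the condition you describe
seems to be roughly the one sketched below. This is purely illustrative,
not the actual QEMU or kernel code, and the struct and helper names are
invented for the example:

/*
 * Illustrative sketch only -- not the real QEMU or kernel logic.  It
 * restates the condition described in this thread: a BAR smaller than
 * a 4K page can only be mapped straight into the guest when the rest
 * of the host page holds no other device and the guest likewise leaves
 * the rest of its page unused.
 */
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

struct bar_info {
    uint64_t size;                /* BAR size, e.g. 2K for the AHCI ABAR    */
    bool host_page_exclusive;     /* no other device in the host 4K page    */
    bool guest_page_exclusive;    /* guest leaves the rest of its page free */
};

static bool can_direct_map(const struct bar_info *bar)
{
    /* Page-sized or larger BARs are naturally page aligned (BARs are
     * size aligned) and can simply be mmap'd through to the guest. */
    if (bar->size >= PAGE_SIZE)
        return true;

    /* Sub-page BAR: direct mapping is only safe when nothing else owns
     * the remainder of the page on either the host or the guest side;
     * otherwise accesses are trapped and emulated. */
    return bar->host_page_exclusive && bar->guest_page_exclusive;
}

If that is the right picture, then whenever the condition fails the
accesses bounce through QEMU's MMIO handlers instead of going straight
to the device, which would explain the heavy exit counts we see under
O_DIRECT I/O.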


Best,

Kevin



On Tue, Mar 20, 2018 at 10:43 AM, Alex Williamson
<alex.williamson@xxxxxxxxxx> wrote:
> On Mon, 19 Mar 2018 22:29:32 -0400
> Tsu-Hsiang K Cheng <tcheng8@xxxxxxxxxxxxxx> wrote:
>
>> Hi there,
>>
>>
>> We were trying to assign the SSD (Crucial M500) to the KVM guest
>> under the VFIO framework with Intel VT-d support. When the
>> guest directly read from or wrote to the SSD (O_DIRECT), there were
>> many VM exits due to EPT_MISCONFIG. After gathering information from
>> the internet and the KVM code, starting from "handle_ept_misconfig",
>> it seemed the exits were related to MMIO operations. As far as we
>> understood PCI device passthrough, the guest should be able to
>> control the PCI device without hypervisor intervention. In our case,
>> the KVM hypervisor trapped and emulated the MMIO operations. We were
>> wondering whether we understood this correctly. What would be the
>> reasons for such traps and emulations? It would be great if there
>
> As I'm sure you know, you can't actually assign an SSD with vfio, you
> can however assign the HBA to which the SSD is attached.  The
> performance of the assigned HBA is going to depend on the resources
> specific to that HBA.  For example on my laptop, I have an Intel SATA
> controller with the following resources:
>
>         Region 0: I/O ports at 30a8 [size=8]
>         Region 1: I/O ports at 30b4 [size=4]
>         Region 2: I/O ports at 30a0 [size=8]
>         Region 3: I/O ports at 30b0 [size=4]
>         Region 4: I/O ports at 3060 [size=32]
>         Region 5: Memory at e123c000 (32-bit, non-prefetchable) [size=2K]
>
> All of these are likely to trap through QEMU.  I/O port resources
> always trap through QEMU and the MMIO resource is only 2K.  We do have
> code in the kernel and QEMU that will attempt to claim an additional 2K
> in the above example, allowing a 4K page to be directly mapped for the
> device, but it depends on the host leaving that additional 2K unmapped
> to any other device and also the guest resource mapping.  So without
> specific details of the HBA being assigned, it's not entirely
> unexpected that resources of a SATA HBA aren't necessarily well aligned
> for optimal device assignment performance.  Thanks,
>
> Alex
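
A note for anyone else hitting the same exits: our current understanding
is that KVM deliberately leaves MMIO guest-physical ranges with
misconfigured EPT entries, so guest accesses to them exit with
EPT_MISCONFIG and get emulated; that is the path handle_ept_misconfig
deals with. A quick way to see which of a device's regions are likely to
take that path is to read its sysfs "resource" file (one "start end
flags" line per region, in hex). The sketch below is a minimal example;
the device address in it is only a placeholder:

#include <stdio.h>

#define IORESOURCE_IO  0x00000100ULL   /* kernel flag: I/O port region */
#define IORESOURCE_MEM 0x00000200ULL   /* kernel flag: MMIO region     */
#define PAGE_SIZE      4096ULL

int main(void)
{
    /* Placeholder address -- substitute the HBA's real domain:bus:dev.fn. */
    const char *path = "/sys/bus/pci/devices/0000:00:1f.2/resource";
    unsigned long long start, end, flags;
    int i = 0;
    FILE *f = fopen(path, "r");

    if (!f) {
        perror(path);
        return 1;
    }

    /* Each line of the sysfs "resource" file is "start end flags" in hex. */
    while (fscanf(f, "%llx %llx %llx", &start, &end, &flags) == 3) {
        unsigned long long size = end ? end - start + 1 : 0;

        if (flags & IORESOURCE_IO)
            printf("region %d: I/O ports, size %llu -- always trapped\n",
                   i, size);
        else if (flags & IORESOURCE_MEM)
            printf("region %d: MMIO, size %llu -- %s\n", i, size,
                   size >= PAGE_SIZE ? "large enough for direct mapping"
                                     : "sub-page, likely trapped/emulated");
        i++;
    }
    fclose(f);
    return 0;
}

Compile with gcc and point it at the HBA's address; any I/O port region
or sub-page MMIO region it reports is a candidate for trap-and-emulate
rather than direct mapping.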


