Re: Speeding up VMX with GDT fixmap trickery?

On 09/06/2017 02:13, Andy Lutomirski wrote:
> Hi all-
>
> As promised when Thomas did his GDT fixmap work, here is a draft patch
> to speed up KVM by extending it.
>
> The downside of this patch is that it makes the fixmap significantly
> larger on 64-bit systems if NR_CPUS is large (it adds 15 more pages
> per CPU).  I don't know if we care at all.  It also bloats the kernel
> image by 4k and wastes 4k of RAM for the entire time the system is
> booted.  We could avoid the latter bit (sort of) by not mapping the
> extra fixmap pages at all and handling the resulting faults somehow.
> That would scare me -- now we have IRET generating #PF when running
> malicious code, and that way lies utter madness.
>
> The upside is that we don't need to do LGDT after a vmexit on VMX.
> LGDT is slooooooooooow.  But no, I haven't benchmarked this yet.
>
> What do you all think?
>
> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/kvm&id=e249a09787d6956b52d8260b2326d8f12f768799
>
> Andrew/Boris/Juergen: what does Xen think about setting a very high
> GDT limit?  Will it let us?  Should I fix it by changing
> load_fixmap_gdt() (i.e. uncommenting the commented bit) or by teaching
> the Xen paravirt code to just ignore the monstrous limit?  Or is it
> not a problem in the first place?

When running PV, any selector under 0xe000 is fair game, and anything
over that is Xen's.

OTOH, the set of software running as a PV guest and also running KVM is
empty.  An HVM guest (which, when nested, is the only viable option for
running KVM) has total control over its GDT.
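
To put a number on the PV boundary above: with selectors 0xe000 and up
reserved by Xen, the largest GDTR limit a PV guest can usefully use is
0xdfff, so the patch's 0xFFFF limit would not fly under PV.  A minimal
sketch of that check (the macro and helper here are illustrative, not
Xen's or Linux's actual API):

#include <stdbool.h>

/* Selectors 0xe000 and up belong to Xen, so a PV guest's GDT may only
 * cover byte offsets 0x0000..0xdfff. */
#define XEN_PV_GDT_LIMIT_MAX	0xdfff

/* Would a requested GDTR.limit stray into Xen-reserved descriptor space? */
static bool pv_gdt_limit_ok(unsigned int limit)
{
	return limit <= XEN_PV_GDT_LIMIT_MAX;
}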

~Andrew
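
For context on the numbers quoted above: a GDTR limit of 0xFFFF spans
64 KiB, i.e. 16 pages, and the GDT itself occupies one page, hence the 15
extra fixmap pages per CPU.  The reload the patch wants to drop looks
roughly like this (an illustrative sketch, not the actual kvm/vmx.c code;
desc_ptr matches the kernel's layout, GDT_SIZE and the helper are assumed
for illustration):

#include <stdint.h>

/* GDTR image as consumed by LGDT (same layout as the kernel's struct
 * desc_ptr). */
struct desc_ptr {
	uint16_t size;		/* limit: last valid byte offset  */
	uint64_t address;	/* linear base address of the GDT */
} __attribute__((packed));

static inline void lgdt(const struct desc_ptr *dtr)
{
	asm volatile("lgdt %0" : : "m" (*dtr));
}

#define GDT_SIZE	(16 * 8)	/* 16 descriptors on 64-bit, for illustration */

/*
 * VM exit restores GDTR.base from HOST_GDTR_BASE but forces GDTR.limit to
 * 0xFFFF, so today the host shrinks the limit back with a (slow) LGDT.
 */
static void reload_host_gdt(uint64_t fixmap_gdt_base)
{
	struct desc_ptr dtr = {
		.size    = GDT_SIZE - 1,
		.address = fixmap_gdt_base,
	};
	lgdt(&dtr);
}

/*
 * With all 16 fixmap pages mapped per CPU, the stale 0xFFFF limit is always
 * backed by valid mappings, so the reload above can simply be skipped.
 */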
