Re: List of inaccessible x86 states

On 20.10.2009, at 21:09, Gleb Natapov wrote:

On Tue, Oct 20, 2009 at 08:59:48PM +0200, Alexander Graf wrote:

On 20.10.2009, at 20:55, Gleb Natapov wrote:

On Tue, Oct 20, 2009 at 03:51:02PM +0200, Alexander Graf wrote:

On 20.10.2009, at 15:48, Gleb Natapov wrote:

On Tue, Oct 20, 2009 at 03:41:57PM +0200, Alexander Graf wrote:

On 20.10.2009, at 15:37, Jan Kiszka wrote:

Alexander Graf wrote:
On 20.10.2009, at 15:01, Jan Kiszka wrote:

Hi all,

as the list of yet user-inaccessible x86 states is a bit volatile ATM,
this is an attempt to collect the precise requirements for additional
state fields. Once everyone feels the list is complete, we can decide
how to partition it into one or more substates for the new
KVM_GET/SET_VCPU_STATE interface.

What I read so far (or tried to patch already):

- nmi_masked
- nmi_pending
- nmi_injected
- kvm_queued_exception (whole struct content)
- KVM_REQ_TRIPLE_FAULT (from vcpu.requests)

Unclear points (for me) from the last discussion:

- sipi_vector
- MCE (covered via kvm_queued_exception, or does it require more?)

Please extend or correct the list as required.
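
For illustration, a rough sketch of how the NMI/exception fields above
could be packed into one substate blob for the proposed interface; all
names and the layout here are invented, not from any kernel header:

    #include <linux/types.h>

    /* Hypothetical substate for KVM_GET/SET_VCPU_STATE; the fields
     * mirror the kernel-internal state listed above. */
    struct kvm_x86_event_state {
            __u8  nmi_masked;           /* NMIs blocked, e.g. inside an NMI handler */
            __u8  nmi_pending;          /* NMI queued but not yet injected */
            __u8  nmi_injected;         /* NMI injection currently in flight */
            __u8  triple_fault_pending; /* mirrors KVM_REQ_TRIPLE_FAULT */
            __u8  exception_pending;    /* kvm_queued_exception: entry valid? */
            __u8  exception_nr;         /* vector of the queued exception */
            __u8  exception_has_error_code;
            __u8  sipi_vector;          /* if it belongs in this substate at all */
            __u32 exception_error_code;
    };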

hflags. Qemu supports GIF, kvm supports GIF, but neither side knows
how to sync it.
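
To make the gap concrete: KVM keeps GIF as a bit in the internal
vcpu->arch.hflags (HF_GIF_MASK), QEMU keeps it as HF2_GIF_MASK in
env->hflags2, and no interface carries the bit between them. A minimal
sketch of the userspace half of such a sync, assuming a hypothetical
substate and helper:

    /* Sketch only: kvm_get_hflags_substate() and struct
     * kvm_x86_hflags_state are invented for illustration;
     * env->hflags2/HF2_GIF_MASK is where QEMU already tracks GIF. */
    static int kvm_sync_gif_from_kernel(CPUState *env)
    {
            struct kvm_x86_hflags_state hf; /* hypothetical: carries GIF */
            int ret = kvm_get_hflags_substate(env, &hf);

            if (ret < 0)
                    return ret;
            if (hf.gif)
                    env->hflags2 |= HF2_GIF_MASK;
            else
                    env->hflags2 &= ~HF2_GIF_MASK;
            return 0;
    }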

BTW, GIF is related to SVM nesting, right?

Yes and no. It's an architecture addition that came with SVM, yes.

The problem is that I don't want to support migrating while in a
nested guest.

Why not?

Because then we'd have to transfer the whole host CPU state cache and
the merged intercept bitmaps to userspace as well. That's just too
many internals to expose IMHO.
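
Concretely, a "migrate while in L2" substate would need to carry
something like the sketch below; the structure is invented for
illustration, as the kernel keeps all of this in its private
nested-SVM state:

    /* Hypothetical blob for nested-SVM state; field names and sizes
     * are guesses, not a real ABI. */
    struct kvm_nested_svm_state {
            __u8  nested_run;             /* currently executing the L2 guest? */
            __u8  pad[7];
            __u64 vmcb_gpa;               /* L1's VMCB for the L2 guest */
            __u8  host_state_cache[4096]; /* saved L1 ("host") state to return to */
            __u64 intercept;              /* merged L0+L1 intercept bits */
            __u32 intercept_cr;           /* merged CR read/write intercepts */
            __u32 intercept_exceptions;   /* merged exception intercepts */
    };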

But the amount of information is constant no matter how many L2 guests
there are, correct? We can expose it as a separate substate.

Or we can just not migrate while in a nested guest :-), which will
make everything a lot easier.

Suppose we have an L2 guest that handles interrupts/NMIs by itself;
how can we force it to exit?

If the nested hypervisor doesn't intercept INTR, we don't support it
anyway.
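
In other words, support hinges on L1 intercepting physical interrupts,
so that an INTR always forces a #VMEXIT back out of L2 and L0 regains
control. A sketch of the check this implies (INTERCEPT_INTR is the
real SVM intercept bit; the field access follows the nested-SVM state
layout of the time, simplified):

    /* Does the L1 hypervisor intercept physical interrupts? If not,
     * there is no way to force the L2 guest to exit, so the case is
     * unsupported. */
    static bool l1_intercepts_intr(struct vcpu_svm *svm)
    {
            return svm->nested.intercept & (1ULL << INTERCEPT_INTR);
    }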

I don't think requiring a certain CPU state before migration is the
right thing to do. What if the user paused a VM and then decided to
migrate?

So pausing has to take it out of the nested guest context too?

Then we're not in the nested guest context, right? :)

Or the VM was paused automatically because of a shortage of disk
space, and management wants to migrate the VM to another host with a
bigger disk?

Same as before.


Really, pushing the whole nesting state over is not a good idea.

Alex