On Sat, Feb 20, 2010 at 09:14:06AM -1000, Zachary Amsden wrote:
Perhaps I am misunderstanding, but I don't see how nested SVM instances
can be properly migrated. How does one extract and rebuild the nested
hsave control block?
Guests which run in nested mode cannot currently be migrated in a safe
way, but there are plans to fix that :-)
The first step is to save the l1 cpu state in the guest-supplied hsave
area. But that alone is not sufficient, because not all l1 state can be
captured that way.
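For illustration only, that first step could look roughly like the sketch
below. The helper name and the exact list of fields copied are made up
here, not the current code:

static void nested_svm_flush_l1_to_hsave(struct vcpu_svm *svm)
{
        struct vmcb *hsave = svm->nested.hsave; /* guest-supplied save area */
        struct vmcb *vmcb  = svm->vmcb;         /* currently active VMCB */

        hsave->save.cr0    = vmcb->save.cr0;
        hsave->save.cr3    = vmcb->save.cr3;
        hsave->save.cr4    = vmcb->save.cr4;
        hsave->save.efer   = vmcb->save.efer;
        hsave->save.rflags = vmcb->save.rflags;
        hsave->save.rip    = vmcb->save.rip;
        hsave->save.rsp    = vmcb->save.rsp;
        hsave->save.rax    = vmcb->save.rax;

        /*
         * As noted above, this is not sufficient on its own: intercept
         * masks and other nested bookkeeping live only in kernel-internal
         * structures and are not covered by the hsave image.
         */
}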
Thanks for the fast response! I am glad to know both that my reading of
the code is correct and that there are plans to fix it. For now, it
gives me freedom to fix a couple of outstanding bugs without worrying
about breaking a feature as complex as nested migration with a
bisectable patch-set.
If it isn't done already, one possible way to add it as an extension
might be to represent the data as additional MSRs which are saved and
restored with migration.
This sounds complicated.
I think it's actually pretty easy.
The infrastructure is already there to import / export and migrate MSR
settings. MSRs are also 64-bit, and hold "model-specific" settings, so
if you don't mind thinking of the nested feature as a model-specific
feature of the KVM-SVM CPU, it's even somewhat well defined in terms of
the architecture.
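As a rough, non-authoritative sketch of how that could be wired into the
existing MSR handlers (the index range, the helper names, and the
simplified handler signature are all made up for illustration; the write
helper is only sketched further below):

/* Placeholder index range; picking a real, unused chunk of MSR space
 * is exactly the open question. */
#define MSR_NESTED_STATE_BASE   0xc001f000
#define MSR_NESTED_STATE_NUM    512             /* one page of qwords */

static bool is_nested_state_msr(u32 msr)
{
        return msr >= MSR_NESTED_STATE_BASE &&
               msr <  MSR_NESTED_STATE_BASE + MSR_NESTED_STATE_NUM;
}

static int svm_msr_write_dispatch(struct vcpu_svm *svm, u32 msr, u64 data)
{
        if (is_nested_state_msr(msr))
                return nested_state_msr_write(svm, msr, data);
        /* ... existing MSR handling continues here ... */
        return 0;
}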
In that case, the simplest approach, mapping a set of MSRs 1:1 onto the
vmcb, could be one possible implementation of a migration solution. I
don't think you would even need very much code; the data could simply be
blasted one qword at a time into the struct. You would need only minimal
error checking, since most checks are done by hardware. The only
security concern, exposing host pages to the guest, is really a point of
correctness: on migration, any physical page frames referenced in the
hardware struct will obviously need to be reallocated anyway.
Mostly the problem is figuring out what chunk of MSR space to use.
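For completeness, a hedged sketch of the 1:1 qword mapping itself; the
names, the page-sized payload, and the minimal bounds check are
assumptions for illustration, not a worked-out design:

static int nested_state_msr_write(struct vcpu_svm *svm, u32 msr, u64 data)
{
        u64 *words = (u64 *)svm->nested.hsave;
        u32 slot   = msr - MSR_NESTED_STATE_BASE;

        if (slot >= MSR_NESTED_STATE_NUM)
                return 1;               /* out of range: inject #GP */

        words[slot] = data;             /* blast one qword into the struct */
        return 0;
}

static int nested_state_msr_read(struct vcpu_svm *svm, u32 msr, u64 *data)
{
        u32 slot = msr - MSR_NESTED_STATE_BASE;

        if (slot >= MSR_NESTED_STATE_NUM)
                return 1;

        *data = ((u64 *)svm->nested.hsave)[slot];
        return 0;
}

Guest-physical references inside the restored struct would, as mentioned
above, have to be re-resolved on the destination host regardless.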
Zach