How do we eliminate nested_run_pending? Do we enforce the invariant
that nested_run_pending is never set on return to userspace, or do we
return an error if GET_NESTED_STATE is called when nested_run_pending
is set?

On Mon, Jan 8, 2018 at 2:35 AM, David Hildenbrand <david@xxxxxxxxxx> wrote:
> On 19.12.2017 22:29, Paolo Bonzini wrote:
>> On 19/12/2017 20:21, Jim Mattson wrote:
>>> One reason is that it is a bit awkward for GET_NESTED_STATE to modify
>>> guest memory. I don't know about qemu, but our userspace agent expects
>>> guest memory to be quiesced by the time it starts going through its
>>> sequence of GET_* ioctls. Sure, we could introduce a pre-migration
>>> ioctl, but is that the best way to handle this? Another reason is that
>>> it is a bit awkward for SET_NESTED_STATE to require guest memory.
>>> Again, I don't know about qemu, but our userspace agent does not
>>> expect any guest memory to be available when it starts going through
>>> its sequence of SET_* ioctls. Sure, we could prefetch the guest page
>>> containing the current VMCS12, but is that better than simply
>>> including the current VMCS12 in the NESTED_STATE payload? Moreover,
>>> these unpredictable (from the guest's point of view) updates to guest
>>> memory leave a bad taste in my mouth (much like SMM).
>>
>> IIRC QEMU has no problem with either, but I think your concerns are
>> valid. The active VMCS is processor state, not memory state. Same for
>> the host save data in SVM.
>>
>> The unstructured "blob" of data is not an issue. If it becomes a
>> problem, we can always document the structure...
>
> Thinking about it, I agree. It might be simpler/cleaner to transfer the
> "loaded" VMCS. But I think we should take care of only transferring
> data that actually is CPU state and not special to our current
> implementation. (e.g. nested_run_pending I would say is special to our
> current implementation, but we can discuss)
>
> So what I would consider VMX state:
> - vmxon
> - vmxon_ptr
> - vmptr
> - cached_vmcs12
> - ... ?
>
>>
>> Paolo
>
>
> --
>
> Thanks,
>
> David / dhildenb
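
For concreteness, a minimal sketch of the kind of self-contained payload
being discussed might look like the following. The struct name, field
types, and flag bits are illustrative assumptions drawn from David's
list above, not an existing kernel interface; the point is only that
everything in it is CPU state (or a cached copy of it), so neither
GET_NESTED_STATE nor SET_NESTED_STATE would need to touch guest memory.

/* Hypothetical userspace-visible VMX nested-state payload (sketch only). */
#include <stdint.h>

#define VMX_NESTED_STATE_VMXON        (1u << 0)  /* guest has executed VMXON */
#define VMX_NESTED_STATE_RUN_PENDING  (1u << 1)  /* only needed if
                                                    nested_run_pending cannot
                                                    be eliminated */

struct vmx_nested_state {
        uint32_t flags;          /* VMX_NESTED_STATE_* bits above */
        uint32_t format;         /* layout/version of the vmcs12 blob below */
        uint64_t vmxon_ptr;      /* guest-physical address of the VMXON region */
        uint64_t current_vmptr;  /* guest-physical address of the current VMCS,
                                    or -1ull if none is loaded */
        uint8_t  vmcs12[4096];   /* cached copy of the current VMCS12, so that
                                    SET_NESTED_STATE needs no guest memory */
};

On the nested_run_pending question at the top, the two options would be
either to guarantee the pending VMLAUNCH/VMRESUME has completed before
returning to userspace, or to have GET_NESTED_STATE fail (say, with
-EBUSY) while it is set; the RUN_PENDING flag bit above would only be
needed if neither option turns out to be workable.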