Re: [PATCH 10/11] KVM: nVMX: Synchronize VMCS12 content with the shadow vmcs

On Wed, Apr 10, 2013 at 10:15:37PM +0300, Abel Gordon wrote:
> 
> 
> Gleb Natapov <gleb@xxxxxxxxxx> wrote on 09/04/2013 04:14:35 PM:
> 
> > On Mon, Mar 11, 2013 at 09:54:47AM +0200, Abel Gordon wrote:
> > > "Nadav Har'El" <nyh@xxxxxxxxxxxxxxxxxxx> wrote on 11/03/2013 12:43:35
> AM:
> > >
> > >
> > > > On Sun, Mar 10, 2013, Abel Gordon wrote about "[PATCH 10/11] KVM:
> > > > nVMX: Synchronize VMCS12 content with the shadow vmcs":
> > > > >     nested_vmx_vmexit(vcpu);
> > > > > +   if (enable_shadow_vmcs)
> > > > > +      copy_vmcs12_to_shadow(to_vmx(vcpu));
> > > >
> > > > I was curious why your patch adds this call to copy_vmcs12_to_shadow
> > > > after every nested_vmx_vmexit (3 times), instead of making this call
> > > > inside nested_vmx_vmexit(), say right after prepare_vmcs12(). Until I saw:
> > >
> 
> > > Because nested code sometimes modifies vmcs fields "after"
> > > nested_vmx_vmexit (see below). I was afraid the nested logic
> > > may change in the future and some field may become out-of-sync.
> > >
> > > If we do have to call copy_vmcs12_to_shadow explicitly, then it will be
> > > more difficult to miss some field.
> > >
> 
> 
> > I think the patch already misses some fields. What if nested_vmx_run()
> > fails and calls nested_vmx_entry_failure()? nested_vmx_entry_failure()
> > sets vmcs12->vm_exit_reason and vmcs12->exit_qualification, but where do
> > we copy them back to the shadow before going back to L1?
> 
> Good catch! :)
> 
> Note that the entry path is easy to handle because we copy the fields
> as part of nested_vmx_entry. This is unlike the exit path, where
> KVM (L0) code can modify fields after nested_vmx_vmexit is called.
> 
> So here, we could simply call copy_vmcs12_to_shadow if the entry fails
> (as part of nested_vmx_entry_failure or nested_vmx). We could optimize
> the code by updating these specific fields directly, but I don't think
> we really need to optimize code that is part of the error path.
> 
We needn't.
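
For reference, the fix on the entry-failure path is small. A rough, untested
sketch, assuming the existing body of nested_vmx_entry_failure() and reusing
enable_shadow_vmcs/copy_vmcs12_to_shadow() from this series (the
copy_vmcs12_to_shadow() call is the only addition):

static void nested_vmx_entry_failure(struct kvm_vcpu *vcpu,
			struct vmcs12 *vmcs12,
			u32 reason, unsigned long qualification)
{
	load_vmcs12_host_state(vcpu, vmcs12);
	vmcs12->vm_exit_reason = reason | VMX_EXIT_REASONS_FAILED_VMENTRY;
	vmcs12->exit_qualification = qualification;
	/* the two fields above were written after the last sync, so push
	 * them into the shadow vmcs before we return to L1 */
	if (enable_shadow_vmcs)
		copy_vmcs12_to_shadow(to_vmx(vcpu));
	nested_vmx_succeed(vcpu);
}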

> > Maybe we need to introduce vmcs12 accessors to track what changes and
> > whether something needs to be copied to the shadow before going back to L1.
> 
> That means we will need to replace every line of code that uses
> "vmcs12->" with an inline nested_vmcs_read or nested_vmcs_write function.
> Inside these inline functions we could access the shadow vmcs directly.
> However, to access the shadow vmcs we need to vmptrld it first, and this
> will force unnecessary vmptrlds (switching between the shadow vmcs12 and
> the current vmcs01) each time the code accesses a vmcs12 field.
> Alternatively, if we want to avoid unnecessary vmptrlds each time we
> access vmcs12, we could simply set a flag that indicates when a shadow
> field was changed. In that case, we will need to find all the places
> where we have to check the flag and copy the fields, considering both
> success and error paths.
That's not how I see it. nested_vmcs_write() will set a request bit in the
vcpu (this is the flag you mention above). The bit will be checked during
guest entry and vmcs12 will be synced to the shadow at that point. Later
we can track which fields were written and sync only them.
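
Roughly like this (a sketch only; the request bit KVM_REQ_SYNC_SHADOW_VMCS,
the accessor name and the hook placement are invented here to illustrate the
idea and are not part of the posted series):

/* placeholder request bit; a real patch would allocate it in
 * include/linux/kvm_host.h next to the other KVM_REQ_* bits */
#define KVM_REQ_SYNC_SHADOW_VMCS	24

/* write accessor: a plain write to the in-memory vmcs12, no vmptrld;
 * it only records that the shadow vmcs is now stale */
static inline void nested_vmcs_set_exit_qualification(struct kvm_vcpu *vcpu,
						      unsigned long qual)
{
	get_vmcs12(vcpu)->exit_qualification = qual;
	kvm_make_request(KVM_REQ_SYNC_SHADOW_VMCS, vcpu);
}

/* called once on the way into the guest; consumes the request and
 * copies vmcs12 into the shadow vmcs only if something was written */
static void nested_sync_shadow_vmcs(struct kvm_vcpu *vcpu)
{
	if (kvm_check_request(KVM_REQ_SYNC_SHADOW_VMCS, vcpu) &&
	    enable_shadow_vmcs)
		copy_vmcs12_to_shadow(to_vmx(vcpu));
}

The same pattern would apply to every vmcs12 field that nested code writes
outside prepare_vmcs12(), and later the single request bit could be refined
into a per-field dirty bitmap so we sync only what actually changed.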

> Finally, I am afraid that these directions will introduce new issues,
> force us to modify too many lines, and may create a merge/rebase mess...
Conflicts should be trivial and I do not expect many iterations for the
patch series. I like the approach you took of using vmcs shadowing, and most
comments are nitpicks.  I promise to review the next version as soon as
it is posted :)

> 
> Maybe we should simply fix nested_vmx_entry_failure (along with the
> other fixes you suggested in the other patches) and apply the code.
> Do you agree?
> 
The approach is error prone. Even if we fix all the bugs in the current
code, each new nested VMX modification will have to be reviewed with shadowing
in mind, and sometimes it is hard to see the global picture just from a
patch. It is better to hide the shadow details from the occasional nested VMX
hacker. Do you expect the EPT patches to get this right, for instance?

--
			Gleb.