Gleb Natapov <gleb@xxxxxxxxxx> wrote on 11/04/2013 09:54:11 AM:

> On Wed, Apr 10, 2013 at 10:15:37PM +0300, Abel Gordon wrote:
> >
> > Gleb Natapov <gleb@xxxxxxxxxx> wrote on 09/04/2013 04:14:35 PM:
> > > I think the patch already misses some fields. What if nested_vmx_run()
> > > fails and calls nested_vmx_entry_failure()? nested_vmx_entry_failure()
> > > sets vmcs12->vm_exit_reason and vmcs12->exit_qualification, but where do
> > > we copy them back to shadow before going back to L1?
> >
> > Good catch! :)
> >
> > Note that the entry path is easy to handle because we copy the fields
> > as part of nested_vmx_entry. This is unlike the exit path, where
> > KVM (L0) code can modify fields after nested_vmx_vmexit is called.
> >
> > So here, we could simply call copy_vmcs12_to_shadow if the entry fails
> > (as part of nested_vmx_entry_failure or nested_vmx). We could optimize
> > the code by updating these specific fields directly, but I don't think
> > we really need to optimize code that is part of the error path.
> >
> We needn't.
>
> > > Maybe we need to introduce vmcs12 accessors to track what has changed
> > > and whether something needs to be copied to shadow before going back to L1.
> >
> > That means we will need to modify every line of code that uses
> > "vmcs12->" to go through an inline nested_vmcs_read or nested_vmcs_write
> > function. Inside these inline functions we could access the shadow vmcs
> > directly. However, to access the shadow vmcs we need to vmptrld first,
> > and this will force unnecessary vmptrlds (between shadow vmcs12 and the
> > current vmcs01) each time the code accesses a vmcs12 field.
> > Alternatively, if we want to avoid unnecessary vmptrlds each time we
> > access vmcs12, we could simply set a flag that indicates when a shadow
> > field was changed. In this case, we will need to find all the places to
> > check the flag and copy the fields, considering both success and error
> > paths.
>
> That's not how I see it.
> nested_vmcs_write() will set a request bit in the vcpu (this is the flag
> you mention above). The bit will be checked during guest entry and
> vmcs12 will be synced to shadow at that point. Later we can track which
> fields were written and sync only them.
>
> > Finally, I am afraid that these directions will introduce new issues,
> > will force us to modify too many lines, and may create a merge/rebase
> > mess...
>
> Conflicts should be trivial and I do not expect many iterations for the
> patch series. I like the approach you took of using vmcs shadowing, and
> most of my comments are nitpicks. I promise to review the next version
> as soon as it is posted :)
>
> > Maybe we should simply fix nested_vmx_entry_failure (as well as the
> > other fixes you suggested in other patches) and apply the code.
> > Do you agree?
>
> The approach is error prone. Even if we fix all the bugs in the current
> code, each new nested vmx modification will have to be reviewed with
> shadowing in mind, and sometimes it is hard to see the global picture
> just from a patch. It is better to hide the shadow details from the
> occasional nested vmx hacker.

But how? I suggested two alternatives with their pros/cons. Do you have
any other alternative in mind? Three more alternatives:

(3) Set vmcs12 to NULL once the shadow is synced (so inappropriate
    accesses due to buggy code will dereference a NULL pointer and
    crash kvm).
(4) Like (1), but using the accessors and WARN/PANIC.
(5) Change nested_vmx_vmexit so it can update additional fields passed
    by the caller. Additional vmcs12 fields should NOT be updated
    directly (as is done today); they should be updated only when
    calling nested_vmx_vmexit.

> Are you expecting EPT patches to make it right, for instance?

Yep, I actually added shadow vmcs also to a KVM version that includes
the nEPT patches Nadav Har'El shared. It required adding one call to
copy_vmcs12_to_shadow after a nested-exit-related flow which updates
vmcs12 fields.
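For illustration, the request-bit idea could look roughly like the
user-space model below. It is only a sketch of the pattern under
discussion, not the real KVM code: struct names, field choices, and the
vcpu_enter_guest() stand-in are all hypothetical; only the names
nested_vmcs_write and copy_vmcs12_to_shadow come from the thread. Every
vmcs12 write goes through an accessor that raises a dirty flag, and the
flag is checked once on guest entry to decide whether the shadow must be
refreshed.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for the real vmcs12 structure. */
struct vmcs12_model {
	uint32_t vm_exit_reason;
	uint64_t exit_qualification;
};

struct vcpu_model {
	struct vmcs12_model vmcs12;
	struct vmcs12_model shadow;	/* stands in for the shadow VMCS   */
	bool sync_shadow_vmcs;		/* the "request bit" on the vcpu   */
};

/* Write accessor: the only way code should modify vmcs12 fields. */
static void nested_vmcs_write32(struct vcpu_model *vcpu,
				uint32_t *field, uint32_t value)
{
	*field = value;
	vcpu->sync_shadow_vmcs = true;	/* remember we owe a shadow sync */
}

/* Models copy_vmcs12_to_shadow(): refresh the shadow from vmcs12. */
static void copy_vmcs12_to_shadow_model(struct vcpu_model *vcpu)
{
	vcpu->shadow = vcpu->vmcs12;
}

/* Checked once per guest entry; syncs only when something changed,
 * so error paths like nested_vmx_entry_failure() need no special code
 * as long as they write fields through the accessor. */
static void vcpu_enter_guest_model(struct vcpu_model *vcpu)
{
	if (vcpu->sync_shadow_vmcs) {
		copy_vmcs12_to_shadow_model(vcpu);
		vcpu->sync_shadow_vmcs = false;
	}
}
```

The point of the sketch is that no caller ever decides when to sync:
writing through the accessor and checking the bit at a single choke
point on entry covers both the success and the error paths.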