On 26/09/2013 19:19, Jan Kiszka wrote:
> On 2013-09-26 17:04, Paolo Bonzini wrote:
>> On 16/09/2013 10:11, Arthur Chunqi Li wrote:
>>> This patch contains the following two changes:
>>> 1. Fix the bug in nested preemption timer support.  If a vmexit
>>> L2->L0 occurs for reasons not emulated by L1, the preemption timer
>>> value should be saved on such exits.
>>> 2. Add support for the "Save VMX-preemption timer value" VM-exit
>>> control to nVMX.
>>>
>>> With this patch, nested VMX preemption timer features are fully
>>> supported.
>>>
>>> Signed-off-by: Arthur Chunqi Li <yzt356@xxxxxxxxx>
>>> ---
>>> ChangeLog to v4:
>>> Format changes and remove a flag in nested_vmx.
>>>  arch/x86/include/uapi/asm/msr-index.h |    1 +
>>>  arch/x86/kvm/vmx.c                    |   44 +++++++++++++++++++++++++++++++--
>>>  2 files changed, 43 insertions(+), 2 deletions(-)
>>
>> Hi all,
>>
>> the test fails for me if the preemption timer value is set above
>> ~2000 (which corresponds to ~65000 TSC cycles on this machine).  The
>> preemption timer seems to count faster than expected; for example,
>> a value of one million expires after only ~4 million TSC cycles.
>> So I am leaving the patch out of kvm/queue for now, until I can test
>> it on more processors.
>
> So is this behaviour a regression introduced by this patch (and by
> your own version as well)?

Without this patch, Arthur's preemption timer test doesn't work at all.

If I apply only this hunk, which disables the preemption timer while
in L1:

@@ -8396,6 +8375,8 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu)
 
 	load_vmcs12_host_state(vcpu, vmcs12);
 
+	vmcs_write32(PIN_BASED_VM_EXEC_CONTROL, vmx_pin_based_exec_ctrl(vmx));
+
 	/* Update TSC_OFFSET if TSC was changed while L2 ran */
 	vmcs_write64(TSC_OFFSET, vmx->nested.vmcs01_tsc_offset);

then the testcase works for somewhat larger values of the preemption
timer (up to ~1500000 TSC cycles), but fails beyond that.  If I apply
the full patch below, the save-timer-value control works, but the
timer stops working except for very small values of the preemption
timer.

>> @@ -7294,6 +7272,15 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>>  		  | (1 << VCPU_EXREG_CR3));
>>  	vcpu->arch.regs_dirty = 0;
>>  
>> +	if (is_guest_mode(vcpu)) {
>> +		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
>> +		if ((vmcs12->pin_based_vm_exec_control & PIN_BASED_VMX_PREEMPTION_TIMER) &&
>> +		    (vmcs12->vm_exit_controls & VM_EXIT_SAVE_VMX_PREEMPTION_TIMER)) {
>> +			vmcs12->vmx_preemption_timer_value =
>> +				vmcs_read32(VMX_PREEMPTION_TIMER_VALUE);
>
> Is the preemption timer guaranteed to continue to tick while in L0?  I
> think this is what you are relying on here, right?

No, it is definitely not ticking while in L0.  But I was trying to
simplify Arthur's code as much as possible while fixing the worst bug,
namely that after an L2->L0->L2 round trip the timer restarts counting
down from its initial value.  Support for the save-timer-value control
comes for free and doesn't affect the result of the testcase.

In any case, the timer not ticking in L0 would only make the preemption
timer fire late.  It doesn't explain why the timer expires faster than
the (scaled) TSC.
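For reference, a minimal sketch of the arithmetic behind the numbers
above, assuming the architected rate from the Intel SDM (the preemption
timer counts down at the TSC frequency divided by 2^X, where X is bits
4:0 of MSR_IA32_VMX_MISC).  The helper name is made up, for
illustration only:

#include <linux/types.h>

/*
 * Expected lifetime of a preemption timer value in TSC cycles,
 * assuming the SDM rate of TSC / 2^X.  A value of ~2000 lasting
 * ~65000 TSC cycles suggests X = 5 on the machine above
 * (2000 << 5 = 64000); with X = 5, a timer value of one million
 * should last ~32 million TSC cycles, not the ~4 million observed.
 */
static inline u64 preemption_timer_to_tsc_cycles(u32 timer_value,
						 u64 vmx_misc)
{
	unsigned int rate_shift = vmx_misc & 0x1f; /* IA32_VMX_MISC[4:0] */

	return (u64)timer_value << rate_shift;
}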
Paolo