On 26/09/2013 19:47, Paolo Bonzini wrote:
>
> If I only apply this hunk, which disables the preemption timer while
> in L1:
>
> @@ -8396,6 +8375,8 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu)
>
>  	load_vmcs12_host_state(vcpu, vmcs12);
>
> +	vmcs_write32(PIN_BASED_VM_EXEC_CONTROL, vmx_pin_based_exec_ctrl(vmx));
> +
>  	/* Update TSC_OFFSET if TSC was changed while L2 ran */
>  	vmcs_write64(TSC_OFFSET, vmx->nested.vmcs01_tsc_offset);
>
> then the testcase works for somewhat larger values of the preemption
> timer (up to ~1500000 TSC cycles), but fails beyond that.

I mean, if I apply it on top of current kvm/next, without Arthur's
patch.  If I apply the hunk on top of Arthur's patch, nothing changes
and the timer testcase starts breaking around ~65000 TSC cycles.

It is a bit problematic that adding printks changes the behavior so
that the test starts passing.  I haven't tried tracepoints yet.

Jan, which L1 hypervisor is actually using the preemption timer?  Any
reason why you added it?  I wonder whether it wouldn't be better to
revert it, since it is quite broken.

Paolo
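
P.S. For anyone following the thread without vmx.c at hand, here is a
sketch of where the preemption timer bit comes from and what the hunk
undoes.  This is from memory rather than a verbatim quote of kvm/next,
so treat the exact expressions as approximate:

	/*
	 * On the emulated L1->L2 entry, prepare_vmcs02() merges the
	 * pin-based controls that L1 asked for in vmcs12 (possibly
	 * including PIN_BASED_VMX_PREEMPTION_TIMER) into KVM's own
	 * baseline controls:
	 */
	vmcs_write32(PIN_BASED_VM_EXEC_CONTROL,
		     vmcs_config.pin_based_exec_ctrl |
		     vmcs12->pin_based_vm_exec_control);

	/*
	 * The hunk adds the symmetric write on the L2->L1 vmexit
	 * path: vmx_pin_based_exec_ctrl() returns only the controls
	 * KVM itself wants while L1 runs, without the preemption
	 * timer bit, so the timer stops ticking once we are back
	 * in L1.
	 */
	vmcs_write32(PIN_BASED_VM_EXEC_CONTROL,
		     vmx_pin_based_exec_ctrl(vmx));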