On Mon, Sep 27, 2021, Paolo Bonzini wrote:
> On Mon, Sep 27, 2021 at 5:17 PM Christian Borntraeger
> <borntraeger@xxxxxxxxxx> wrote:
> > > So I think there are two possibilities that makes sense:
> > >
> > > * track what is using KVM_CAP_HALT_POLL, and make writes to halt_poll_ns follow that
> >
> > what about using halt_poll_ns for those VMs that did not uses KVM_CAP_HALT_POLL and the private number for those that did.
>
> Yes, that's what I meant.  David pointed out that doesn't allow you to
> disable halt polling altogether, but for that you can always ask each
> VM's userspace one by one, or just not use KVM_CAP_HALT_POLL.  (Also, I
> don't know about Google's usecase, but mine was actually more about
> using KVM_CAP_HALT_POLL to *disable* halt polling on some VMs!).

I kinda like the idea of special-casing halt_poll_ns=0, e.g. for testing or
in-the-field mitigation if halt-polling is broken.  It'd be trivial to
support, e.g.

@@ -3304,19 +3304,23 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 	update_halt_poll_stats(vcpu, start, poll_end, !waited);
 
 	if (halt_poll_allowed) {
+		max_halt_poll_ns = vcpu->kvm->max_halt_poll_ns;
+		if (!max_halt_poll_ns || !halt_poll_ns)		<------ squish the max if halt_poll_ns==0
+			max_halt_poll_ns = halt_poll_ns;
+
 		if (!vcpu_valid_wakeup(vcpu)) {
 			shrink_halt_poll_ns(vcpu);
-		} else if (vcpu->kvm->max_halt_poll_ns) {
+		} else if (max_halt_poll_ns) {
 			if (halt_ns <= vcpu->halt_poll_ns)
 				;
 			/* we had a long block, shrink polling */
 			else if (vcpu->halt_poll_ns &&
-				 halt_ns > vcpu->kvm->max_halt_poll_ns)
+				 halt_ns > max_halt_poll_ns)
 				shrink_halt_poll_ns(vcpu);
 			/* we had a short halt and our poll time is too small */
-			else if (vcpu->halt_poll_ns < vcpu->kvm->max_halt_poll_ns &&
-				 halt_ns < vcpu->kvm->max_halt_poll_ns)
-				grow_halt_poll_ns(vcpu);
+			else if (vcpu->halt_poll_ns < max_halt_poll_ns &&
+				 halt_ns < max_halt_poll_ns)
+				grow_halt_poll_ns(vcpu, max_halt_poll_ns);
 		} else {
 			vcpu->halt_poll_ns = 0;
 		}
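
Tangentially, for the "disable halt polling on some VMs" use case, the
userspace side is just KVM_ENABLE_CAP on the VM fd with a max of zero.
Rough, untested sketch below; the helper name and vm_fd are made up for
illustration:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical helper: clamp this VM's max halt-polling time to 0 ns,
 * i.e. turn off halt polling for the VM, via KVM_CAP_HALT_POLL.
 */
static int disable_halt_polling(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_HALT_POLL,
		.args = { 0 },	/* args[0] = max halt-polling time in ns */
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}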