On Tue, 01 Nov 2022 19:39:25 +0000,
Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Mon, Oct 31, 2022, Gavin Shan wrote:
> > The VCPU isn't expected to be runnable when the dirty ring becomes
> > soft full, until the dirty pages are harvested and the dirty ring
> > is reset from userspace. So there is a check on each guest entry to
> > see whether the dirty ring is soft full or not. The VCPU is stopped
> > from running if its dirty ring has become soft full. A similar check
> > will be needed when the feature is supported on ARM64. As Marc
> > Zyngier suggested, a new event avoids the pointless overhead of
> > checking the size of the dirty ring ('vcpu->kvm->dirty_ring_size')
> > on every guest entry.
> >
> > Add KVM_REQ_DIRTY_RING_SOFT_FULL. The event is raised when the dirty
> > ring becomes soft full in kvm_dirty_ring_push(). The event is
> > cleared in the check, done in the newly added helper
> > kvm_dirty_ring_check_request(), or when the dirty ring is reset by
> > userspace. Since the VCPU is not runnable when the dirty ring
> > becomes soft full, the KVM_REQ_DIRTY_RING_SOFT_FULL event is always
> > set to prevent the VCPU from running until the dirty pages are
> > harvested and the dirty ring is reset by userspace.
> >
> > kvm_dirty_ring_soft_full() becomes a private function with the newly
> > added helper kvm_dirty_ring_check_request(). The alignment of the
> > various event definitions in kvm_host.h is changed to a tab
> > character while at it. In order to avoid using 'container_of()', the
> > argument @ring is replaced by @vcpu in kvm_dirty_ring_push() and
> > kvm_dirty_ring_reset(). The argument @kvm to kvm_dirty_ring_reset()
> > is dropped since it can be retrieved from the VCPU.
> >
> > Link: https://lore.kernel.org/kvmarm/87lerkwtm5.wl-maz@xxxxxxxxxx
> > Suggested-by: Marc Zyngier <maz@xxxxxxxxxx>
> > Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
> > Reviewed-by: Peter Xu <peterx@xxxxxxxxxx>
> > ---
>
> Reviewed-by: Sean Christopherson <seanjc@xxxxxxxxxx>
>
> > @@ -142,13 +144,17 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
> >
> >  	kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
> >
> > +	if (!kvm_dirty_ring_soft_full(ring))
> > +		kvm_clear_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu);
> > +
>
> Marc, Peter, and/or Paolo, can you confirm that clearing the request
> here won't cause ordering problems? Logically, this makes perfect
> sense (to me, since I suggested it), but I'm mildly concerned I'm
> overlooking an edge case where KVM could end up with a soft-full
> ring but no pending request.

I don't think you can end up with a soft-full ring and no pending
request, as kvm_make_request() enforces ordering and the request is
made on the vcpu itself. Even on arm64, this is guaranteed to be
ordered (same CPU, same address).

However, resetting the ring and clearing the request are not ordered,
which can lead to a slightly odd situation where the two events are
out of sync. But kvm_dirty_ring_check_request() requires both the
request to be set and the ring to be soft full before taking any
action, which works around the lack of ordering.

I'd be much happier if kvm_clear_request() were fully ordered, but I
otherwise don't think we have an issue here.

Thanks,

	M.

--
Without deviation from the norm, progress is not possible.
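
[Editor's note: for reference, a minimal sketch of the dual check Marc
describes, modelled on the helper the patch adds. The field name
vcpu->dirty_ring and the KVM_EXIT_DIRTY_RING_FULL exit plumbing are
illustrative assumptions, not verbatim kernel code.]

	/*
	 * Sketch of kvm_dirty_ring_check_request(): act only when the
	 * KVM_REQ_DIRTY_RING_SOFT_FULL request is pending *and* the
	 * ring is still soft full, which is what tolerates the
	 * unordered reset/clear discussed above.
	 */
	bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu)
	{
		/*
		 * kvm_check_request() clears the pending request as a
		 * side effect, so only proceed if the ring really is
		 * still soft full.
		 */
		if (kvm_check_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu) &&
		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
			/*
			 * Re-raise the request so the vCPU stays blocked
			 * until userspace harvests the dirty pages and
			 * resets the ring.
			 */
			kvm_make_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu);
			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
			return true;
		}

		return false;
	}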