Re: [PATCH v6 1/8] KVM: x86: Introduce KVM_REQ_RING_SOFT_FULL

Hi Sean,

On 10/21/22 11:25 PM, Sean Christopherson wrote:
On Fri, Oct 21, 2022, Gavin Shan wrote:
I think Marc wants to make the check more generalized with a new event [1].

Generalized code can be achieved with a helper though.  The motivation is indeed
to avoid overhead on every run:

   : A seemingly approach would be to make this a request on dirty log
   : insertion, and avoid the whole "check the log size" on every run,
   : which adds pointless overhead to unsuspecting users (aka everyone).


https://lore.kernel.org/kvmarm/87lerkwtm5.wl-maz@xxxxxxxxxx


Ok. I would say both are motivations. I will refer to those words in the commit
log and include the link, so that the motivations are clearly mentioned in the
commit log.
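
As a rough sketch, the insertion-time request amounts to something like the
following (the thread refers to the request as both KVM_REQ_RING_SOFT_FULL and
KVM_REQ_DIRTY_RING_SOFT_FULL; the vcpu-taking prototype and the exact call site
in kvm_dirty_ring_push() may differ in the final series):

  void kvm_dirty_ring_push(struct kvm_vcpu *vcpu, u32 slot, u64 offset)
  {
          struct kvm_dirty_ring *ring = &vcpu->dirty_ring;

          /* ... record the dirty GFN and advance dirty_index ... */

          /*
           * Raise the request when crossing the soft limit, so that
           * vcpu_enter_guest() doesn't need to poll the ring size on
           * every run.
           */
          if (kvm_dirty_ring_soft_full(ring))
                  kvm_make_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu);
  }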

I'm pretty sure the check can be moved to the very end of the request checks,
e.g. to avoid an aborted VM-Enter attempt if one of the other request triggers
KVM_REQ_RING_SOFT_FULL.

Heh, this might actually be a bug fix of sorts.  If anything pushes to the ring
after the check at the start of vcpu_enter_guest(), then without the request, KVM
would enter the guest while at or above the soft limit, e.g. record_steal_time()
can dirty a page, and the big pile of stuff that's behind KVM_REQ_EVENT can
certainly dirty pages.


When the dirty ring becomes completely full, the vCPU can't handle any further
operations, since those would produce more dirty pages.

Right, but there's a buffer of 64 entries on top of what the CPU can buffer (VMX's
PML can buffer 512 entries).  Hence the "soft full".  If x86 is already on the
edge of exhausting that buffer, i.e. can fill 64 entries while handling requests,
then we need to increase the buffer provided by the soft limit because sooner or
later KVM will be able to fill 65 entries, at which point errors will occur
regardless of when the "soft full" request is processed.

In other words, we can take advantage of the fact that the soft-limit buffer needs
to be quite conservative.
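
For reference, that conservative slack is roughly how the existing helpers
express it (a sketch following the shape of virt/kvm/dirty_ring.c; treat the
exact names and constants as illustrative):

  /* 64 spare entries plus whatever the CPU can buffer (512 for VMX PML). */
  int kvm_dirty_ring_get_rsvd_entries(void)
  {
          return KVM_DIRTY_RING_RSVD_ENTRIES + kvm_cpu_dirty_log_size();
  }

  /* soft_limit is the ring size minus the reserved entries above. */
  static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
  {
          return kvm_dirty_ring_used(ring) >= ring->soft_limit;
  }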


Right, there are 64 extra entries in the ring between soft full and hard full,
and another 512 entries are reserved when PML is enabled. However, the other
requests that produce dirty pages are producers to the ring, and we can't just
assume those producers will need fewer than 64 entries. So I think
KVM_REQ_DIRTY_RING_SOFT_FULL should have higher priority than the other requests,
except KVM_REQ_VM_DEAD, which needs to be handled immediately.
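
A sketch of that ordering in vcpu_enter_guest(), assuming the request is checked
right after KVM_REQ_VM_DEAD and exits to userspace with the existing
KVM_EXIT_DIRTY_RING_FULL reason (all other request handling elided):

  if (kvm_request_pending(vcpu)) {
          if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
                  r = -EIO;
                  goto out;
          }

          /*
           * Exit so userspace can harvest the ring before any of the
           * remaining requests push more dirty pages.
           */
          if (kvm_check_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu)) {
                  vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
                  r = 0;
                  goto out;
          }

          /* ... other requests ... */
  }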

Would it make sense to clear the request in kvm_dirty_ring_reset()?  I don't care
about the overhead of having to re-check the request, the goal would be to help
document what causes the request to go away.

E.g. modify kvm_dirty_ring_reset() to take @vcpu and then do:

	if (!kvm_dirty_ring_soft_full(ring))
		kvm_clear_request(KVM_REQ_RING_SOFT_FULL, vcpu);


It's reasonable to clear KVM_REQ_DIRTY_RING_SOFT_FULL when the ring is reset.
@vcpu can be retrieved from @ring with container_of().

Using container_of() is silly, there's literally one caller that does:

	kvm_for_each_vcpu(i, vcpu, kvm)
		cleared += kvm_dirty_ring_reset(vcpu->kvm, &vcpu->dirty_ring);


May I ask why it's silly to use container_of()? In order to avoid using
container_of(), kvm_dirty_ring_push() also needs @vcpu. So let's change those
two functions to something like below. Please double-check whether they look
good to you:

  void kvm_dirty_ring_push(struct kvm_vcpu *vcpu, u32 slot, u64 offset);
  int kvm_dirty_ring_reset(struct kvm_vcpu *vcpu);
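
And a sketch of the reset side with those prototypes, clearing the request once
the ring has drained below the soft limit (the harvesting loop is elided):

  int kvm_dirty_ring_reset(struct kvm_vcpu *vcpu)
  {
          struct kvm_dirty_ring *ring = &vcpu->dirty_ring;
          int count = 0;

          /* ... collect harvested entries and advance reset_index ... */

          /*
           * Clearing the request here documents that only a reset can
           * take the vCPU back below the soft limit.
           */
          if (!kvm_dirty_ring_soft_full(ring))
                  kvm_clear_request(KVM_REQ_DIRTY_RING_SOFT_FULL, vcpu);

          return count;
  }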

Thanks,
Gavin
