Hi, Shivam,

On Sun, Mar 06, 2022 at 10:08:48PM +0000, Shivam Kumar wrote:
> +static inline int kvm_vcpu_check_dirty_quota(struct kvm_vcpu *vcpu)
> +{
> +	u64 dirty_quota = READ_ONCE(vcpu->run->dirty_quota);
> +	u64 pages_dirtied = vcpu->stat.generic.pages_dirtied;
> +	struct kvm_run *run = vcpu->run;
> +
> +	if (!dirty_quota || (pages_dirtied < dirty_quota))
> +		return 1;
> +
> +	run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
> +	run->dirty_quota_exit.count = pages_dirtied;
> +	run->dirty_quota_exit.quota = dirty_quota;

Pure question: why does this value need to be returned to userspace?  Isn't it set from userspace in the first place?

> +	return 0;
> +}

The other high-level question is whether you have considered using the ring-full event to achieve a similar goal.

Right now the KVM_EXIT_DIRTY_RING_FULL event is generated when the per-vcpu ring gets full.  I think there's a limitation in that the ring size cannot be set arbitrarily but must be a power of 2, and there is also a maximum allowed ring size.  However, since the ring size can be fairly small (e.g. 4096 entries), it can still achieve some degree of accuracy.  For example, userspace can quickly kick the vcpu back into KVM_RUN until it sees that the vcpu has reached some quota (and that is actually how the dirty-limit is implemented in QEMU, contributed by China Telecom):

https://lore.kernel.org/qemu-devel/cover.1646243252.git.huangy81@xxxxxxxxxxxxxxx/

Is there perhaps some explicit reason the dirty ring cannot be used?

Thanks!

-- 
Peter Xu