Re: [PATCH v3 1/3] KVM: Implement dirty quota-based throttling of vcpus

On 03/05/22 7:13 pm, Peter Xu wrote:
On Tue, May 03, 2022 at 12:52:26PM +0530, Shivam Kumar wrote:
On 03/05/22 3:44 am, Peter Xu wrote:
Hi, Shivam,

On Sun, Mar 06, 2022 at 10:08:48PM +0000, Shivam Kumar wrote:
+static inline int kvm_vcpu_check_dirty_quota(struct kvm_vcpu *vcpu)
+{
+	u64 dirty_quota = READ_ONCE(vcpu->run->dirty_quota);
+	u64 pages_dirtied = vcpu->stat.generic.pages_dirtied;
+	struct kvm_run *run = vcpu->run;
+
+	if (!dirty_quota || (pages_dirtied < dirty_quota))
+		return 1;
+
+	run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
+	run->dirty_quota_exit.count = pages_dirtied;
+	run->dirty_quota_exit.quota = dirty_quota;
Pure question: why does this need to be returned to userspace?  Is this value
set from userspace?

1) The quota needs to be replenished once exhausted.
2) The vcpu should be made to sleep if it has consumed its quota too
quickly.

Both these actions are performed on the userspace side, where we expect a
thread calculating the quota at very small regular intervals based on
network bandwidth information. This can enable us to micro-stun the vcpus
(steal their runtime just at the moments they are dirtying heavily).

We have implemented a "common quota" approach, i.e. transferring any unused
quota to a common pool so that it can be consumed by any vcpu in the next
interval on an FCFS basis.
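
Roughly, the per-vcpu run loop on the userspace side would look something
like the sketch below. quota_pool_take() is a made-up helper for
illustration (it hands out the next allotment from the common pool and
blocks, i.e. stuns the vcpu, until the quota thread replenishes the pool at
the next interval); the kvm_run fields are the ones introduced by this
patch.

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Hypothetical helper: blocks until quota is available. */
	uint64_t quota_pool_take(void);

	static void vcpu_run_loop(struct kvm_run *run, int vcpu_fd)
	{
		while (1) {
			ioctl(vcpu_fd, KVM_RUN, 0);

			if (run->exit_reason != KVM_EXIT_DIRTY_QUOTA_EXHAUSTED)
				continue;	/* handle other exits as usual */

			/*
			 * pages_dirtied is cumulative, so the new quota is the
			 * count reported at exit time plus the fresh allotment
			 * taken from the common pool.
			 */
			run->dirty_quota = run->dirty_quota_exit.count +
					   quota_pool_take();
		}
	}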

It seemed best to implement all this logic on the userspace side and keep
only the dirty count, plus the logic to exit to userspace whenever the vcpu
has consumed its quota, on the kernel side. The count is required on the
userspace side because there are cases where a vcpu can actually dirty more
than its quota (e.g. if PML is enabled). Hence, this information can be
useful on the userspace side and can be used to re-adjust the next quotas.
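
For instance, the quota-computing thread could fold any overshoot reported
at exit time back into the next interval's allotment, along these lines
(sketch only; interval_budget() is a made-up helper returning the
bandwidth-derived per-interval budget):

	#include <stdint.h>

	uint64_t interval_budget(void);	/* hypothetical, bandwidth-derived */

	/*
	 * Deduct pages dirtied beyond the previous quota (e.g. due to PML)
	 * from the next allotment handed to this vcpu.
	 */
	static uint64_t next_allotment(uint64_t count, uint64_t quota)
	{
		uint64_t overshoot = count > quota ? count - quota : 0;
		uint64_t budget = interval_budget();

		return budget > overshoot ? budget - overshoot : 0;
	}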
I agree this information is useful.  Though my question was: if userspace
already has a (per-vcpu) copy of that, then is it not needed to pass it
over anymore?
This is how we started, but based on feedback from Sean we moved 'pages_dirtied' to vcpu stats, as it can be a useful stat in its own right. The 'dirty_quota' variable is already shared with userspace (it lives in the vcpu run struct), so userspace can modify the quota on the go. Hence it made sense to pass both values at the time of exit: the vcpu might be exiting against an old copy of the dirty quota, and userspace needs to know which value was actually applied.
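
For reference, the uapi pieces involved look roughly like this (paraphrasing
the snippet above, not a verbatim quote of the patch):

	/* Filled by KVM on a KVM_EXIT_DIRTY_QUOTA_EXHAUSTED exit (part of
	 * the kvm_run exit union): */
	struct {
		__u64 count;	/* pages dirtied by this vcpu so far */
		__u64 quota;	/* the (possibly stale) quota it exited against */
	} dirty_quota_exit;

	/* Plain field in struct kvm_run, writable by userspace at any time: */
	__u64 dirty_quota;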

Thanks.
Thank you for the question. Please let me know if you have further concerns.

+	return 0;
+}
The other high level question is whether you have considered using the ring
full event to achieve similar goal?

Right now the KVM_EXIT_DIRTY_RING_FULL event is generated when the per-vcpu
ring gets full.  I think there's a problem that the ring size cannot be set
arbitrarily but must be a power of 2.  Also, there is a maximum allowed
ring size, at the least.

However, since the ring size can be fairly small (e.g. 4096 entries) it can
still achieve some kind of accuracy.  For example, userspace can
quickly kick the vcpu back into KVM_RUN until it sees that the vcpu reaches
some quota (and actually that's how dirty-limit is implemented in QEMU,
contributed by China Telecom):

https://lore.kernel.org/qemu-devel/cover.1646243252.git.huangy81@chinatelecom.cn/

Is there perhaps some explicit reason that dirty ring cannot be used?

Thanks!
When we started this series, AFAIK it was not possible to change the dirty
ring size once the vcpus were created, so we couldn't set the ring size
dynamically.
Agreed.  The ring size can only be set at startup and can't be changed.

Also, since we are going for micro-stunning and the allowed dirties in
such small intervals can be pretty low, it can cause issues if we can
only use a dirty quota which is a power of 2. For instance, if the dirty
quota is to be set to 9, we can only set it to 16 (if we round up), and if
the dirty quota is to be set to 15, we can only set it to 8 (if we round
down). I hope you'd agree that this can make a huge difference.
Yes. As discussed above, I didn't expect the ring size to be the quota
per se; what I'm wondering is whether we can leverage a small, constant-sized
ring to emulate the behavior of a quota of any size, but with a minimum
granule of the dirty ring size.
This would be an interesting thing to try. I've already planned efforts to optimise this for the dirty ring interface. Thank you for this suggestion.
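
If I understand the suggestion correctly, the loop on the userspace side
would look roughly like the sketch below (harvest_dirty_ring() and
wait_for_next_interval() are made-up helpers standing in for
collecting/resetting the ring entries and for the stun, respectively):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	uint64_t harvest_dirty_ring(int vcpu_fd);	/* hypothetical */
	void wait_for_next_interval(void);		/* hypothetical */

	/*
	 * Emulate an arbitrary quota on top of a small, fixed-size dirty
	 * ring: keep re-entering KVM_RUN on each ring-full exit until the
	 * vcpu has dirtied 'quota' pages in this interval, then stun it.
	 * The granule is the ring size.
	 */
	static void run_with_ring_quota(struct kvm_run *run, int vcpu_fd,
					uint64_t quota)
	{
		uint64_t dirtied = 0;

		while (1) {
			ioctl(vcpu_fd, KVM_RUN, 0);

			if (run->exit_reason != KVM_EXIT_DIRTY_RING_FULL)
				continue;	/* handle other exits as usual */

			dirtied += harvest_dirty_ring(vcpu_fd);

			if (dirtied >= quota) {
				wait_for_next_interval();
				dirtied = 0;
			}
		}
	}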

Side question: Is there any plan to make it possible to dynamically update the dirty ring size?
Also, this approach works for both the dirty bitmap and the dirty ring
interface, which can help in extending this solution to other architectures.
Is there any specific arch that you're interested outside x86?
x86 is the first priority but this patchset targets s390 and arm as well.

Logically we can also think about extending dirty ring to other archs, but
there were indeed challenges where some pages can be dirtied without a vcpu
context, and that's why it was only supported initially on x86.
This is an interesting problem and we are aware of it. We have a couple of ideas but they are very raw as of now.

I think it should not be a problem for the quota solution, because it's
backed up by the dirty bitmap so no dirty page will be overlooked for
migration purposes, which is definitely a benefit.  But I'm still curious
whether you have looked into any specific archs already (x86 doesn't have
such a problem), and whether there's some quota you'd still want to apply
where there's no vcpu context.
Yes, this is quite similar to one of the ideas we have thought of. Though, there are many things which need a lot of brainstorming, e.g. the ratio in which we can split the overall quota to accommodate dirties with no vcpu context.
Thanks,
Thanks again for these invaluable comments, Peter.


