Re: [PATCH v2 0/1] KVM: Dirty quota-based throttling

I would be grateful for any feedback on the dirty quota v2 patchset.

On 20/12/21 11:27 am, Shivam Kumar wrote:
This is v2 of the dirty quota series, with some fundamental changes in
implementation based on the feedback received. The major changes are listed
below:


i) Squashed the changes into one commit.

Previously, the patchset had six patches, but the individual patches were
not complete on their own. Also, v2 has a much simpler implementation, so
it made sense to squash the changes into just one commit:
   KVM: Implement dirty quota-based throttling of vcpus


ii) Unconditionally incrementing dirty count of vcpu.

As per the discussion on the previous patchset, the dirty count can serve
purposes other than migration, e.g. it can be used to estimate the
per-vcpu dirty rate. Also, incrementing the dirty count unconditionally
avoids acquiring and releasing the kvm mutex lock in the VMEXIT path for
every page write fault.
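
Roughly, this could look like the following in the common dirty-logging
path (the hook point mark_page_dirty_in_slot() and the stat name
pages_dirtied are used here purely for illustration and may not match the
patch exactly):

/* Illustrative sketch, virt/kvm/kvm_main.c */
void mark_page_dirty_in_slot(struct kvm *kvm,
			     const struct kvm_memory_slot *memslot, gfn_t gfn)
{
	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

	/*
	 * Count every page dirtied by the vcpu, whether or not dirty
	 * logging is enabled, so no lock has to be taken in the VMEXIT
	 * path just to decide whether to count.
	 */
	if (vcpu)
		vcpu->stat.generic.pages_dirtied++;

	/* ... existing dirty bitmap / dirty ring handling ... */
}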


iii) Sharing dirty quota and dirty count with userspace through kvm_run.

Previously, dirty quota and dirty count were defined in a struct that was
mmapped so that these variables could be shared with userspace. Now, dirty
quota is defined in the kvm_run structure, and dirty count is also passed
to userspace through kvm_run, to prevent memory wastage.
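
For illustration, the userspace-visible part could look roughly like this
(the field name, exit-reason name and exact layout are assumptions made
for this sketch, not necessarily the final uapi):

/* Illustrative sketch, include/uapi/linux/kvm.h */
struct kvm_run {
	...
	union {
		...
		/* KVM_EXIT_DIRTY_QUOTA_EXHAUSTED */
		struct {
			__u64 count;	/* dirty count at the time of exit */
			__u64 quota;	/* dirty quota that was exhausted  */
		} dirty_quota_exit;
		...
	};
	...
	/* Written by userspace, read by KVM; zero disables throttling. */
	__u64 dirty_quota;
};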


iv) Organised the implementation to accommodate other architectures in
upcoming patches.

We have added the dirty count to the kvm_vcpu_stat_generic structure so
that it can be used as a vcpu stat for all architectures. For any new
architecture, we now just need to add a conditional exit to userspace from
the kvm run loop.
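
For a given architecture, the hook could look roughly like this (using x86
as the example; kvm_vcpu_check_dirty_quota() and
KVM_EXIT_DIRTY_QUOTA_EXHAUSTED are names assumed only for this sketch):

/* Illustrative sketch, arch/x86/kvm/x86.c */
static int vcpu_run(struct kvm_vcpu *vcpu)
{
	...
	for (;;) {
		/*
		 * If the vcpu has exhausted its dirty quota, stop entering
		 * the guest and report it to userspace.
		 */
		if (!kvm_vcpu_check_dirty_quota(vcpu)) {
			vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
			vcpu->run->dirty_quota_exit.count =
				vcpu->stat.generic.pages_dirtied;
			vcpu->run->dirty_quota_exit.quota =
				vcpu->run->dirty_quota;
			return 0;
		}
		...
	}
}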


v) Removed the ioctl to enable/disable dirty quota: Dirty quota throttling
can be enabled/disabled based on the dirty quota value itself. If dirty
quota is zero, throttling is disabled. For any non-zero value of dirty
quota, the vcpu has to exit to userspace whenever the dirty count equals
or exceeds the dirty quota. Thus, we don't need a separate flag to
enable/disable dirty quota throttling, and no ioctl is required.
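
The check itself can then be a small common helper, roughly as follows
(again, the names are only for the sketch):

/* Illustrative sketch of the common helper */
static inline bool kvm_vcpu_check_dirty_quota(struct kvm_vcpu *vcpu)
{
	u64 dirty_count = vcpu->stat.generic.pages_dirtied;
	u64 dirty_quota = READ_ONCE(vcpu->run->dirty_quota);

	/* A dirty quota of zero means throttling is disabled. */
	if (!dirty_quota)
		return true;

	/* false => quota exhausted, the caller should exit to userspace. */
	return dirty_count < dirty_quota;
}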


vi) Naming changes: "Dirty quota migration" has been replaced with the
more appropriate term "dirty quota throttling".


Here's a brief overview of how dirty quota throttling is expected to work:

With dirty quota throttling, memory dirtying is throttled by setting a
limit on the number of pages a vcpu can dirty within a fixed, very small
time interval (the dirty quota interval).


Userspace                                 KVM

[At the start of dirty logging]
Initialize dirty quota to some
non-zero value for each vcpu.    ----->   [When dirty logging starts]
                                           Start incrementing dirty count
                                           for every page dirtied by the vcpu.

                                           [Dirty count equals/exceeds
                                           dirty quota]
If the vcpu has already claimed  <-----   Exit to userspace.
its quota for the current dirty
quota interval, sleep the vcpu
until the next interval starts.

Give the vcpu its share for the
current dirty quota interval.    ----->   Continue dirtying with the newly
                                           received quota.

[At the end of dirty logging]
Set dirty quota back to zero
for every vcpu.                 ----->    Throttling disabled.
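
On the userspace side, the handling of such an exit could look roughly
like this (the exit reason and fields follow the sketches above;
wait_for_next_dirty_quota_interval() and per_vcpu_share are hypothetical
helpers standing in for the userspace policy):

/* Illustrative userspace sketch (VMM vcpu thread) */
case KVM_EXIT_DIRTY_QUOTA_EXHAUSTED:
	/*
	 * The vcpu has claimed its share for the current dirty quota
	 * interval; block it until the next interval starts.
	 */
	wait_for_next_dirty_quota_interval();

	/*
	 * The dirty count is never reset, so the new quota is the count
	 * reported at exit time plus the share for the new interval.
	 */
	run->dirty_quota = run->dirty_quota_exit.count + per_vcpu_share;
	break;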


Userspace can design a strategy to distribute the overall amount of
dirtying allowed for the VM (which it can estimate from the available
network bandwidth and the desired degree of throttling) among the vcpus,
e.g.

Dividing the available dirtying allowance equally among all the vcpus can
ensure fairness and selective throttling: a vcpu that dirties memory
extensively will consume its share quickly and will have to wait for a new
share before it can continue dirtying, while another vcpu running a
mostly-read workload may not consume its share as quickly and is left
unaffected. This ensures that only write-heavy workloads are penalised,
with little effect on read workloads.

However, there can be skewed cases where a few vcpus are not dirtying
much and are sitting on a large unused dirty quota. This unused quota
could be used by other vcpus. So, the share of a vcpu, if not claimed in a
given interval, can be added to a common pool which is served on a
first-come-first-served basis. The common pool can be claimed by a vcpu
only after it has exhausted its individual share for the given time
interval.
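
As one possible sketch of such a policy on the userspace side (everything
below, including the struct, helpers and globals, is hypothetical and only
meant to illustrate the idea):

/* Illustrative userspace policy sketch, not part of the patch */
static uint64_t claim_dirty_quota(struct vcpu_throttle_state *vs)
{
	uint64_t share = 0;

	if (!vs->share_claimed_this_interval) {
		/* First claim in this interval: hand out the vcpu's share. */
		vs->share_claimed_this_interval = true;
		share = per_vcpu_share;
	} else if (common_pool_pages > 0) {
		/*
		 * The vcpu has exhausted its own share; serve it from the
		 * common pool of unclaimed quota, first come first served.
		 */
		share = common_pool_pages < per_vcpu_share ?
			common_pool_pages : per_vcpu_share;
		common_pool_pages -= share;
	}

	return share;
}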


Please find v1 of dirty quota series here:
https://lore.kernel.org/kvm/20211114145721.209219-1-shivam.kumar1@xxxxxxxxxxx/

Please find the KVM Forum presentation on dirty quota-based throttling
here: https://www.youtube.com/watch?v=ZBkkJf78zFA


Shivam Kumar (1):
   KVM: Implement dirty quota-based throttling of vcpus

  arch/x86/kvm/x86.c        | 17 +++++++++++++++++
  include/linux/kvm_types.h |  5 +++++
  include/uapi/linux/kvm.h  | 12 ++++++++++++
  virt/kvm/kvm_main.c       |  4 ++++
  4 files changed, 38 insertions(+)



