Re: Interest in contributing to KVM TODO

Hi Sean,

Thank you for getting back to us. We are looking at a timeline of
about 3-4 weeks. We are interested in KVM/virtualization in general
and memory management. Ideally, we would work on a tightly scoped task,
given our limited experience and timeline. We initially planned to
change the eviction logic in `arch/x86/kvm/mmu/mmu.c` from the current
FIFO scheme to approximate LRU, then run performance benchmarks to
compare the two. However, we understand from your reply that KVM now
defaults to the TDP MMU.
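For context, the approximation we had in mind was the classic clock
(second-chance) algorithm. A toy Python sketch, purely illustrative and
nothing like the real data structures in mmu.c:

```python
# Clock (second-chance) eviction: an O(1)-amortized approximation of LRU.
# Each slot carries a "referenced" bit; on eviction the hand sweeps,
# clearing bits, and evicts the first slot whose bit is already clear.

class ClockCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []   # list of [page, referenced_bit]
        self.index = {}   # page -> position in self.slots
        self.hand = 0

    def touch(self, page):
        """Access a page; returns the evicted page, or None."""
        if page in self.index:
            self.slots[self.index[page]][1] = True  # mark referenced
            return None
        if len(self.slots) < self.capacity:
            self.index[page] = len(self.slots)
            self.slots.append([page, True])
            return None
        # Cache full: sweep, giving referenced pages a second chance.
        while self.slots[self.hand][1]:
            self.slots[self.hand][1] = False
            self.hand = (self.hand + 1) % self.capacity
        evicted = self.slots[self.hand][0]
        del self.index[evicted]
        self.slots[self.hand] = [page, True]
        self.index[page] = self.hand
        self.hand = (self.hand + 1) % self.capacity
        return evicted
```

The attraction over strict LRU is that an access only sets a bit instead
of reordering a list, which is why it seemed like a plausible fit for a
hot path.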

Are there any backlog tasks that fit our timeline? We are open to
anything (comparison benchmarks, bug fixes, small features, etc.). Our
solution will probably not be foolproof, but we hope to produce
something actionable for future KVM efforts. Happy to discuss further.

Regards,
Aaron
---------------------------------------------------------------------
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Date: Thursday, 20 February 2025 at 11:06 AM
To: Aaron Ang <a1ang@xxxxxxxx>
Cc: kvm@xxxxxxxxxxxxxxx <kvm@xxxxxxxxxxxxxxx>, jukaufman@xxxxxxxx
<jukaufman@xxxxxxxx>, eth003@xxxxxxxx <eth003@xxxxxxxx>, Alex Asch
<aasch@xxxxxxxx>
Subject: Re: Interest in contributing to KVM TODO

On Wed, Jan 22, 2025, Aaron Ang wrote:
> Hi KVM team,
>
> We are a group of graduate students from the University of California,
> San Diego, interested in contributing to KVM as part of our class
> project. We have identified a task from the TODO that we would like to

Oof, https://www.linux-kvm.org/page/TODO is a "bit" stale.

> tackle: Improve mmu page eviction algorithm (currently FIFO, change to
> approximate LRU). May I know if there are any updates on this task,
> and is there room for us to develop in this space?

AFAIK, no one is working on this particular task, but honestly I
wouldn't bother.
There are use cases that still rely on shadow paging[1], but those tend to be
highly specialized and either ensure there are always "enough" MMU
pages available,
or in the case of PVM, I suspect there are _significant_ out-of-tree changes to
optimize shadow paging as a whole.

With the TDP MMU, KVM completely ignores the MMU page limit (both KVM's default
and the limit set by KVM_SET_NR_MMU_PAGES).  With TDP, i.e. without shadow
paging, the number of possible MMU pages is a direct function of the amount of
memory exposed to the guest, i.e. there is no danger of KVM accumulating too
many page tables due to shadowing a large number of guest CR3s.
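E.g. as a back-of-the-envelope sketch (my own arithmetic, assuming 4-level
x86-64 paging with 4KiB pages and 512 entries per table; huge pages would
shrink it further):

```python
# Rough upper bound on TDP page-table pages for a guest of a given size,
# assuming 4KiB mappings, 512 8-byte entries per 4KiB table, 4 levels.
ENTRIES = 512
PAGE = 4096

def tdp_table_pages(guest_bytes):
    total, span = 0, PAGE
    for _ in range(4):                    # PT, PD, PDPT, PML4
        span *= ENTRIES                   # bytes mapped by one table here
        total += -(-guest_bytes // span)  # ceiling division
    return total

# A 4GiB guest: 2048 leaf tables (one per 2MiB) plus a handful above.
print(tdp_table_pages(4 << 30))  # → 2054
```

I.e. the table count is bounded by guest memory size, not by how many
address spaces the guest creates.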

With nested TDP, KVM does employ shadow paging, but the behavior of an
L1 hypervisor
using TDP is wildly different than an L1 kernel managing "legacy" page
tables for
itself and userspace.  If an L1 hypervisor manages to run up against KVM's limit
on the number of MMU pages, then in all likelihood it deserves to die :-)

What areas are y'all looking to explore?  E.g. KVM/virtualization in general,
memory management in particular, something else entirely?  And what timeline are
you operating on, i.e. how big of a problem/project are you looking to tackle?

[1] https://lore.kernel.org/all/20240226143630.33643-1-jiangshanlai@gmail.com

> We also plan to introduce other algorithms and compare their performance
> across various workloads. We would be happy to talk to the engineers owning
> the MMU code to see how we can coordinate our efforts. Thank you.




