Re: [RFC PATCH v3 00/10] Add support for shared PTEs across processes

On 10/7/24 9:45 AM, Sean Christopherson wrote:
On Mon, Oct 07, 2024, David Hildenbrand wrote:
On 07.10.24 17:58, Dave Hansen wrote:
On 10/7/24 01:44, David Hildenbrand wrote:
On 02.10.24 19:35, Dave Hansen wrote:
We were just chatting about this on David Rientjes's MM alignment call.
Unfortunately I was not able to attend this time; my body decided it was a
good idea to keep me in bed for a couple of days.

I thought I'd try to give a little brain dump.

Let's start by thinking about KVM and secondary MMUs.  KVM has a primary
mm: the QEMU (or whatever) process mm.  The virtualization (EPT/NPT)
tables get entries that effectively mirror the primary mm page tables
and constitute a secondary MMU.  If the primary page tables change,
mmu_notifiers ensure that the changes get reflected into the
virtualization tables and also that the virtualization paging structure
caches are flushed.

msharefs is doing something very similar.  But, in the msharefs case,
the secondary MMUs are actually normal CPU MMUs.  The page tables are
normal old page tables and the caches are the normal old TLB.  That's
what makes it so confusing: we have lots of infrastructure for dealing
with that "stuff" (CPU page tables and TLB), but msharefs has
short-circuited the infrastructure and it doesn't work any more.
It's quite different IMHO, to a degree that I believe they are different
beasts:

Secondary MMUs:
* "Belongs" to same MM context and the primary MMU (process page tables)
I think you're speaking to the ratio here.  For each secondary MMU, I
think you're saying that there's one and only one mm_struct.  Is that right?
Yes, that is my understanding (at least with KVM). It's a secondary MMU
derived from exactly one primary MMU (MM context -> page table hierarchy).
I don't think the ratio is what's important.  I think the important takeaway is
that the secondary MMU is explicitly tied to the primary MMU that it is tracking.
This is enforced in code, as the list of mmu_notifiers is stored in mm_struct.

The 1:1 ratio probably holds true today, e.g. for KVM, each VM is associated with
exactly one mm_struct.  But fundamentally, nothing would prevent a secondary MMU
that manages a so-called software TLB from tracking multiple primary MMUs.

E.g. it wouldn't be all that hard to implement in KVM (a bit crazy, but not hard),
because KVM's memslots disallow gfn aliases, i.e. each index into KVM's secondary
MMU would be associated with at most one VMA and thus mm_struct.

Pulling Dave's earlier comment in:

  : But the short of it is that the msharefs host mm represents a "secondary
  : MMU".  I don't think it is really that special of an MMU other than the
  : fact that it has an mm_struct.

and David's (so. many. Davids):

  : I better not think about the complexity of secondary MMUs + mshare (e.g.,
  : KVM with mshare in guest memory): MMU notifiers for all MMs must be
  : called ...

mshare() is unique because it creates the possibility of chained "secondary" MMUs.
I.e. the fact that it has an mm_struct makes it *very* special, IMO.

This is definitely a gap in the current mshare implementation. Mapping memory
that relies on an mmu notifier in an mshare region will result in the notifier
callbacks never being called. On the surface it seems like the mshare mm needs
notifiers that go through every mm that has mapped the mshare region and call
their notifiers.


* Maintains separate tables/PTEs, in a completely separate page table
  hierarchy
This is the case for KVM and the VMX/SVM MMUs, but it's not generally
true about hardware.  IOMMUs can walk x86 page tables and populate the
IOTLB from the _same_ page table hierarchy as the CPU.
Yes, of course.
Yeah, the recent rework of invalidate_range() => arch_invalidate_secondary_tlbs()
sums things up nicely:

commit 1af5a8109904b7f00828e7f9f63f5695b42f8215
Author:     Alistair Popple <apopple@xxxxxxxxxx>
AuthorDate: Tue Jul 25 23:42:07 2023 +1000
Commit:     Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
CommitDate: Fri Aug 18 10:12:41 2023 -0700

     mmu_notifiers: rename invalidate_range notifier

     There are two main use cases for mmu notifiers.  One is by KVM which uses
     mmu_notifier_invalidate_range_start()/end() to manage a software TLB.

     The other is to manage hardware TLBs which need to use the
     invalidate_range() callback because HW can establish new TLB entries at
     any time.  Hence using start/end() can lead to memory corruption as these
     callbacks happen too soon/late during page unmap.

     mmu notifier users should therefore either use the start()/end() callbacks
     or the invalidate_range() callbacks.  To make this usage clearer rename
     the invalidate_range() callback to arch_invalidate_secondary_tlbs() and
     update documention.

I believe if I implemented an arch_invalidate_secondary_tlbs notifier that
flushed all TLBs, that would solve the problem of ensuring TLBs are flushed
before pages being unmapped from an mshare region are freed. However, there
could potentially be a lot more collateral damage from flushing everything,
since the flushes would happen more often.
