On Tue, Nov 07, 2023 at 08:11:09AM -0800, James Houghton wrote:
> This extra ~8 bytes per page overhead is real, and it is the
> theoretical maximum additional overhead that userfaultfd would require
> over a KVM-based demand paging alternative when we are using
> hugepages. Consider the case where we are using THPs and have just
> finished post-copy, and we haven't done any collapsing yet:
>
> For userfaultfd: because we have UFFDIO_COPY'd or UFFDIO_CONTINUE'd at
> 4K (because we demand-fetched at 4K), the userspace page tables are
> entirely shattered. KVM has no choice but to have an entirely
> shattered second-stage page table as well.
>
> For KVM demand paging: the userspace page tables can remain entirely
> populated, so we get PMD mappings here. KVM, though, uses 4K SPTEs
> because we have only just finished post-copy and haven't started
> collapsing yet.
>
> So both systems end up with a shattered second-stage page table, but
> userfaultfd has a shattered userspace page table as well (+8 bytes/4K
> if using THP, +another 8 bytes/2M if using HugeTLB-1G, etc.), and that
> is where the extra overhead comes from.
>
> The second mapping of guest memory that we use today (through which we
> install memory), given that we are using hugepages, will use PMDs and
> PUDs, so the overhead is minimal.
>
> Hope that clears things up!

Ah I see, thanks James.

Though, is this a real concern in production use, considering the
worst-case ~0.2% overhead (all THP-backed), which only exists during
postcopy, and only on the destination host?

In any case, I agree that's still a valid point, compared to a constant
1/32k consumption with a bitmap.

Thanks,

--
Peter Xu
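
[Editor's note: the overhead figures quoted in the thread can be
sanity-checked with a quick back-of-the-envelope calculation. This is my
own arithmetic, not from the original mails; the constants assume 8-byte
page-table entries, as on x86-64 and arm64.]

```python
# Sanity-check the per-page overhead figures discussed above.
# Assumption: 8-byte page-table entries (x86-64 / arm64).

PAGE_4K = 4 * 1024
PMD_2M = 2 * 1024 * 1024
PTE_SIZE = 8  # bytes per page-table entry

# userfaultfd with THP: the shattered userspace page table costs one
# extra 8-byte PTE per 4K page.
thp_overhead = PTE_SIZE / PAGE_4K  # 8/4096 = 1/512, i.e. ~0.2%

# HugeTLB-1G: an additional PMD entry per 2M region on top of that.
hugetlb_1g_overhead = PTE_SIZE / PAGE_4K + PTE_SIZE / PMD_2M

# Bitmap alternative: one bit per 4K page.
bitmap_overhead = 1 / (PAGE_4K * 8)  # = 1/32768, the "1/32k" figure

print(f"THP PTE overhead:    {thp_overhead:.4%}")
print(f"HugeTLB-1G overhead: {hugetlb_1g_overhead:.4%}")
print(f"bitmap overhead:     1/{round(1 / bitmap_overhead)} = {bitmap_overhead:.4%}")
```

The 8/4096 ratio is where the "~0.2% worst case" comes from, and the
1-bit-per-4K-page bitmap gives the constant 1/32k consumption mentioned
in the reply.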