On Mon, Dec 12, 2022 at 11:32 AM Ben Gardon <bgardon@xxxxxxxxxx> wrote:
>
> On Thu, Dec 8, 2022 at 11:39 AM David Matlack <dmatlack@xxxxxxxxxx> wrote:
> >
> > Abstract away kvm_mmu_max_mapping_level(), which is an x86-specific
> > function for computing the max level that a given GFN can be mapped in
> > KVM's page tables. This will be used in a future commit to enable moving
> > the TDP MMU to common code.
> >
> > Provide a default implementation for non-x86 architectures that just
> > returns the max level. This will result in more zapping than necessary
> > when disabling dirty logging (i.e. less than optimal performance) but no
> > correctness issues.
>
> Apologies if you already implemented it in a later patch in this
> series, but would it not at least be possible to port
> host_pfn_mapping_level to common code and check that?
> I'm assuming, though I could be wrong, that all archs map GFNs with at
> most a host page table granularity mapping.
> I suppose that doesn't strictly need to be included in this series,
> but it would be worth addressing in the commit description.

It's not implemented later in this series, but I agree it's something
we should do. In fact, it's worth doing regardless of this series as a
way to share more code across architectures (e.g. KVM/ARM has its own
version in arch/arm64/kvm/mmu.c:get_user_mapping_size()).

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm