On Tue, 20 Jul 2021 18:23:02 +0100,
Alexandru Elisei <alexandru.elisei@xxxxxxx> wrote:
>
> Hi Marc,
>
> I just can't figure out why having the mmap lock is not needed to walk the
> userspace page tables. Any hints? Or am I not seeing where it's taken?

I trust Sean's explanation was complete enough!

> On 7/17/21 10:55 AM, Marc Zyngier wrote:
> > We currently rely on the kvm_is_transparent_hugepage() helper to
> > discover whether a given page has the potential to be mapped as
> > a block mapping.
> >
> > However, this API doesn't really give us everything we want:
> >
> > - we don't get the size: this is not crucial today as we only
> >   support PMD-sized THPs, but we'd like to have larger sizes
> >   in the future
> >
> > - we're the only user left of the API, and there is a will
> >   to remove it altogether
> >
> > To address the above, implement a simple walker using the existing
> > page table infrastructure, and plumb it into
> > transparent_hugepage_adjust(). No new page sizes are supported in
> > the process.
> >
> > Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> > ---
> >  arch/arm64/kvm/mmu.c | 46 ++++++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 42 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 3155c9e778f0..db6314b93e99 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -433,6 +433,44 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
> >  	return 0;
> >  }
> >
> > +static struct kvm_pgtable_mm_ops kvm_user_mm_ops = {
> > +	/* We shouldn't need any other callback to walk the PT */
> > +	.phys_to_virt		= kvm_host_va,
> > +};
> > +
> > +struct user_walk_data {
> > +	u32	level;
> > +};
> > +
> > +static int user_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> > +		       enum kvm_pgtable_walk_flags flag, void * const arg)
> > +{
> > +	struct user_walk_data *data = arg;
> > +
> > +	data->level = level;
> > +	return 0;
> > +}
> > +
> > +static int get_user_mapping_size(struct kvm *kvm, u64 addr)
> > +{
> > +	struct user_walk_data data;
> > +	struct kvm_pgtable pgt = {
> > +		.pgd		= (kvm_pte_t *)kvm->mm->pgd,
> > +		.ia_bits	= VA_BITS,
> > +		.start_level	= 4 - CONFIG_PGTABLE_LEVELS,
> > +		.mm_ops		= &kvm_user_mm_ops,
> > +	};
> > +	struct kvm_pgtable_walker walker = {
> > +		.cb		= user_walker,
> > +		.flags		= KVM_PGTABLE_WALK_LEAF,
> > +		.arg		= &data,
> > +	};
> > +
> > +	kvm_pgtable_walk(&pgt, ALIGN_DOWN(addr, PAGE_SIZE), PAGE_SIZE, &walker);
>
> I take it that it is guaranteed that kvm_pgtable_walk() will never
> fail? For example, I can see it failing if someone messes with
> KVM_PGTABLE_MAX_LEVELS.

But that's an architectural constant. How could it be messed with?
When we introduce 5 levels of page tables, we'll have to check all
this anyway.

> To be honest, I would rather have a check here instead of
> potentially feeding a bogus value to ARM64_HW_PGTABLE_LEVEL_SHIFT.
> It could be a VM_WARN_ON, so there's no runtime overhead unless
> CONFIG_DEBUG_VM is enabled.

Fair enough. That's easy enough to check.

> The patch looks good to me so far, but I want to give it another
> look (or two) after I figure out why the mmap semaphore is not
> needed.

Thanks,

	M.

--
Without deviation from the norm, progress is not possible.
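
[For concreteness, here is one way the check discussed above could
look. This is a sketch only: it takes the get_user_mapping_size()
from the quoted patch, checks kvm_pgtable_walk()'s return value with
the VM_WARN_ON() that Alexandru suggests (compiled out unless
CONFIG_DEBUG_VM is set), and assumes the final size is derived via
ARM64_HW_PGTABLE_LEVEL_SHIFT() as mentioned in the review, since the
quoted hunk is cut off before the function's return statement. It is
not necessarily the code that was eventually merged.]

/*
 * Sketch: warn (under CONFIG_DEBUG_VM only) if the walk of the
 * userspace page tables fails, instead of silently feeding a
 * stale or bogus level to ARM64_HW_PGTABLE_LEVEL_SHIFT().
 */
static int get_user_mapping_size(struct kvm *kvm, u64 addr)
{
	struct user_walk_data data;
	struct kvm_pgtable pgt = {
		.pgd		= (kvm_pte_t *)kvm->mm->pgd,
		.ia_bits	= VA_BITS,
		.start_level	= 4 - CONFIG_PGTABLE_LEVELS,
		.mm_ops		= &kvm_user_mm_ops,
	};
	struct kvm_pgtable_walker walker = {
		.cb		= user_walker,
		.flags		= KVM_PGTABLE_WALK_LEAF,
		.arg		= &data,
	};
	int ret;

	ret = kvm_pgtable_walk(&pgt, ALIGN_DOWN(addr, PAGE_SIZE),
			       PAGE_SIZE, &walker);
	VM_WARN_ON(ret);

	/* Size of the leaf mapping covering addr (4K, 2M or 1G
	 * with 4K pages), computed from the level recorded by
	 * user_walker(). */
	return BIT(ARM64_HW_PGTABLE_LEVEL_SHIFT(data.level));
}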