Hi Alex,

Alexandru Elisei <alexandru.elisei@xxxxxxx> writes:

> Hi Punit,
>
> Thank you for having a look!
>
> On 9/11/20 9:34 AM, Punit Agrawal wrote:
>> Hi Alexandru,
>>
>> Alexandru Elisei <alexandru.elisei@xxxxxxx> writes:
>>
>>> When userspace uses hugetlbfs for the VM memory, user_mem_abort() tries to
>>> use the same block size to map the faulting IPA in stage 2. If stage 2
>>> cannot use the same block mapping because the block size doesn't fit in the
>>> memslot or the memslot is not properly aligned, user_mem_abort() will fall
>>> back to a page mapping, regardless of the block size. We can do better for
>>> PUD backed hugetlbfs by checking if a PMD block mapping is supported before
>>> deciding to use a page.
>>
>> I think this was discussed in the past.
>>
>> I have a vague recollection of there being a problem if the user and
>> stage 2 mappings go out of sync - can't recall the exact details.
>
> I'm not sure what you mean by the two tables going out of sync. I'm looking at
> Documentation/vm/unevictable-lru.rst and this is what it says regarding hugetlbfs:
>
> "VMAs mapping hugetlbfs page are already effectively pinned into memory. We
> neither need nor want to mlock() these pages. However, to preserve the prior
> behavior of mlock() - before the unevictable/mlock changes - mlock_fixup() will
> call make_pages_present() in the hugetlbfs VMA range to allocate the huge pages
> and populate the ptes."
>
> Please correct me if I'm wrong, but my interpretation is that once a hugetlbfs
> page has been mapped in a process' address space, the only way to unmap it is via
> munmap. If that's the case, the KVM mmu notifier should take care of unmapping
> from stage 2 the entire memory range addressed by the hugetlbfs pages,
> right?

You're right - I managed to confuse myself. Thinking about it with a bit
more context, I don't see a problem with what the patch is doing.

Apologies for the noise.

>>
>> Putting it out there in case anybody else on the thread can recall the
>> details of the previous discussion (offlist).
>>
>> Though things may have changed and if it passes testing - then maybe I
>> am mis-remembering. I'll take a closer look at the patch and shout out
>> if I notice anything.
>
> The test I ran was to boot a VM and run ltp (with printk's sprinkled in the host
> kernel to see what page size and where it gets mapped/unmapped at stage 2). Do you
> mind recommending other tests that I might run?

You may want to put the changes through VM save / restore and / or live
migration. It should help catch any issues with transitioning from
hugepages to regular pages.

Hope that helps.

Thanks,
Punit

[...]
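
To make the fallback concrete, below is a minimal userspace sketch of the
decision being discussed: when the memslot cannot take the PUD-sized block
backing the hugetlbfs VMA, try a PMD block before giving up and using pages.
block_fits_memslot() and the hard-coded sizes are illustrative stand-ins for
the kernel's fault_supports_stage2_huge_mapping() and the granule-dependent
PUD_SIZE/PMD_SIZE macros; this models the idea, it is not the patch itself.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE (4UL << 10)          /* 4KB granule */
#define PMD_SIZE  (2UL << 20)          /* 2MB block with 4KB pages */
#define PUD_SIZE  (1UL << 30)          /* 1GB block with 4KB pages */

/*
 * Stand-in for fault_supports_stage2_huge_mapping(): a block mapping is
 * only usable when the userspace address and the IPA share the same
 * offset within the block, and the whole block lies inside the memslot.
 */
static bool block_fits_memslot(unsigned long hva, unsigned long ipa,
			       unsigned long slot_ipa, unsigned long slot_size,
			       unsigned long map_size)
{
	unsigned long block_start = ipa & ~(map_size - 1);

	if ((hva & (map_size - 1)) != (ipa & (map_size - 1)))
		return false;
	return block_start >= slot_ipa &&
	       block_start + map_size <= slot_ipa + slot_size;
}

/*
 * The fallback under discussion: start from the hugetlbfs page size
 * backing the VMA and step down PUD -> PMD -> page, rather than jumping
 * straight from PUD to a page mapping.
 */
static unsigned long stage2_map_size(unsigned long vma_pagesize,
				     unsigned long hva, unsigned long ipa,
				     unsigned long slot_ipa,
				     unsigned long slot_size)
{
	if (vma_pagesize == PUD_SIZE &&
	    block_fits_memslot(hva, ipa, slot_ipa, slot_size, PUD_SIZE))
		return PUD_SIZE;
	if (vma_pagesize >= PMD_SIZE &&
	    block_fits_memslot(hva, ipa, slot_ipa, slot_size, PMD_SIZE))
		return PMD_SIZE;
	return PAGE_SIZE;
}

int main(void)
{
	/*
	 * Memslot: 1GB of IPA space starting at 1GB + 2MB, i.e.
	 * PMD-aligned but not PUD-aligned.
	 */
	unsigned long slot_ipa = PUD_SIZE + PMD_SIZE;
	unsigned long slot_size = PUD_SIZE;
	/* Fault inside the slot, backed by a 1GB hugetlbfs page. */
	unsigned long ipa = slot_ipa + 3 * PMD_SIZE;
	unsigned long hva = ipa;

	printf("mapped at stage 2 with %lu KB granularity\n",
	       stage2_map_size(PUD_SIZE, hva, ipa, slot_ipa, slot_size) >> 10);
	return 0;
}

With the pre-patch behaviour, the misaligned memslot in main() would force
the fault down to 4KB pages; stepping down to a PMD block instead keeps the
stage 2 mapping at 2MB.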