On Tue, Nov 23, 2021 at 12:51 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Mon, Nov 22, 2021 at 04:01:02PM -0800, Mina Almasry wrote:
> > Add PM_THP_MAPPED MAPPING to allow userspace to detect whether a given virt
> > address is currently mapped by a transparent huge page or not. Example
> > use case is a process requesting THPs from the kernel (via a huge tmpfs
> > mount for example), for a performance critical region of memory. The
> > userspace may want to query whether the kernel is actually backing this
> > memory by hugepages or not.
>
> So you want this bit to be clear if the memory is backed by a hugetlb
> page?
>

Yes, I believe so. I don't see value in telling userspace that the virt
address is backed by a hugetlb page: if the memory is mapped with
MAP_HUGETLB or is backed by a hugetlb file, then it is by definition
backed by hugetlb pages, and there is no ambiguity from the kernel here.

Additionally, hugetlb interfaces are size based rather than PMD-or-not.
arm64, for example, supports 64K, 2MB, 32MB and 1G 'huge' pages, and it
is an implementation detail that those sizes are mapped as CONTIG PTE,
PMD, CONTIG PMD, and PUD respectively; the specific mapping mechanism is
typically not exposed to userspace and may not be stable. Assuming
pagemap_hugetlb_range() == PMD_MAPPED would not technically be correct.

> >  		if (page && page_mapcount(page) == 1)
> >  			flags |= PM_MMAP_EXCLUSIVE;
> > +		if (page && is_transparent_hugepage(page))
> > +			flags |= PM_THP_MAPPED;
>
> because honestly i'd expect it to be more useful to mean "This memory
> is mapped by a PMD entry" and then the code would look like:
>
> 	if (page)
> 		flags |= PM_PMD_MAPPED;
>
> (and put a corresponding change in pagemap_hugetlb_range)
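
For concreteness, here is a minimal userspace sketch of how a process
could test such a bit for one of its own virtual addresses via
/proc/self/pagemap. This is illustration only, not part of the patch:
PM_THP_MAPPED_BIT below is an assumed placeholder, and the real bit
position/name would come from whatever the final uapi defines.

	#include <fcntl.h>
	#include <stdint.h>
	#include <sys/types.h>
	#include <unistd.h>

	/* Assumed bit position for illustration only; not the final uapi value. */
	#define PM_THP_MAPPED_BIT	57

	/*
	 * Returns 1 if the pagemap entry covering 'addr' has the (proposed)
	 * THP-mapped bit set, 0 if not, -1 on error. Each pagemap entry is
	 * a 64-bit word indexed by virtual page number.
	 */
	static int vaddr_is_thp_mapped(const void *addr)
	{
		uint64_t entry;
		long page_size = sysconf(_SC_PAGESIZE);
		off_t offset = ((uintptr_t)addr / page_size) * sizeof(entry);
		int fd = open("/proc/self/pagemap", O_RDONLY);

		if (fd < 0)
			return -1;
		if (pread(fd, &entry, sizeof(entry), offset) !=
		    (ssize_t)sizeof(entry)) {
			close(fd);
			return -1;
		}
		close(fd);
		return !!(entry & (1ULL << PM_THP_MAPPED_BIT));
	}

A process that requested THPs (e.g. via a huge tmpfs mount) could call
this on the start of its performance-critical region after faulting it
in, to see whether the kernel actually backed it with a huge page.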