On Thu, Aug 06, 2020 at 06:15:00PM +0100, Matthew Wilcox wrote:
> On Thu, Aug 06, 2020 at 05:53:10PM +0200, Vlastimil Babka wrote:
> > On 8/6/20 5:39 PM, Matthew Wilcox wrote:
> > >> >> +++ b/mm/huge_memory.c
> > >> >> @@ -2125,7 +2125,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> > >> >>  		 * Set PG_double_map before dropping compound_mapcount to avoid
> > >> >>  		 * false-negative page_mapped().
> > >> >>  		 */
> > >> >> -		if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
> > >> >> +		if (head_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
> > >> >
> > >> > I'm a little nervous about this one.  The page does actually come from
> > >> > pmd_page(), and today that's guaranteed to be a head page.  But I'm
> > >> > not convinced that's going to still be true in twenty years.  With the
> > >> > current THP patchset, I won't allocate pages larger than PMD order, but
> > >> > I can see there being interest in tracking pages in chunks larger than
> > >> > 2MB in the future.  And then pmd_page() might well return a tail page.
> > >> > So it might be a good idea to not convert this one.
> > >>
> > >> Hmm, the function converts the compound mapcount of the whole page into
> > >> HPAGE_PMD_NR base-page mapcounts. If the compound page were suddenly bigger
> > >> than a PMD, then I guess this wouldn't work properly anymore without changes
> > >> anyway? Maybe we could stick something like VM_BUG_ON(PageTransHuge(page))
> > >> there as "enforced documentation" for now?
> > >
> > > I think it would work as-is.  But also I may have totally misunderstood it.
> > > I'll write this declaratively and specifically for x86 (PMD order is 9)
> > > ... tell me when I've made a mistake ;-)
> > >
> > > This function is for splitting the PMD.  We're leaving the underlying
> > > page intact and just changing the page table.  So if, say, we have an
> > > underlying 4MB page (and maybe the pages are mapped as PMDs in this
> > > process), we might get subpage number 512 of this order-10 page.  We'd
> > > need to check the DoubleMap bit on subpage 1, and the compound_mapcount
> > > also stored in page 1, but we'd only want to spread the mapcount out
> > > over the 512 subpages from 512-1023; we wouldn't want to spread it out
> > > over 0-511 because they aren't affected by this particular PMD.
> >
> > Yeah, and then we decrease the compound mapcount, which is a counter of "how
> > many times is this compound page mapped as a whole". But we only removed (the
> > second) half of the compound mapping, so IMHO that would be wrong?
>
> I'd expect that count to be incremented by 1 for each PMD that it's
> mapped to?  ie change the definition of that counter slightly.
>
> > > Having to reason about stuff like this is why I limited the THP code to
> > > stop at PMD order ... I don't want to make my life even more complicated
> > > than I have to!
> >
> > Kirill might correct me, but I'd expect the THP code right now has baked in
> > many assumptions about THP pages being exactly HPAGE_PMD_ORDER large?

That will be true for PMD-mapped THP pages after applying Matthew's
patchset.

> There are somewhat fewer places that make that assumption after applying
> the ~80 patches here ... http://git.infradead.org/users/willy/pagecache.git

The patchset allows for THP to be anywhere between order-2 and order-9
(on x86-64).

> I have mostly not touched the anonymous THPs (obviously some of the code
> paths are shared), although both Kirill & I think there's a win to be
> had there too.

Yeah.
Reducing LRU handling overhead alone should be enough to justify the
effort. But we would still need to have numbers.

-- 
 Kirill A. Shutemov
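
[For readers following the thread: the helpers at issue look roughly like
this. This is a sketch paraphrased from the 5.8-era include/linux/mm.h
plus Matthew's patch, not the verbatim source. head_mapcount() reads the
compound mapcount stored in the first tail page and, unlike
compound_mapcount(), never walks back from a tail page to the head --
which is why it is only safe as long as pmd_page() is guaranteed to
return a head page, the exact worry raised above.]

static inline atomic_t *compound_mapcount_ptr(struct page *page)
{
	/* The compound mapcount of a THP lives in the first tail page. */
	return &page[1].compound_mapcount;
}

static inline int head_mapcount(struct page *head)
{
	/* Caller must pass the head page; no compound_head() walk here. */
	return atomic_read(compound_mapcount_ptr(head)) + 1;
}

static inline int compound_mapcount(struct page *page)
{
	VM_BUG_ON_PAGE(!PageCompound(page), page);
	page = compound_head(page);	/* a tail page is fine here */
	return head_mapcount(page);
}

[And the hunk being patched, shown with its surrounding context from
__split_huge_pmd_locked() -- again a sketch of the 5.8-era code with the
patch applied. The compound mapcount is spread over the HPAGE_PMD_NR
subpages covered by this PMD; Matthew's order-10 example above is about
which subpages this loop would need to touch: for a 4MB page split at
its second PMD, only subpages 512-1023, not 0-511.]

	/*
	 * Set PG_double_map before dropping compound_mapcount to avoid
	 * false-negative page_mapped().
	 */
	if (head_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
		for (i = 0; i < HPAGE_PMD_NR; i++)
			atomic_inc(&page[i]._mapcount);
	}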