Re: [External] Re: [PATCH v3 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page

On Tue, Nov 10, 2020 at 5:48 PM Oscar Salvador <osalvador@xxxxxxx> wrote:
>
> On Tue, Nov 10, 2020 at 02:40:54PM +0800, Muchun Song wrote:
> > Only the first HugeTLB page should split the PMD to PTE. The other 63
> > HugeTLB pages
> > do not need to split. Here I want to make sure we are the first.
>
> I think terminology is losing me here.
>
> Say you allocate a 2MB HugeTLB page at ffffea0004100000.
>
> The vmemmap range that represents this is ffffea0004000000 - ffffea0004200000.
> That is a 2MB chunk PMD-mapped.
> So, in order to free some of those vmemmap pages, we need to break down
> that area, remapping it to PTE-based.
> I know what you mean, but we are not really splitting hugetlb pages, but
> rather the vmemmap range that represents them.

Yeah, you are right. We are splitting the vmemmap instead of hugetlb.
Sorry for the confusion.

>
> About:
>
> "Only the first HugeTLB page should split the PMD to PTE. The other 63
> HugeTLB pages
> do not need to split. Here I want to make sure we are the first."
>
> That only refers to gigantic pages, right?

Yes, for now it only refers to gigantic pages. Originally, I also wanted to
merge the vmemmap PTEs back into a PMD for normal 2MB HugeTLB pages,
which is why I introduced those macros (e.g. freed_vmemmap_hpage). For 2MB
HugeTLB pages, I haven't found an elegant solution yet. Hopefully, once you
or someone else has read the whole patch series, we can come up with an
elegant way to merge the PTEs.

Thanks.

>
> > > > +static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> > > > +{
> > > > +     pmd_t *pmd;
> > > > +     spinlock_t *ptl;
> > > > +     LIST_HEAD(free_pages);
> > > > +
> > > > +     if (!free_vmemmap_pages_per_hpage(h))
> > > > +             return;
> > > > +
> > > > +     pmd = vmemmap_to_pmd(head);
> > > > +     ptl = vmemmap_pmd_lock(pmd);
> > > > +     if (vmemmap_pmd_huge(pmd)) {
> > > > +             VM_BUG_ON(!pgtable_pages_to_prealloc_per_hpage(h));
> > >
> > > I think that checking for free_vmemmap_pages_per_hpage is enough.
> > > In the end, pgtable_pages_to_prealloc_per_hpage uses free_vmemmap_pages_per_hpage.
> >
> > The free_vmemmap_pages_per_hpage is not enough. See the comments above.
>
> My comment was about the VM_BUG_ON.

Sorry, yeah, we can drop it. Thanks.

>
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


