Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page


On Sun, Dec 13, 2020 at 11:45:26PM +0800, Muchun Song wrote:
> +
> +/*
> + * vmemmap_rmap_walk - walk vmemmap page table
> + *
> + * @rmap_pte:		called for each non-empty PTE (lowest-level) entry.
> + * @reuse:		the page which is reused for the tail vmemmap pages.
> + * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
> + */
> +struct vmemmap_rmap_walk {
> +	void (*rmap_pte)(pte_t *pte, unsigned long addr,
> +			 struct vmemmap_rmap_walk *walk);
> +	struct page *reuse;
> +	struct list_head *vmemmap_pages;
> +};

Why did you choose this approach in this version?
Earlier versions of this patchset had a single vmemmap_to_pmd() function
which returned the PMD, whereas now we have several vmemmap_{levels}_range
helpers and a vmemmap_rmap_walk.
A brief explanation of why this change was introduced would have been nice.

I guess it is because earlier versions were too tailored to the use case
this patchset presents, while the new version tries to be more general so
the interface can be reused in the future?


-- 
Oscar Salvador
SUSE L3


