Re: [PATCH 3/5] mm: hugetlb: free the 2nd vmemmap page associated with each HugeTLB page

On 7/26/21 2:16 PM, Matthew Wilcox wrote:
> On Wed, Jul 14, 2021 at 05:17:58PM +0800, Muchun Song wrote:
>> +static __always_inline struct page *page_head_if_fake(const struct page *page)
>> +{
>> +	if (!hugetlb_free_vmemmap_enabled)
>> +		return NULL;
>> +
>> +	/*
>> +	 * Only a struct page whose address is aligned to PAGE_SIZE can be
>> +	 * a fake head. The alignment check avoids accessing the fields
>> +	 * (e.g. compound_head) of @page[1], which could touch a (possibly)
>> +	 * cold cacheline in some cases.
>> +	 */
>> +	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
>> +	    test_bit(PG_head, &page->flags)) {
>> +		unsigned long head = READ_ONCE(page[1].compound_head);
>> +
>> +		if (likely(head & 1))
>> +			return (struct page *)(head - 1);
>> +	}
>> +
>> +	return NULL;
>> +}
> 
> Why return 'NULL' instead of 'page'?
> 
> This is going to significantly increase the cost of calling
> compound_page() (by whichever spelling it has).  That will make
> the folio patchset more compelling ;-)
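
For context, the hot path in question is presumably compound_head(); a
minimal sketch of how it might wrap page_head_if_fake() if the NULL
return is kept (the wrapper below is an illustration, not taken from
this series):

static inline struct page *compound_head(struct page *page)
{
	unsigned long head = READ_ONCE(page->compound_head);

	if (head & 1)
		return (struct page *)(head - 1);
	/*
	 * With a NULL return, every caller needs a fallback to the
	 * original page; returning @page instead would collapse this
	 * to a single tail call.
	 */
	return page_head_if_fake(page) ?: page;
}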

Matthew, any suggestions for benchmarks/workloads to measure the
increased overhead?  I suspect you have some ideas based on your folio
work.

My concern is that we are introducing overhead for code paths not
associated with this feature.  The next patch even tries to minimize
this overhead a bit if hugetlb_free_vmemmap_enabled is not set.
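Going further, a static branch would make the disabled check compile
down to a patched no-op rather than a load and test of a global.  A
rough sketch (the key name is hypothetical, not taken from this
series):

#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(hugetlb_free_vmemmap_key);

static __always_inline struct page *page_head_if_fake(const struct page *page)
{
	/*
	 * The branch is patched at runtime; when the key is off this
	 * costs only a no-op before returning NULL.
	 */
	if (!static_branch_unlikely(&hugetlb_free_vmemmap_key))
		return NULL;

	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
	    test_bit(PG_head, &page->flags)) {
		unsigned long head = READ_ONCE(page[1].compound_head);

		if (likely(head & 1))
			return (struct page *)(head - 1);
	}

	return NULL;
}
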
-- 
Mike Kravetz


