Re: [PATCH] mm: get pfn by page_to_pfn() instead of save in page->private

On Thu 18-10-18 21:04:29, Wei Yang wrote:
> It is not necessary to save the pfn in page->private.
> 
> The pfn can be retrieved by page_to_pfn() directly.

Yes it can, but a cursory look at the commit which introduced this
suggests that it is a micro-optimization. Mel would know more, of
course. There are some memory models where page_to_pfn() is close to free.

If that is the case, I am not really sure the change is measurable or
worth it. In any case, any change to this code should have a proper
justification. In other words, is this change really needed? Does it
help in any aspect, possibly readability? The only thing I can guess
from this changelog is that you read the code and stumbled over this.
If that is the case, I would recommend asking the author for the
motivation and potentially adding a comment to explain it better,
rather than sending a patch right away.

> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
> ---
> Maybe I missed some critical reason to save pfn to private.
> 
> Thanks in advance if someone could reveal the special reason.
> ---
>  mm/page_alloc.c | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 15ea511fb41c..a398eafbae46 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2793,24 +2793,19 @@ void free_unref_page(struct page *page)
>  void free_unref_page_list(struct list_head *list)
>  {
>  	struct page *page, *next;
> -	unsigned long flags, pfn;
> +	unsigned long flags;
>  	int batch_count = 0;
>  
>  	/* Prepare pages for freeing */
> -	list_for_each_entry_safe(page, next, list, lru) {
> -		pfn = page_to_pfn(page);
> -		if (!free_unref_page_prepare(page, pfn))
> +	list_for_each_entry_safe(page, next, list, lru)
> +		if (!free_unref_page_prepare(page, page_to_pfn(page)))
>  			list_del(&page->lru);
> -		set_page_private(page, pfn);
> -	}
>  
>  	local_irq_save(flags);
>  	list_for_each_entry_safe(page, next, list, lru) {
> -		unsigned long pfn = page_private(page);
> -
>  		set_page_private(page, 0);
>  		trace_mm_page_free_batched(page);
> -		free_unref_page_commit(page, pfn);
> +		free_unref_page_commit(page, page_to_pfn(page));
>  
>  		/*
>  		 * Guard against excessive IRQ disabled times when we get
> -- 
> 2.15.1
> 

-- 
Michal Hocko
SUSE Labs



