Re: [PATCH 4.19 1/7] mm: drop meminit_pfn_in_nid as it is redundant

On Fri, Jun 19, 2020 at 09:24:19AM -0400, Pavel Tatashin wrote:
> From: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
> 
> commit 56ec43d8b02719402c9fcf984feb52ec2300f8a5 upstream.
> 
> As best I can tell, the meminit_pfn_in_nid call is completely redundant.
> The deferred memory initialization already makes use of
> for_each_free_mem_range, which in turn calls into __next_mem_range,
> which will only return a memory range if it matches the node ID
> provided, assuming it is not NUMA_NO_NODE.
> 
> I am operating on the assumption that there are no zones or pg_data_t
> structures that have a NUMA node of NUMA_NO_NODE associated with them.  If
> that is the case then __next_mem_range will never return a memory range
> that doesn't match the zone's node ID, and as such the check is redundant.
> 
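
For reference, a minimal sketch of the walk described above (illustrative
only, not code from the patch; sketch_deferred_init() is a made-up name,
while for_each_free_mem_range(), __next_mem_range(), zone_to_nid() and
MEMBLOCK_NONE are the existing interfaces the commit message refers to):

/*
 * Sketch: deferred init walks free memblock ranges that are already
 * filtered by node, so an extra per-pfn node check buys nothing when
 * the zone's node is not NUMA_NO_NODE.
 * Assumes <linux/memblock.h>, <linux/mmzone.h> and <linux/pfn.h>.
 */
static unsigned long sketch_deferred_init(struct zone *zone)
{
	int nid = zone_to_nid(zone);	/* assumed != NUMA_NO_NODE */
	unsigned long nr_pages = 0;
	phys_addr_t start, end;
	u64 i;

	/* __next_mem_range() skips ranges whose node id != nid */
	for_each_free_mem_range(i, nid, MEMBLOCK_NONE, &start, &end, NULL) {
		unsigned long spfn = PFN_UP(start);
		unsigned long epfn = PFN_DOWN(end);

		/* every pfn in [spfn, epfn) already belongs to nid */
		if (epfn > spfn)
			nr_pages += epfn - spfn;
	}
	return nr_pages;
}
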
> One piece I would like to verify is whether this works for ia64.
> Technically ia64 was using a different approach to get the node ID, but it
> seems to have the node ID encoded into the memblock as well.  So I am
> assuming this is okay, but would like confirmation of that.
> 
> On my x86_64 test system with 384GB of memory per node I saw a reduction
> in initialization time from 2.80s to 1.85s as a result of this patch.
> 
> Link: http://lkml.kernel.org/r/20190405221219.12227.93957.stgit@localhost.localdomain
> Signed-off-by: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
> Reviewed-by: Pavel Tatashin <pavel.tatashin@xxxxxxxxxxxxx>
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: Dave Jiang <dave.jiang@xxxxxxxxx>
> Cc: David S. Miller <davem@xxxxxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Khalid Aziz <khalid.aziz@xxxxxxxxxx>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
> Cc: Laurent Dufour <ldufour@xxxxxxxxxxxxxxxxxx>
> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Cc: Mike Rapoport <rppt@xxxxxxxxxxxxxxxxxx>
> Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: <yi.z.zhang@xxxxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> ---
>  mm/page_alloc.c | 51 ++++++++++++++-----------------------------------
>  1 file changed, 14 insertions(+), 37 deletions(-)

Given the recent changes backported to 4.19.y, is this series still
needed?

If so, can you please regenerate it and resend, as it does not apply to
the current 4.19.y tree?

thanks,

greg k-h
