Re: [PATCH] mm/sparse: only sub-section aligned range would be populated

On 01.07.20 04:32, Wei Yang wrote:
> On Tue, Jun 30, 2020 at 02:52:35PM +0200, David Hildenbrand wrote:
>> On 30.06.20 04:14, Wei Yang wrote:
>>> There are two code paths that invoke __populate_section_memmap():
>>>
>>>   * sparse_init_nid()
>>>   * sparse_add_section()
>>>
>>> In both cases, we are sure the memory range is sub-section aligned:
>>>
>>>   * we pass PAGES_PER_SECTION to sparse_init_nid()
>>>   * we check the range with check_pfn_span() before calling
>>>     sparse_add_section()
>>>
>>> Also, in the counterpart (removal) path of __populate_section_memmap(),
>>> we don't do such a calculation and check, since the range is already
>>> checked by check_pfn_span() in __remove_pages().
>>>
>>> Remove the calculation and check to keep the code simple and consistent
>>> with its counterpart.
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
>>> ---
>>>  mm/sparse-vmemmap.c | 16 ++--------------
>>>  1 file changed, 2 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>>> index 0db7738d76e9..24b01ebae111 100644
>>> --- a/mm/sparse-vmemmap.c
>>> +++ b/mm/sparse-vmemmap.c
>>> @@ -247,20 +247,8 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
>>>  struct page * __meminit __populate_section_memmap(unsigned long pfn,
>>>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
>>>  {
>>> -	unsigned long start;
>>> -	unsigned long end;
>>> -
>>> -	/*
>>> -	 * The minimum granularity of memmap extensions is
>>> -	 * PAGES_PER_SUBSECTION as allocations are tracked in the
>>> -	 * 'subsection_map' bitmap of the section.
>>> -	 */
>>> -	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
>>> -	pfn &= PAGE_SUBSECTION_MASK;
>>> -	nr_pages = end - pfn;
>>> -
>>> -	start = (unsigned long) pfn_to_page(pfn);
>>> -	end = start + nr_pages * sizeof(struct page);
>>> +	unsigned long start = (unsigned long) pfn_to_page(pfn);
>>> +	unsigned long end = start + nr_pages * sizeof(struct page);
>>>  
>>>  	if (vmemmap_populate(start, end, nid, altmap))
>>>  		return NULL;
>>>
>>
>> Can we add a WARN_ON_ONCE to catch mis-use in the future?
>>
>> if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
>>                  !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
>> 	return NULL;
> 
> How about adding this to both population and depopulation?

We don't have a similar wrapper for vmemmap_free().
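
On the populate side, the check could go straight into
__populate_section_memmap() on top of your patch; roughly (untested
sketch, the tail of the function recalled from memory):

struct page * __meminit __populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	unsigned long start = (unsigned long) pfn_to_page(pfn);
	unsigned long end = start + nr_pages * sizeof(struct page);

	/*
	 * Callers are expected to pass sub-section aligned ranges
	 * (see the commit message); catch future mis-use early.
	 */
	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
			 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
		return NULL;

	if (vmemmap_populate(start, end, nid, altmap))
		return NULL;

	return pfn_to_page(pfn);
}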
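
On the remove side, alignment is already enforced for every caller by
check_pfn_span() in __remove_pages(), which (roughly, from memory)
looks like:

static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
		const char *reason)
{
	unsigned long min_align;

	if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
		min_align = PAGES_PER_SUBSECTION;
	else
		min_align = PAGES_PER_SECTION;
	if (!IS_ALIGNED(pfn, min_align) ||
	    !IS_ALIGNED(nr_pages, min_align)) {
		/* reject misaligned hot(un)plug requests early */
		WARN(1, "Misaligned %s: start %#lx nr_pages %#lx\n",
		     reason, pfn, nr_pages);
		return -EINVAL;
	}
	return 0;
}

So vmemmap_free() never sees a misaligned range from the hotplug path,
and a second WARN there would be redundant.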

-- 
Thanks,

David / dhildenb




