Re: [RFC PATCH 2/4] mm/hotplug: walk_memroy_range on memory_block uit

On Mon, Jun 26, 2017 at 12:32:40AM -0700, John Hubbard wrote:
>On 06/24/2017 07:52 PM, Wei Yang wrote:
>> hotplug memory range is memory_block aligned and walk_memroy_range guarded
>> with check_hotplug_memory_range(). This is save to iterate on the
>> memory_block base.> 
>> This patch adjust the iteration unit and assume there is not hole in
>> hotplug memory range.
>
>Hi Wei,
>
>In the patch subject, s/memroy/memory/ , and s/uit/unit/, and
>s/save/safe.
>

Nice.

>Actually, I still have a tough time with it, so maybe the 
>description above could instead be worded approximately
>like this:
>
>Given that a hotpluggable memory range is now block-aligned,
>it is safe for walk_memory_range to iterate by blocks.
>
>Change walk_memory_range() so that it iterates at block
>boundaries, rather than section boundaries. Also, skip the check
>for whether pages are present in the section, and assume 
>that there are no holes in the range. (<Insert reason why 
>that is safe, here>)
>

Sounds better than mine :-)

>
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
>> ---
>>  mm/memory_hotplug.c | 10 ++--------
>>  1 file changed, 2 insertions(+), 8 deletions(-)
>> 
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index f5d06afc8645..a79a83ec965f 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1858,17 +1858,11 @@ int walk_memory_range(unsigned long start_pfn, unsigned long end_pfn,
>>  	unsigned long pfn, section_nr;
>>  	int ret;
>>  
>> -	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
>> +	for (pfn = start_pfn; pfn < end_pfn;
>> +		pfn += PAGES_PER_SECTION * sections_per_block) {
>
>Here, and in one or two other spots in the patch, it would be nice
>to repeat your approach from patch 0001, where you introduced a
>pages_per_block variable. That definitely helps when reading the code.
>

Sounds nice, let me introduce a pages_per_block variable.

>>  		section_nr = pfn_to_section_nr(pfn);
>> -		if (!present_section_nr(section_nr))
>> -			continue;
>
>Why is it safe to assume no holes in the memory range? (Maybe Michal's 
>patch already covered this and I haven't got that far yet?)
>
>The documentation for this routine says that it walks through all
>present memory sections in the range, so it seems like this patch
>breaks that.
>

Hmm... it is a little bit hard to describe.

First, the documentation of the function is a little misleading. If you look
at the code, it calls "func" only once per memory_block, not once for every
present mem_section as the comment says. So having some memory present in the
memory_block is enough to meet the requirement.

Second, after the check added in patch 1, the range is guaranteed to be
memory_block aligned, which means each memory_block in it must contain some
memory. It would be strange for someone to claim to add a memory range that
contains no real memory.

This is why I removed the check here.


>>  
>>  		section = __nr_to_section(section_nr);
>> -		/* same memblock? */
>> -		if (mem)
>> -			if ((section_nr >= mem->start_section_nr) &&
>> -			    (section_nr <= mem->end_section_nr))
>> -				continue;
>
>Yes, that deletion looks good.
>

From this we can see that as long as there IS some memory in the block, the
function will be invoked, and invoked only once per block.

>thanks,
>john h
>
>>  
>>  		mem = find_memory_block_hinted(section, mem);
>>  		if (!mem)
>> 

-- 
Wei Yang
Help you, Help me


