On 08/11/2017 05:37 AM, Michal Hocko wrote:
On Mon 07-08-17 16:38:39, Pavel Tatashin wrote:
In deferred_init_memmap() where all deferred struct pages are initialized
we have a check like this:
	if (page->flags) {
		VM_BUG_ON(page_zone(page) != zone);
		goto free_range;
	}
This way we check whether the current deferred page has already been
initialized. It works because the memory for struct pages has been zeroed,
and the only way flags can be non-zero is if the page went through
__init_single_page() before. But once we change the current behavior and
no longer zero that memory in the memblock allocator, we cannot trust
anything inside struct page until it is initialized. This patch fixes that.
This patch defines a new accessor, memblock_get_reserved_pfn_range(),
which returns successive ranges of reserved PFNs. deferred_init_memmap()
calls it to determine whether a PFN and its struct page have already been
initialized.
Why don't we simply check the pfn against pgdat->first_deferred_pfn?
Because we are initializing deferred pages, and all of them have a pfn
greater than pgdat->first_deferred_pfn. However, some of the deferred
pages were already initialized if they were reserved, via this path:
	mem_init()
	 free_all_bootmem()
	  free_low_memory_core_early()
	   for_each_reserved_mem_region()
	    reserve_bootmem_region()
	     init_reserved_page() <- if this is a deferred reserved page
	      __init_single_pfn()
	       __init_single_page()
So, currently, we use the value of page->flags to figure out whether a
page in the deferred range has already been initialized, but this is not
going to work for this project: since we no longer zero the memory backing
the struct pages, the value of page->flags can be anything.