New helper to free highmem pages in larger chunks

Hi,

I noticed increased boot time when enabling highmem for ARC. It turns out that
freeing highmem pages into the buddy allocator is done one page at a time, while
it is batched for low-mem pages. The call flows are shown below.

I'm thinking of writing a free_highmem_pages() which takes a start and end pfn,
and I'd like to solicit ideas on whether to write it from scratch or, preferably,
to call the existing __free_pages_memory() to reuse its logic for converting a
pfn range into {pfn, order} tuples.

For the latter, however, there are semantic differences, which you can see below
and which I'm not sure about:
  - highmem page->count is set to 1, while it is 0 for low mem
  - the page reserved flag is cleared atomically vs. non-atomically


mem_init
     for (tmp = min_high_pfn; tmp < max_pfn; tmp++)
	free_highmem_page(pfn_to_page(tmp));
	     __free_reserved_page
		ClearPageReserved(page);   <--- atomic
		init_page_count(page);  <-- _count = 1
		__free_page(page);    <-- free SINGLE page


     free_all_bootmem
	free_low_memory_core_early
	   __free_memory_core(start, end)
	       __free_pages_memory(s_pfn, e_pfn) <- creates "order" sized batches
		    __free_pages_bootmem(pfn, order)
		        __free_pages_boot_core(start_page, start_pfn, order)
				loops from 0 to (1 << order)
				    __ClearPageReserved(p);   <-- non atomic
				    set_page_count(p, 0);  <--- _count = 0

				__free_pages(page, order);    <--- free BATCH


