Re: [RFC v3 0/5] Transparent on-demand struct page initialization embedded in the buddy allocator

On Wed, Aug 14, 2013 at 01:05:56PM +0200, Ingo Molnar wrote:
> 
> * Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> 
> > [...]
> > 
> > Ok, so I don't know all the issues, and in many ways I don't even really 
> > care. You could do it other ways, I don't think this is a big deal. The 
> > part I hate is the runtime hook into the core MM page allocation code, 
> > so I'm just throwing out any random thing that comes to my mind that 
> > could be used to avoid that part.
> 
> So, my hope was that it's possible to have a single, simple, zero-cost 
> runtime check [zero cost for already initialized pages], because it can be 
> merged into already existing page flag mask checks present here and 
> executed for every freshly allocated page:
> 
> static inline int check_new_page(struct page *page)
> {
>         if (unlikely(page_mapcount(page) |
>                 (page->mapping != NULL)  |
>                 (atomic_read(&page->_count) != 0)  |
>                 (page->flags & PAGE_FLAGS_CHECK_AT_PREP) |
>                 (mem_cgroup_bad_page_check(page)))) {
>                 bad_page(page);
>                 return 1;
>         }
>         return 0;
> }
> 
> We already run this for every new page allocated and the initialization 
> check could hide in PAGE_FLAGS_CHECK_AT_PREP in a zero-cost fashion.
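
Concretely, I read that suggestion as something like the below. PG_uninitialized
and __init_single_page() are made-up names, not existing kernel code, and it
assumes the boot-time clearing leaves PG_uninitialized set in page->flags
(rather than a pure zero) and that PAGE_FLAGS_CHECK_AT_PREP grows to cover
that bit:

static inline int check_new_page(struct page *page)
{
        if (unlikely(page_mapcount(page) |
                (page->mapping != NULL)  |
                (atomic_read(&page->_count) != 0)  |
                (page->flags & PAGE_FLAGS_CHECK_AT_PREP) |
                (mem_cgroup_bad_page_check(page)))) {
                /* slow path: tell "uninitialized" apart from "bad" */
                if (test_and_clear_bit(PG_uninitialized, &page->flags)) {
                        __init_single_page(page);
                        return 0;
                }
                bad_page(page);
                return 1;
        }
        return 0;
}

Already-initialized pages never take the unlikely branch, so the fast path
pays nothing beyond the mask check that is there today.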
> 
> I'd not do any of the ensure_page_is_initialized() or 
> __expand_page_initialization() complications in this patch-set - each page 
> head represents only itself and gets initialized when check_new_page() runs 
> on it.
> 
> During regular bootup we'd initialize like before, except we don't set up 
> the page heads but memset() them to zero. With each page head 32 bytes 
> this would mean 8 GB of page head memory to clear per 1 TB - with 16 TB 
> that's 128 GB to clear - that ought to be possible to do rather quickly, 
> perhaps with some smart SMP cross-call approach that makes sure that each 
> memset is done in a node-local fashion. [*]
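
A rough sketch of that node-local clearing could look like the below. This
assumes a flat node_mem_map for simplicity (with SPARSEMEM you would walk the
sections instead) and hand-waves the bootstrap ordering of workqueues vs.
memmap setup:

static struct work_struct memmap_work[MAX_NUMNODES];

static void clear_node_memmap(struct work_struct *work)
{
        pg_data_t *pgdat = NODE_DATA(work - memmap_work);

        /* one bulk clear per node: ~8 GB of page heads per TB of RAM */
        memset(pgdat->node_mem_map, 0,
               pgdat->node_spanned_pages * sizeof(struct page));
}

static void __init clear_all_memmaps(void)
{
        int nid;

        for_each_online_node(nid) {
                /* run the memset on a CPU local to this node's memory */
                int cpu = cpumask_first(cpumask_of_node(nid));

                INIT_WORK(&memmap_work[nid], clear_node_memmap);
                schedule_work_on(cpu, &memmap_work[nid]);
        }
        flush_scheduled_work();     /* wait for all nodes to finish */
}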
> 
> Such an approach should IMO be far smaller and less invasive than the 
> patches presented so far: it should be below 100 lines or so.
> 
> I don't know why there's such a big difference between the theory I 
> outlined and the invasive patch-set implemented so far in practice, 
> perhaps I'm missing some complication. I was trying to probe that 
> difference, before giving up on the idea and punting back to the async 
> hotplug-ish approach which would obviously work well too.
> 

The reason, which I failed to mention, is that once we pull a page off the lru
in either __rmqueue_fallback() or __rmqueue_smallest(), the first thing we do
with it is expand() or sometimes move_freepages().  Those trip over a BUG_ON
or a VM_BUG_ON before check_new_page() ever sees the page.

Those BUG_ONs are what keep driving me back into the ensure/expand foolishness.
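
For reference, here is roughly the expand() this series runs into
(mm/page_alloc.c of this era):

static inline void expand(struct zone *zone, struct page *page,
        int low, int high, struct free_area *area,
        int migratetype)
{
        unsigned long size = 1 << high;

        while (high > low) {
                area--;
                high--;
                size >>= 1;
                VM_BUG_ON(bad_range(zone, &page[size]));
                list_add(&page[size].lru, &area->free_list[migratetype]);
                area->nr_free++;
                set_page_order(&page[size], high);
        }
}

bad_range() ends up calling page_zone(), which decodes the node and zone from
page->flags; on a page head that was only memset() to zero those bits point at
zone 0 of node 0, so the VM_BUG_ON fires on the very first buddy split of an
uninitialized range - long before any check in check_new_page() gets a chance
to run.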

Nate

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <email@xxxxxxxxx>



