On 6/25/2013 11:44 AM, H. Peter Anvin wrote:
> On 06/25/2013 11:40 AM, Yinghai Lu wrote:
>> On Tue, Jun 25, 2013 at 11:17 AM, H. Peter Anvin <hpa@xxxxxxxxx> wrote:
>>> On 06/25/2013 10:35 AM, Mike Travis wrote:
>>
>>> However, please consider Ingo's counterproposal of doing this via the
>>> buddy allocator, i.e. hugepages being broken on demand.  That is a
>>> *very* powerful model, although it would require more infrastructure.
>>
>> Can you or Ingo elaborate more on the buddy allocator proposal?
>>
> Start by initializing 1G hugepages only, but mark them so that the
> allocator knows that if it needs to break them apart, it has to
> initialize the page structures for the 2M subpages.
>
> Same thing with 2M -> 4K.
>
> 	-hpa

It is worth experimenting with, but the big question is whether it still
avoids the very expensive memmap_init_zone() and its sub-functions on
huge expanses of memory.

I'll do some experimenting as soon as I can.  Our 32TB system is being
brought back to 16TB (we found a number of problems as we got closer and
closer to the 64TB limit), but that's still a significant size.

--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html