On Thu, Dec 03, 2020 at 09:42:54AM -0800, Andy Lutomirski wrote:
> I suspect that something much more clever, in which the heap is divided
> up into a few independently randomized sections and heap pages are
> randomized within those sections, might do much better.  There should
> certainly be a lot of room for something between what we have now and a
> fully randomized scheme.
>
> It might also be worth looking at what other OSes do.

How about dividing the address space up into 1GB sections (or, rather,
PUD_SIZE sections), allocating from each one until it's 50% full, then
choosing another one?  Sufficiently large allocations would ignore this
division and just look for any space.  I'm thinking of something like the
slab allocator (so the 1GB chunk would go back onto the allocatable list
when >50% of it was empty).  That might strike a happy medium between
full randomisation and efficient use of page tables / leaving large
chunks of address space free for large mmaps.
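
To make the policy concrete, here is a minimal userspace sketch of what
I have in mind.  It is not kernel code; the chunk count, the 50%
thresholds and all the names (chunk_alloc, pick_chunk, chunk_free) are
illustrative only, and "large" mappings simply fail here instead of
falling back to a normal search of the whole address space.

	#include <stdio.h>
	#include <stdlib.h>
	#include <stdint.h>

	#define CHUNK_SIZE	(1ULL << 30)	/* stand-in for PUD_SIZE */
	#define NR_CHUNKS	16		/* toy 16GB address space */

	struct chunk {
		uint64_t used;			/* bytes allocated in this chunk */
		int allocatable;		/* eligible to be picked? */
	};

	static struct chunk chunks[NR_CHUNKS];
	static int current = -1;		/* chunk we are currently filling */

	/* Randomly pick one of the chunks that is still less than 50% full. */
	static int pick_chunk(void)
	{
		int candidates[NR_CHUNKS], n = 0;

		for (int i = 0; i < NR_CHUNKS; i++)
			if (chunks[i].allocatable)
				candidates[n++] = i;

		return n ? candidates[rand() % n] : -1;
	}

	/* Bump-allocate from the current chunk until it crosses 50% full. */
	static uint64_t chunk_alloc(uint64_t size)
	{
		if (size >= CHUNK_SIZE / 2)	/* large mappings would bypass this scheme */
			return UINT64_MAX;

		if (current < 0 || chunks[current].used + size > CHUNK_SIZE)
			current = pick_chunk();
		if (current < 0)
			return UINT64_MAX;

		uint64_t addr = (uint64_t)current * CHUNK_SIZE + chunks[current].used;

		chunks[current].used += size;
		if (chunks[current].used >= CHUNK_SIZE / 2) {
			chunks[current].allocatable = 0;	/* >50% full: stop picking it */
			current = -1;
		}
		return addr;
	}

	/* On unmap, a chunk that becomes >50% empty goes back on the list. */
	static void chunk_free(uint64_t addr, uint64_t size)
	{
		struct chunk *c = &chunks[addr / CHUNK_SIZE];

		c->used -= size;
		if (c->used < CHUNK_SIZE / 2)
			c->allocatable = 1;
	}

	int main(void)
	{
		uint64_t first;

		srand(1);			/* deterministic for the demo */
		for (int i = 0; i < NR_CHUNKS; i++)
			chunks[i].allocatable = 1;

		first = chunk_alloc(4ULL << 20);
		printf("4MB mapping at %#llx\n", (unsigned long long)first);
		for (int i = 0; i < 3; i++)
			printf("4MB mapping at %#llx\n",
			       (unsigned long long)chunk_alloc(4ULL << 20));
		chunk_free(first, 4ULL << 20);
		return 0;
	}

The point of the sketch is only the bookkeeping: small mappings keep
landing in the same 1GB chunk (good for page table sharing and TLB
behaviour), and the randomisation only kicks in when a new chunk has to
be chosen, so an attacker can't predict which PUD-sized region the next
batch of allocations will come from.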