On Tue, Dec 17, 2013 at 9:47 AM, Alex Thorlton <athorlton@xxxxxxx> wrote:
> On Tue, Dec 17, 2013 at 08:54:10AM -0800, Andy Lutomirski wrote:
>> On Tue, Dec 17, 2013 at 8:04 AM, Alex Thorlton <athorlton@xxxxxxx> wrote:
>> > On Mon, Dec 16, 2013 at 05:43:40PM -0800, Andy Lutomirski wrote:
>> >> On Mon, Dec 16, 2013 at 9:12 AM, Alex Thorlton <athorlton@xxxxxxx> wrote:
>> >> >> Please cc Andrea on this.
>> >> >
>> >> > I'm going to clean up a few small things for a v2 pretty soon, I'll
>> >> > be sure to cc Andrea there.
>> >> >
>> >> >> > My proposed solution to the problem is to allow users to set a
>> >> >> > threshold at which THPs will be handed out. The idea here is
>> >> >> > that, when a user faults in a page in an area where they would
>> >> >> > usually be handed a THP, we pull 512 pages off the free list, as
>> >> >> > we would with a regular THP, but we only fault in single pages
>> >> >> > from that chunk, until the user has faulted in enough pages to
>> >> >> > pass the threshold we've set. Once they pass the threshold, we
>> >> >> > do the necessary work to turn our 512 page chunk into a proper
>> >> >> > THP. As it stands now, if the user tries to fault in pages from
>> >> >> > different nodes, we completely give up on ever turning a
>> >> >> > particular chunk into a THP, and just fault in the 4K pages as
>> >> >> > they're requested. We may want to make this tunable in the
>> >> >> > future (i.e. allow them to fault in from only 2 different
>> >> >> > nodes).
>> >> >>
>> >> >> OK. But all 512 pages reside on the same node, yes? Whereas with
>> >> >> thp disabled those 512 pages would have resided closer to the
>> >> >> CPUs which instantiated them.
>> >> >
>> >> > As it stands right now, yes, since we're pulling a 512 page
>> >> > contiguous chunk off the free list, everything from that chunk
>> >> > will reside on the same node, but as I (stupidly) forgot to
>> >> > mention in my original e-mail, one piece I have yet to add is the
>> >> > functionality to put the remaining unfaulted pages from our chunk
>> >> > *back* on the free list after we give up on handing out a THP.
>> >> > Once this is in there, things will behave more like they do when
>> >> > THP is turned completely off, i.e. pages will get faulted in
>> >> > closer to the CPU that first referenced them once we give up on
>> >> > handing out the THP.
>> >>
>> >> This sounds like it's almost the worst possible behavior wrt
>> >> avoiding memory fragmentation. If userspace mmaps a very large
>> >> region and then starts accessing it randomly, it will allocate a
>> >> bunch of contiguous 512-page regions, claim one page from each, and
>> >> return the other 511 pages to the free list. Memory is now maximally
>> >> fragmented from the point of view of future THP allocations.
>> >
>> > Maybe I'm missing the point here to some degree, but the way I think
>> > about this is that if we trigger the behavior to return the pages to
>> > the free list, we don't *want* future THP allocations in that range
>> > of memory for the current process anyways. So, having the memory be
>> > fragmented from the point of view of future THP allocations isn't an
>> > issue.
>>
>> Except that you're causing a problem for the whole system because one
>> process is triggering the "hugepages aren't helpful" heuristic.
>
> I do see where you're coming from here. Do you have any good tests
> that can cause this type of memory fragmentation that I might be able
> to take a look at, to see how we might combat that issue in this case?
> It seems like something that could occur anyways, but my patch would
> create a situation where it could become a problem much more quickly.
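For readers following the thread, here is a rough sketch of the fault-path
bookkeeping Alex describes above, in kernel-flavored C pseudocode. It will
not compile as-is, and every identifier in it (thp_reserve,
release_unfaulted_pages(), collapse_chunk_to_thp(), and the rest) is
hypothetical, not taken from the actual patch:

/*
 * Illustrative pseudocode only -- all names below are hypothetical.
 * The shape of the idea: on the first fault in a THP-sized region,
 * reserve 512 contiguous pages, hand out single 4K pages from that
 * reservation, and collapse to a real THP once enough pages have been
 * faulted in from the same node.
 */
struct thp_reserve {
	struct page *chunk;	/* 512 contiguous pages off the free list */
	int nid;		/* node the chunk was allocated from */
	unsigned int faulted;	/* 4K pages handed out so far */
	bool abandoned;		/* gave up on collapsing to a THP */
};

static int thp_threshold_fault(struct thp_reserve *res, int fault_nid,
			       unsigned int threshold)
{
	if (!res->abandoned && fault_nid != res->nid) {
		/*
		 * Fault from a different node: give up on the THP and,
		 * per the follow-up plan above, put the unfaulted pages
		 * back on the free list so later faults can be satisfied
		 * node-locally.
		 */
		res->abandoned = true;
		release_unfaulted_pages(res);		/* hypothetical */
	}

	if (res->abandoned)
		return map_4k_page_node_local(fault_nid); /* hypothetical */

	map_4k_page_from_chunk(res, res->faulted++);	/* hypothetical */

	if (res->faulted >= threshold)
		collapse_chunk_to_thp(res);		/* hypothetical */
	return 0;
}

Note that the release_unfaulted_pages() step is exactly the behavior
Andy's fragmentation objection targets: each abandoned reservation puts
511 pages back, but scattered reservations leave no 512-page runs free.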
mmap lots of space (comparable to total system memory). Touch every
512th page. (This will consume ~0.2% of memory with your patches.)
Now run any workload that benefits from THP (without unmapping the
first thing). Make sure it still works well.

--Andy
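A minimal userspace sketch of the test Andy describes, assuming 4 KiB
base pages (so every 512th page means one touch per 2 MiB, THP-sized
stride); the 32 GiB default mapping size is an arbitrary stand-in for
"comparable to total system memory":

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* Mapping size: pass a byte count, or default to 32 GiB. */
	size_t len = (argc > 1) ? strtoull(argv[1], NULL, 0) : (32UL << 30);
	size_t page = (size_t)sysconf(_SC_PAGESIZE);

	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * Touch one page out of every 512.  Per Andy's note, with the
	 * proposed patches this should end up consuming only ~0.2% of
	 * memory, while leaving the free lists maximally fragmented
	 * from the point of view of future THP allocations.
	 */
	for (size_t off = 0; off < len; off += 512 * page)
		p[off] = 1;

	/* Keep the mapping alive while a THP-hungry workload runs. */
	printf("touched every 512th page of %zu bytes; sleeping\n", len);
	pause();
	return 0;
}

While this sits in pause(), run the THP-friendly workload and watch
AnonHugePages in /proc/meminfo and thp_fault_fallback in /proc/vmstat to
see whether huge page allocations still succeed.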