> Is there a setting that will turn off the must-be-the-same-node
> behavior? There are workloads where TLB matters more than cross-node
> traffic (or where all the pages are hopelessly shared between nodes,
> but hugepages are still useful).

That's pretty much how THPs already behave in the kernel, so if you want to allow THPs to be handed out on one node but referenced from many others, you'd just set the threshold to 1 and let the existing code take over.

As for the must-be-the-same-node behavior: I'd actually say it's more like a "must have so much on one node" behavior, in that, if you set the threshold to 16, for example, 16 4K pages must be faulted in on the same node, in the same contiguous 2M chunk, before a THP will be created. What happens after that THP is created is out of our control; it could be referenced from anywhere.

The idea here is that we can tune things so that jobs that behave poorly with THP on will not be given THPs, but the jobs that like THPs can still get them. Granted, there are still issues with this approach, but I think it's a bit better than just handing out a THP because we touched one byte in a 2M chunk.

- Alex