On 9/10/18 10:20 AM, Davidlohr Bueso wrote:
> On Mon, 10 Sep 2018, Waiman Long wrote:
>> On 09/08/2018 12:13 AM, John Hubbard wrote:
[...]
>>> It's also interesting that there are two main huge page systems (THP and
>>> Hugetlbfs), and I sometimes wonder the obvious thing to wonder: are these
>>> sufficiently different to warrant remaining separate, long-term? Yes, I
>>> realize they're quite different in some ways, but still, one wonders. :)
>>
>> One major difference between hugetlbfs and THP is that the former has to
>> be explicitly managed by the applications that use it, whereas the latter
>> is handled automatically, without the applications being aware that THP
>> is being used at all. Performance-wise, THP may or may not increase
>> application performance depending on the exact memory access pattern,
>> though the chance is usually higher that an application will benefit
>> than suffer from it.
>>
>> If an application knows what it is doing, using hugetlbfs can boost
>> performance more than can ever be achieved by THP. Many large enterprise
>> applications, like Oracle DB, use hugetlbfs and explicitly disable THP.
>> So unless THP can improve its performance to a level that is comparable
>> to hugetlbfs, I don't see the latter going away.
>
> Yep, there are a few non-trivial workloads out there that flat out
> discourage THP, e.g. redis, to avoid latency issues.

Yes, the need for guaranteed, available-now huge pages in some cases is
understood. That's not quite the same as saying that there have to be two
different subsystems, though. Nor does it necessarily imply that the pool
has to be reserved in exactly the same way hugetlbfs does it.

So I'm wondering if THP behavior can be made to mimic hugetlbfs closely
enough (perhaps via another option, in addition to "always", "never" and
"madvise") that we could just use THP in all cases. The "transparent"
part would then become a sliding scale that can go all the way down to
"opaque" (hugetlbfs behavior).

thanks,
-- 
John Hubbard
NVIDIA
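
For concreteness, the two usage models contrasted above look roughly like
this from userspace. This is a minimal illustrative sketch, assuming 2 MiB
huge pages and the standard Linux mmap()/madvise() interfaces; it is not
code from this thread:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

#define SZ (2UL * 1024 * 1024)	/* one 2 MiB huge page (assumed size) */

int main(void)
{
	/*
	 * hugetlbfs model: the application explicitly asks for huge pages.
	 * This typically fails unless huge pages have been reserved up front
	 * (vm.nr_hugepages), which is the "explicitly managed" property
	 * described above.
	 */
	void *hp = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (hp == MAP_FAILED)
		perror("mmap(MAP_HUGETLB)");

	/*
	 * THP model: an ordinary anonymous mapping, at most hinted with
	 * madvise(MADV_HUGEPAGE). Whether it actually gets backed by huge
	 * pages is up to the kernel and the THP policy
	 * (always/madvise/never).
	 */
	void *thp = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (thp != MAP_FAILED)
		madvise(thp, SZ, MADV_HUGEPAGE);

	return 0;
}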