On Fri, 2013-12-20 at 14:01 +0000, Mel Gorman wrote:
> On Thu, Dec 19, 2013 at 05:02:02PM -0800, Andrew Morton wrote:
> > On Wed, 18 Dec 2013 15:53:59 +0900 Joonsoo Kim <iamjoonsoo.kim@xxxxxxx> wrote:
> >
> > > If parallel faults occur, we can fail to allocate a hugepage because
> > > many threads dequeue a hugepage to handle a fault on the same address.
> > > This makes the reserved pool run short for a little while and causes a
> > > faulting thread that is entitled to a hugepage to get a SIGBUS signal.
> > >
> > > To solve this problem we already have a nice solution, namely the
> > > hugetlb_instantiation_mutex. It blocks other threads from diving into
> > > the fault handler. This solves the problem cleanly, but it introduces
> > > performance degradation because it serializes all fault handling.
> > >
> > > Now I try to remove the hugetlb_instantiation_mutex to get rid of that
> > > performance degradation.
> >
> > So the whole point of the patch is to improve performance, but the
> > changelog doesn't include any performance measurements!
> >
>
> I don't really deal with hugetlbfs any more and I have not examined this
> series, but I remember why I never really cared about this mutex. It wrecks
> fault scalability, but AFAIK fault scalability almost never mattered for
> workloads using hugetlbfs. The most common user of hugetlbfs by far is
> sysv shared memory. The memory is faulted early in the lifetime of the
> workload and after that it does not matter. At worst it hurts application
> startup time, but that is still poor motivation for putting a lot of work
> into removing the mutex.

Yep, important hugepage workloads pound heavily on this lock early on, and
contention then naturally decreases.

> Microbenchmarks will be able to trigger problems in this area but it'd
> be important to check if any workload that matters is actually hitting
> that problem.

I was thinking of writing one to actually get some numbers for this
patchset -- I don't know of any existing benchmark that stresses this lock.

However, I first measured the number of cycles it costs to start an Oracle
DB, and things went south with these changes. A simple 'startup immediate'
calls hugetlb_fault() ~5000 times. On a vanilla kernel this costs ~7.5
billion cycles, and with this patchset it goes up to ~27.1 billion. While
there is naturally a fair amount of variation, these changes do seem to do
more harm than good, at least in real-world scenarios.

Thanks,
Davidlohr
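
As a side note, the race and the serialization trade-off described above can
be modelled in userspace. The sketch below is only an illustration, not the
kernel code: names such as reserve_pool, fault_on_address and RESERVED_PAGES
are hypothetical, and plain pthreads stand in for the kernel locks. It shows
why holding a single instantiation mutex across the fault path keeps
concurrent faults on the same address from each draining a page out of a
small reserved pool.

/*
 * Userspace model of the hugetlb reserve-pool race (illustrative only).
 * Build with: gcc -pthread -o model model.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define RESERVED_PAGES 1	/* one page reserved for this mapping */

static int reserve_pool = RESERVED_PAGES;
static bool page_instantiated;	/* stands in for the page table entry */
static pthread_mutex_t instantiation_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Take one page from the reserve pool, if any are left. */
static int dequeue_huge_page(void)
{
	int got = 0;

	pthread_mutex_lock(&pool_lock);
	if (reserve_pool > 0) {
		reserve_pool--;
		got = 1;
	}
	pthread_mutex_unlock(&pool_lock);
	return got;
}

/*
 * Fault handler model. With the big mutex held, a second thread
 * re-checks the "page table", finds the page already present and never
 * touches the reserve pool. Drop the instantiation_mutex lock/unlock
 * and several threads can each see !page_instantiated, each dequeue a
 * page, and the pool runs short -- the spurious-SIGBUS case above.
 */
static void *fault_on_address(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&instantiation_mutex);
	if (!page_instantiated) {
		if (dequeue_huge_page())
			page_instantiated = true;
		else
			fprintf(stderr, "reserve pool empty -> would raise SIGBUS\n");
	}
	pthread_mutex_unlock(&instantiation_mutex);
	return NULL;
}

int main(void)
{
	pthread_t threads[4];
	int i;

	for (i = 0; i < 4; i++)
		pthread_create(&threads[i], NULL, fault_on_address, NULL);
	for (i = 0; i < 4; i++)
		pthread_join(threads[i], NULL);

	/* Prints 0 with the mutex in place; the pool never goes short. */
	printf("pages left in reserve pool: %d\n", reserve_pool);
	return 0;
}

Of course, this model also shows exactly why the mutex hurts: every fault,
on any address, funnels through that one lock, which is the serialization
Joonsoo's series is trying to get rid of.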