On Fri, Jan 31, 2014 at 09:36:46AM -0800, Davidlohr Bueso wrote:
> From: Davidlohr Bueso <davidlohr@xxxxxx>
>
> The kernel can currently only handle a single hugetlb page fault at a
> time. This is due to a single mutex that serializes the entire path.
> This lock protects from spurious OOM errors under conditions of low
> availability of free hugepages. This problem is specific to hugepages,
> because it is normal to want to use every single hugepage in the
> system - with normal pages we simply assume there will always be a few
> spare pages which can be used temporarily until the race is resolved.
>
> Address this problem by using a table of mutexes, allowing a better
> chance of parallelization, where each hugepage is individually
> serialized. The hash key is selected depending on the mapping type.
> For shared mappings it consists of the address space and the file
> offset being faulted, while for private ones the mm and the virtual
> address are used. The size of the table is selected as a compromise
> between collisions and the memory footprint, based on a series of
> database workloads.

Hello,

Thanks for doing this patchset. :)

Just one question: why do we need a separate hash key depending on the
mapping type?

Thanks.
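For reference, here is a minimal userspace sketch of how I read the key
selection described above: hash (address space, file offset) for shared
mappings and (mm, virtual address) for private ones, then index into a
fixed-size mutex table. The names, the table size and the mixing
function below are my own illustrative guesses, not what the patch
actually uses.

    /* Toy model of a per-mapping fault mutex hash, for discussion only. */
    #include <stdint.h>
    #include <stdio.h>

    #define FAULT_MUTEX_TABLE_SIZE 256	/* made-up power-of-two size */

    static unsigned long hash_two(unsigned long a, unsigned long b)
    {
    	/* Simple mixer standing in for whatever hash the patch uses. */
    	unsigned long h = a * 0x9e3779b97f4a7c15UL;

    	h ^= b + 0x9e3779b97f4a7c15UL + (h << 6) + (h >> 2);
    	return h;
    }

    /* Shared mapping: key on (address space, file offset), as described. */
    static unsigned int shared_slot(const void *mapping, unsigned long pgoff)
    {
    	return hash_two((unsigned long)mapping, pgoff) &
    	       (FAULT_MUTEX_TABLE_SIZE - 1);
    }

    /* Private mapping: key on (mm, faulting virtual address), as described. */
    static unsigned int private_slot(const void *mm, unsigned long address)
    {
    	return hash_two((unsigned long)mm, address) &
    	       (FAULT_MUTEX_TABLE_SIZE - 1);
    }

    int main(void)
    {
    	int file, mm;	/* stand-ins for address_space / mm_struct */

    	printf("shared  slot: %u\n", shared_slot(&file, 42));
    	printf("private slot: %u\n", private_slot(&mm, 0x7f0000000000UL));
    	return 0;
    }

The slot index would then pick which mutex in the table serializes that
particular fault, so unrelated hugepages no longer contend on a single
global lock.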