> On May 14, 2018, at 9:19 AM, Christopher Lameter <cl@xxxxxxxxx> wrote:
>
> Cool. This could be controlled by the faultaround logic right? If we get
> fault_around_bytes up to huge page size then it is reasonable to use a
> huge page directly.

It isn't presently but certainly could be; for the prototype it tries to
map a large page when needed and, should that fail, it falls through to
the normal fault-around code.

I would think we would want a separate parameter, as I can see the
usefulness of more fine-grained control. Many users may want to try
mapping a large page if possible, but would prefer a smaller number of
bytes to be read in fault-around should we need to fall back to using
PAGESIZE pages.

> fault_around_bytes can be set via sysfs so there is a natural way to
> control this feature there I think.

I agree; perhaps I could use "fault_around_thp_bytes" or something
similar.

>> Since this approach will map a PMD size block of the memory map at a
>> time, we should see a slight uptick in time spent in disk I/O but a
>> substantial drop in page faults as well as a reduction in iTLB misses
>> as address ranges will be mapped with the larger page. Analysis of a
>> test program that consists of a very large text area (483,138,032
>> bytes in size) that thrashes D$ and I$ shows this does occur and there
>> is a slight reduction in program execution time.
>
> I think we would also want such a feature for regular writable pages as
> soon as possible.

That is my ultimate long-term goal for this project - full r/w support of
large THP pages; prototyping with read-only text pages seemed like the
best first step to get a sense of the possible benefits.

-- Bill
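
[Editor's illustration] A rough sketch of the fallback flow described in the
reply above. The helper names (thp_text_fault(), map_pmd_sized_text_page(),
do_normal_fault_around()) are hypothetical placeholders for the prototype's
fault path, not existing kernel interfaces, and this is not the actual patch:

    #include <linux/mm.h>

    /*
     * Sketch only: try a PMD-sized mapping first, and if that fails for
     * any reason fall through to the usual PAGESIZE fault-around path.
     */
    static int thp_text_fault(struct vm_fault *vmf)
    {
            /*
             * First try to read in and map one PMD-sized page (e.g. 2MB on
             * x86-64) covering the faulting address.
             */
            if (map_pmd_sized_text_page(vmf) == 0)  /* hypothetical helper */
                    return 0;                       /* mapped with a large page */

            /*
             * No large page available, or the file offset/VMA is not suitably
             * aligned: fall back to the existing fault-around code using
             * PAGESIZE pages.
             */
            return do_normal_fault_around(vmf);     /* hypothetical wrapper */
    }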
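
[Editor's illustration] A minimal sketch of what a separate
"fault_around_thp_bytes" control might look like, assuming it were exposed
the same way the existing fault_around_bytes knob is (a debugfs file under
/sys/kernel/debug). The variable and file name are only the suggestion from
the discussion above, not anything that exists in the kernel today:

    #include <linux/debugfs.h>
    #include <linux/huge_mm.h>
    #include <linux/init.h>

    /* Hypothetical tunable; defaults to one PMD-sized page. */
    static unsigned long fault_around_thp_bytes __read_mostly = HPAGE_PMD_SIZE;

    static int __init fault_around_thp_bytes_init(void)
    {
            /* Expose the knob next to the existing fault_around_bytes file. */
            debugfs_create_ulong("fault_around_thp_bytes", 0644, NULL,
                                 &fault_around_thp_bytes);
            return 0;
    }
    late_initcall(fault_around_thp_bytes_init);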