On 05.01.21 10:20, Michal Hocko wrote:
> On Mon 04-01-21 15:00:31, Dave Hansen wrote:
>> On 1/4/21 12:11 PM, David Hildenbrand wrote:
>>>> Yeah, it certainly can't be the default, but it *is* useful for
>>>> things where we know that there are no cache benefits to zeroing
>>>> close to where the memory is allocated.
>>>>
>>>> The trick is opting into it somehow, either in a process or a VMA.
>>>>
>>> The patch set is mostly trying to optimize starting a new process, so
>>> process/VMA doesn't really work.
>>
>> Let's say you have a system-wide tunable that says: pre-zero pages and
>> keep 10GB of them around. Then, you opt a process in to being allowed
>> to dip into that pool with a process-wide flag or an madvise() call.
>> You could even have the flag be inherited across execve() if you wanted
>> helper apps to be able to set the policy and access the pool, like
>> how numactl works.
>
> While possible, it sounds quite heavyweight to me. The page allocator
> would have to somehow maintain those pre-zeroed pages. The pool would
> also become a very scarce resource very soon, because everybody just
> wants to run faster. So this would open many more interesting questions.

Agreed.

>
> A global knob with all or nothing sounds like an easier to use and
> maintain solution to me.

That brings me back to my original suggestion: just use hugetlbfs and
implement some sort of pre-zeroing there (a worker thread, or whatever).

Most vfio users should already be better off using hugepages. It's a
"pool of pages" already, and only selected users use it. I really don't
see a need to extend the buddy allocator with something like that.

--
Thanks,

David / dhildenb
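
For anyone unfamiliar with the hugepage path referred to above, here is a
minimal userspace sketch of drawing from that existing "pool of pages".
It assumes 2 MiB huge pages have already been preallocated (e.g. via
vm.nr_hugepages) and only shows the consumer side; the pre-zeroing worker
under discussion is not shown, and the size constant is made up for the
example.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define POOL_LEN	(64UL << 21)	/* 64 x 2 MiB huge pages, arbitrary */

int main(void)
{
	/*
	 * Back the memory with huge pages from the preallocated hugetlb
	 * pool.  Pages handed out on fault are zeroed by the kernel; a
	 * pre-zeroing worker would only move that cost out of the fault
	 * path, it would not change this interface.
	 */
	void *mem = mmap(NULL, POOL_LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return EXIT_FAILURE;
	}

	/* ... hand the region to VFIO / the VM as usual ... */

	munmap(mem, POOL_LEN);
	return 0;
}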