On Tue, Feb 21, 2023 at 09:37:17AM -0500, Pasha Tatashin wrote:
> Hey Matthew,
>
> Thank you for looking into this.
>
> On Tue, Feb 21, 2023 at 8:46 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> >
> > On Mon, Feb 20, 2023 at 02:10:24PM -0500, Pasha Tatashin wrote:
> > > Within Google the vast majority of memory, over 90%, has a single
> > > owner. This is because most of the jobs are not multi-process but
> > > instead multi-threaded. Examples of single owner memory
> > > allocations are all tcmalloc()/malloc() allocations, and
> > > mmap(MAP_ANONYMOUS | MAP_PRIVATE) allocations without forks. On the
> > > other hand, the struct page metadata that is shared for all types of
> > > memory takes 1.6% of the system memory. It would be reasonable to find
> > > ways to optimize memory such that the common som case has a reduced
> > > amount of metadata.
> > >
> > > This would be similar to HugeTLB and DAX, which are treated as special
> > > cases and can release struct pages for the subpages back to the
> > > system.
> >
> > DAX can't, unless something's changed recently.  You're referring to
> > CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>
> DAX has a similar optimization:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v6.2&id=e3246d8f52173a798710314a42fea83223036fc8

Oh, devdax, not fsdax.

> > > The proposal is to discuss a new som driver that would use HugeTLB as
> > > a source of 2M chunks. When a user creates som memory, i.e.:
> > >
> > > mmap(MAP_ANONYMOUS | MAP_PRIVATE);
> > > madvise(mem, length, MADV_DONTFORK);
> > >
> > > a VMA from the som driver is used instead of a regular anon VMA.
> >
> > That's going to be "interesting".  The VMA is already created with
> > the call to mmap(), and madvise has not traditionally allowed drivers
> > to replace a VMA.  You might be better off creating a /dev/som and
> > hacking the malloc libraries to pass an fd from that instead of passing
> > MAP_ANONYMOUS.
>
> I do not plan to replace the VMA after madvise(); I showed the syscall
> sequence to demonstrate how Single Owner Memory can be enforced today.
> However, in the future we will either need to add another mmap() flag
> for single owner memory, if that proves to be important, or, as you
> suggested, use ioctl() through /dev/som.

Not ioctl().  Pass an fd from /dev/som to mmap() and have the som driver
set up the VMA.
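To make that concrete, here is a rough sketch of what the malloc
library side could look like with an fd-based interface.  This is
purely illustrative: no /dev/som driver exists today, so the device
name and its mmap() semantics below are assumptions, not an actual
ABI.

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* How single owner memory can be enforced today: anon + MADV_DONTFORK. */
void *som_alloc_today(size_t length)
{
	void *mem = mmap(NULL, length, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	if (mem != MAP_FAILED)
		madvise(mem, length, MADV_DONTFORK);
	return mem == MAP_FAILED ? NULL : mem;
}

/*
 * Sketch of the fd-based variant: the (hypothetical) som driver's
 * ->mmap() handler sets up the VMA at mmap() time, so there is no
 * need to retrofit the VMA with madvise() afterwards.
 */
void *som_alloc_fd(size_t length)
{
	int fd = open("/dev/som", O_RDWR);
	void *mem;

	if (fd < 0)
		return NULL;
	mem = mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	close(fd);	/* the mapping holds its own reference to the file */
	return mem == MAP_FAILED ? NULL : mem;
}

The second variant also tells the kernel at mmap() time that the memory
has a single owner, instead of relying on a later madvise() call.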
> > > The discussion should include the following topics:
> > > - Interaction with folio and the proposed struct page {memdesc}.
> > > - Handling for migrate_pages() and friends.
> > > - Handling for FOLL_PIN and FOLL_LONGTERM.
> > > - What type of madvise() properties the som memory should handle.
> >
> > Obviously once we get to dynamically allocated memdescs, this whole
> > thing goes away, so I'm not excited about making big changes to the
> > kernel to support this.
>
> This is why the changes that I am thinking about are going to be
> mostly localized in a separate driver and do not alter the core mm
> much. However, even with memdescs, today Single Owner Memory is not
> singled out from the rest of the memory types (shared, anon, named),
> so I do not expect that memdescs can provide savings or optimizations
> for this specific use case.

With memdescs, let's suppose the malloc library asks for a 256kB
allocation.  You end up using 8 bytes per page for the memdesc pointer
(512 bytes) plus around 96 bytes for the folio that's used by the anon
memory (assuming appropriate hinting / heuristics that say "Hey, treat
this as a single allocation").  So that's 608 bytes of overhead for a
256kB allocation, or 0.23% overhead.  That's about half the overhead
of 8kB per 2MB (plus whatever overhead the SOM driver has to track the
256kB of memory).

If 256kB isn't the right size to be doing this kind of analysis on, we
can rerun it on whatever size you want.  I'm not really familiar with
what userspace is doing these days.

> > The savings you'll see are 6 pages (24kB) per 2MB allocated (1.2%).
> > That's not nothing, but it's not huge either.
>
> This depends on the scale; in our fleet, 1.2% savings are huge.

Then 1.4% will be better, yes?  ;-)
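For reference, here is the arithmetic behind the percentages in this
thread, written out as a small sketch.  The 8-byte memdesc pointer,
~96-byte folio, 8kB-per-2MB and 24kB-per-2MB figures are the ones
quoted above; the 4kB base page and 64-byte struct page are assumed
only to reproduce the ~1.6% baseline that started the discussion.

#include <stdio.h>

int main(void)
{
	const double page = 4096;		/* assumed base page size */
	const double struct_page = 64;		/* assumed sizeof(struct page) */
	const double alloc = 256 * 1024;	/* 256kB allocation */
	const double two_mb = 2 * 1024 * 1024;
	/* 64 memdesc pointers at 8 bytes each, plus one ~96-byte folio */
	const double memdesc = (alloc / page) * 8 + 96;

	printf("struct page today:      %.2f%%\n", 100 * struct_page / page);
	printf("memdescs, 256kB alloc:  %.2f%%\n", 100 * memdesc / alloc);
	printf("vmemmap-optimized 2MB:  %.2f%%\n", 100 * 8192 / two_mb);
	printf("som savings per 2MB:    %.2f%%\n", 100 * 24576 / two_mb);
	return 0;
}

This prints roughly 1.56%, 0.23%, 0.39% and 1.17%, matching the 1.6%,
0.23%, "8kB per 2MB" and 1.2% numbers used above.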