Re: [LSF/MM/BPF TOPIC] Single Owner Memory

On Tue, Feb 21, 2023 at 10:05 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Tue, Feb 21, 2023 at 09:37:17AM -0500, Pasha Tatashin wrote:
> > Hey Matthew,
> >
> > Thank you for looking into this.
> >
> > On Tue, Feb 21, 2023 at 8:46 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > >
> > > On Mon, Feb 20, 2023 at 02:10:24PM -0500, Pasha Tatashin wrote:
> > > > Within Google the vast majority of memory, over 90%, has a single
> > > > owner. This is because most of the jobs are not multi-process but
> > > > instead multi-threaded. Examples of single owner memory
> > > > allocations are tcmalloc()/malloc() allocations, and
> > > > mmap(MAP_ANONYMOUS | MAP_PRIVATE) allocations without forks. On the
> > > > other hand, the struct page metadata that is shared across all types
> > > > of memory takes 1.6% of the system memory. It would be reasonable to
> > > > find ways to optimize memory such that the common som case has a
> > > > reduced amount of metadata.
> > > >
> > > > This would be similar to HugeTLB and DAX that are treated as special
> > > > cases, and can release struct pages for the subpages back to the
> > > > system.
> > >
> > > DAX can't, unless something's changed recently.  You're referring to
> > > CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
> >
> > DAX has a similar optimization:
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v6.2&id=e3246d8f52173a798710314a42fea83223036fc8
>
> Oh, devdax, not fsdax.
>
> > > > The proposal is to discuss a new som driver that would use HugeTLB as
> > > > a source of 2M chunks. When a user creates som memory, i.e.:
> > > >
> > > > mmap(MAP_ANONYMOUS | MAP_PRIVATE);
> > > > madvise(mem, length, MADV_DONTFORK);
> > > >
> > > > A vma from the som driver is used instead of regular anon vma.
> > >
> > > That's going to be "interesting".  The VMA is already created with
> > > the call to mmap(), and madvise has not traditionally allowed drivers
> > > to replace a VMA.  You might be better off creating a /dev/som and
> > > hacking the malloc libraries to pass an fd from that instead of passing
> > > MAP_ANONYMOUS.
> >
> > I do not plan to replace the VMA after madvise(); I showed that
> > syscall sequence to illustrate how Single Owner Memory can be
> > enforced today. However, in the future we will either need to add
> > another mmap() flag for single owner memory, if that proves to be
> > important, or, as you suggested, use ioctl() through /dev/som.
>
> Not ioctl().  Pass an fd from /dev/som to mmap and have the som driver
> set up the VMA.

Good point, using an fd is indeed better, and it can be made
accessible to more users without further changes.
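
For illustration, here is a minimal userspace sketch of what the
malloc side could look like, assuming a hypothetical /dev/som
character device whose mmap() handler supplies the VMA (none of this
is an existing interface):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2UL << 20;	/* one 2M chunk */
	void *mem;
	int fd;

	/* hypothetical device node provided by the som driver */
	fd = open("/dev/som", O_RDWR);
	if (fd < 0) {
		perror("open /dev/som");
		return 1;
	}

	/*
	 * A private mapping backed by the som driver instead of a
	 * regular anon VMA; no MADV_DONTFORK needed, since the
	 * driver itself can enforce single ownership.
	 */
	mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE,
		   fd, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* ... use mem ... */
	munmap(mem, len);
	close(fd);
	return 0;
}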

>
> > > > The discussion should include the following topics:
> > > > - Interaction with folios and the proposed struct page {memdesc}.
> > > > - Handling for migrate_pages() and friends.
> > > > - Handling for FOLL_PIN and FOLL_LONGTERM.
> > > > - What type of madvise() properties the som memory should handle.
> > >
> > > Obviously once we get to dynamically allocated memdescs, this whole
> > > thing goes away, so I'm not excited about making big changes to the
> > > kernel to support this.
> >
> > This is why the changes that I am thinking about are going to be
> > mostly localized in a separate driver and will not alter the core mm
> > much. However, even with memdescs, today the Single Owner Memory is
> > not singled out from the rest of the memory types (shared, anon,
> > named), so I do not expect that memdescs can provide savings or
> > optimizations for this specific use case.
>
> With memdescs, let's suppose the malloc library asks for a 256kB
> allocation.  You end up using 8 bytes per page for the memdesc pointer
> (512 bytes) plus around 96 bytes for the folio that's used by the anon
> memory (assuming appropriate hinting / heuristics that says "Hey, treat
> this as a single allocation").

Also, the 256kB would need to be physically contiguous, right?
Hopefully, fragmentation is not going to be an issue, but we might
need to look into strengthening the page migration policies in order
to reduce fragmentation during allocations, and thus reduce the
memory overhead. Today, fragmentation can reduce performance when
THPs are not available, but in the future with memdescs,
fragmentation might also affect the memory overhead.

>  So that's 608 bytes of overhead for a
> 256kB allocation, or 0.23% overhead.  About half the overhead of 8kB
> per 2MB (plus whatever overhead the SOM driver has to track the 256kB
> of memory).

I like the idea of memdescs, and would like to stay involved in the
project's development.  The potential memory savings are indeed
substantial.
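
Spelling out the arithmetic above for reference (8-byte memdesc
pointers, a ~96-byte anon folio, and today's 64-byte struct page):

  256kB = 64 pages
  memdescs:     64 * 8 bytes + ~96 bytes = ~608 bytes -> ~0.23%
  struct page:  64 * 64 bytes            = 4096 bytes -> ~1.56%
  vmemmap-optimized HugeTLB: 8kB per 2MB              -> ~0.39%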

> If 256kB isn't the right size to be doing this kind of analysis on, we
> can rerun it on whatever size you want.  I'm not really familiar with
> what userspace is doing these days.
>
> > > The savings you'll see are 6 pages (24kB) per 2MB allocated (1.2%).
> > > That's not nothing, but it's not huge either.
> >
> > This depends on the scale; in our fleet, 1.2% savings are huge.
>
> Then 1.4% will be better, yes?  ;-)

Absolutely, 1.4% is even better. I mean, 0% kernel memory overhead
would be just about perfect :-)

Let me give a few more reasons why /dev/som can be helpful:

1. Independent memory pool.
While /dev/som itself always manages memory in 2M chunks, it can be
configured to use memory from HugeTLB (2M or 1G), devdax, or
kernel-external memory (i.e. memory that is not part of System RAM).

2. Low overhead.
/dev/som will allocate memory from the pool in 1G chunks, and manage
it in 2M chunks. This allows low-overhead management via bitmaps. A
list/tree of 2M chunks is kept per user process, from which faults on
som VMAs are handled; a minimal sketch of the bitmap side is below.
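
As a rough kernel-style sketch of that bitmap management, tracking
the 512 2M chunks of one 1G region (the som_* names are hypothetical,
not existing code):

#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/sizes.h>
#include <linux/spinlock.h>

#define SOM_CHUNKS_PER_POOL	(SZ_1G / SZ_2M)	/* 512 chunks */

struct som_pool {
	unsigned long	base_pfn;	/* first PFN of the 1G region */
	spinlock_t	lock;		/* protects @used */
	/* one bit per 2M chunk: only 64 bytes of metadata per 1G */
	DECLARE_BITMAP(used, SOM_CHUNKS_PER_POOL);
};

/* Allocate one 2M chunk; returns its index in the pool or -ENOSPC. */
static int som_alloc_chunk(struct som_pool *pool)
{
	int idx;

	spin_lock(&pool->lock);
	idx = find_first_zero_bit(pool->used, SOM_CHUNKS_PER_POOL);
	if (idx < SOM_CHUNKS_PER_POOL)
		__set_bit(idx, pool->used);
	spin_unlock(&pool->lock);

	return idx < SOM_CHUNKS_PER_POOL ? idx : -ENOSPC;
}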

3. All pages are migratable.
Since som manages only user pages, all of its pages are required to
be migratable. In order to support FOLL_LONGTERM we will need to
decide whether to migrate a pinned page out of som so that it becomes
a normal page (i.e. Core-MM managed), or to add a separate pool of
long-term pinned pages. Even in today's kernel, when we FOLL_LONGTERM
a page it is migrated out of ZONE_MOVABLE.

4. 1G anonymous pages.
Since all pages are migratable, support for 1G anonymous pages can be
implemented. Unlike in Core-MM, where THPs do not have struct page
optimizations, som 4k, 2M, and 1G pages will all have reasonably low
metadata overhead from the beginning.

5. Performance benefit for running /dev/som in a virtual machine.
With extended page tables, the translation cost in terms of the
number of loads is not a simple sum of the native and extended page
table walks, but rather n*m + n + m, where n is the number of page
table levels in the guest and m is the number of levels in the
extended page table. This is because the guest page table levels
themselves must be translated into host physical addresses using the
extended page tables.
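
For example, with four-level tables on both sides (n = m = 4), a
worst-case walk costs 4*4 + 4 + 4 = 24 loads, versus 4 loads for a
native walk; reducing n with larger guest pages shrinks the dominant
n*m term proportionally.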

Since /dev/som allows for 1G anonymous pages, we can use guest
physical memory as virtual memory: i.e. only a subset of a 1G page in
the guest is actually backed by physical pages on the host, yet
accesses to that subset are substantially faster due to fewer page
table loads and a lower TLB miss rate. I am proposing a separate talk
about this and other VM optimizations:
https://lore.kernel.org/linux-mm/CA+CK2bDr5Xii021JBXeyCEY4jjWCsZQ=ENa-s8MLkBv5hYUvsA@xxxxxxxxxxxxxx/

6. Security.
There is a reduced risk of falsely shared pages because they are
enforced to be single owner pages. This can help avoid some of the
refcount bugs we have seen in the past, for which I wrote
page_table_check, which has since caught a few false sharing issues.