Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Swap Abstraction "the pony"

On Tue, May 28, 2024 at 11:50:47PM -0700, Chris Li wrote:
> On Tue, May 28, 2024 at 8:57 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> >
> > On Tue, May 21, 2024 at 01:40:56PM -0700, Chris Li wrote:
> > > > Filesystems have already implemented a lot of solutions for fragmentation
> > > > avoidance that are more appropriate for slow storage media.
> > >
> > > Swap and file systems have very different requirements, usage
> > > patterns, and I/O patterns.
> >
> > Should they, though?  Filesystems noticed that handling pages in LRU
> > order was inefficient and so they stopped doing that (see the removal
> > of aops->writepage in favour of ->writepages, along with where each is
> > called from).  Maybe it's time for swap to start doing writes in the order
> > of virtual addresses within a VMA, instead of LRU order.
> 
> Well, swap has one fundamental difference from a file system:
> a dirty file system cache eventually needs to be written to its file
> backing at least once, otherwise the data is lost when the machine reboots.

Yes, that's why we write back data from the page cache every 30 seconds
or so.  It's still important not to write back too early, otherwise
you need to write the same block multiple times.  The differences aren't
as stark as you're implying.
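
For reference, the interval I'm hand-waving about comes from the writeback
tunables.  Paraphrasing mm/page-writeback.c from memory (so treat the exact
spelling and values as an approximation, not a quote):

	/* Dirty data becomes eligible for writeback after ~30 seconds,
	 * and the flusher threads wake every ~5 seconds to look for it.
	 * Tunable via vm.dirty_expire_centisecs / vm.dirty_writeback_centisecs.
	 */
	unsigned int dirty_writeback_interval = 5 * 100;	/* centiseconds */
	unsigned int dirty_expire_interval = 30 * 100;		/* centiseconds */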

> Whereas for anonymous memory, a dirty page does not have to be written
> to swap. It is optional, so which page you choose to swap out is
> critical: you want to swap out the coldest page, the page that is
> least likely to be swapped back in. Therefore, the LRU makes sense.

Disagree.  There are two things you want and the LRU serves neither
particularly well.  One is that when you want to reclaim memory, you
want to find some memory that is likely to not be accessed in the next
few seconds/minutes/hours.  It doesn't need to be the coldest, just in
(say) the coldest 10% or so of memory.  And it needs to already be clean,
otherwise you have to wait for it to be written back, and you can't afford that.

The second thing you need to be able to do is find pages which are
already dirty, and not likely to be written to soon, and write those
back so they join the pool of clean pages which are eligible for reclaim.
Again, the LRU isn't really the best tool for the job.
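
To make that concrete, here's the rough shape I have in mind; the helpers
(pick_cold_clean_folio() and friends) are made up for illustration, nothing
like this exists today:

	/*
	 * Sketch only: two separate passes instead of one LRU walk.
	 * Pass 1 frees folios that are cold *and* already clean.
	 * Pass 2 starts writeback on cold dirty folios so that they
	 * become eligible for pass 1 next time around.
	 */
	static unsigned long reclaim_some_memory(unsigned long nr_to_reclaim)
	{
		unsigned long reclaimed = 0;
		struct folio *folio;

		/* Pass 1: anything in (say) the coldest ~10% that is clean. */
		while (reclaimed < nr_to_reclaim &&
		       (folio = pick_cold_clean_folio()) != NULL)
			reclaimed += release_folio_pages(folio);

		/* Pass 2: queue cold dirty folios for writeback; don't wait. */
		while ((folio = pick_cold_dirty_folio()) != NULL)
			start_async_writeback(folio);

		return reclaimed;
	}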

> In VMA swap-out, the question is: which VMA do you choose from first? To
> make things more complicated, the same page can be mapped into different
> processes through more than one VMA as well.

This is why we have the anon_vma, to handle the same pages mapped from
multiple VMAs.
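
Roughly, the rmap already gives us folio -> every VMA that might map it.
A stripped-down sketch of what rmap_walk_anon() in mm/rmap.c does (ignoring
locking, the PAGE_MAPPING_ANON tag bit, forked children and mremap; the
helper name and callback shape here are simplifications, not the real
signatures):

	static void for_each_vma_mapping_folio(struct folio *folio,
					       void (*fn)(struct vm_area_struct *))
	{
		struct anon_vma *anon_vma = folio_anon_vma(folio);
		struct anon_vma_chain *avc;
		pgoff_t first = folio->index;
		pgoff_t last = first + folio_nr_pages(folio) - 1;

		/* Every VMA that might map this folio hangs off the anon_vma's
		 * interval tree, keyed by page offset. */
		anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root, first, last)
			fn(avc->vma);
	}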

> > Indeed, if we're open to radical ideas, the LRU sucks.  A physical scan
> > is 40x faster:
> > https://lore.kernel.org/linux-mm/ZTc7SHQ4RbPkD3eZ@xxxxxxxxxxxxxxxxxxxx/
> 
> That simulation assumes the page struct already has the access information.
> At the physical CPU level, the access bit lives in the PTE. If you scan
> in physical page order, you need to use rmap to find the PTE to
> check the access bit. It is not a simple pfn-order page walk. You need
> to scan the PTEs first and then move the access information into the
> page struct.

We already maintain the dirty bit on the folio when we take a write-fault
for file memory.  If we do that for anon memory as well, we don't need
to do an rmap walk at scan time.
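
In other words, something like the sketch below becomes possible: a linear
pfn walk that only looks at per-folio state, with no rmap walk in the hot
loop.  scan_one_folio() is a made-up name; the real numbers are in the link
above:

	static void scan_physical_memory(unsigned long start_pfn,
					 unsigned long end_pfn)
	{
		unsigned long pfn;

		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
			struct page *page = pfn_to_online_page(pfn);
			struct folio *folio;

			if (!page)
				continue;
			folio = page_folio(page);
			if (!folio_test_anon(folio) || !folio_test_lru(folio))
				continue;

			/* Referenced/dirty state was pushed into the folio at
			 * fault/unmap time, so no page table walk is needed. */
			scan_one_folio(folio, folio_test_referenced(folio),
				       folio_test_dirty(folio));
		}
	}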

> > > One challenging aspect is that the current swap back end has a very
> > > low per-swap-entry memory overhead. It is about 1 byte (swap_map), 2
> > > bytes (swap cgroup), and 8 bytes (swap cache pointer). The inode struct
> > > is more than 64 bytes per file. That is a big jump if you map a swap
> > > entry to a file. If you map more than one swap entry to a file, then
> > > you need to track the mapping of file offset to swap entry, and the
> > > reverse lookup of swap entry to a file with offset. Whichever way you
> > > cut it, it will significantly increase the per-swap-entry memory
> > > overhead.
> >
> > Not necessarily, no.  If your workload uses a lot of order-2, order-4
> > and order-9 folios, then the current scheme is using 11 bytes per page,
> > so 44 bytes per order-2 folio, 176 per order-4 folio and 5632 per
> > order-9 folio.  That's a lot of bytes we can use for an extent-based
> > scheme.
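
To spell out the arithmetic behind my numbers above (per-page costs as
quoted earlier in the thread):

	/* 1 (swap_map) + 2 (swap cgroup) + 8 (swap cache ptr) = 11 bytes/page */
	/* order-2 folio:   4 pages * 11 =   44 bytes */
	/* order-4 folio:  16 pages * 11 =  176 bytes */
	/* order-9 folio: 512 pages * 11 = 5632 bytes */
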
> 
> Yes, if we allow dynamic allocation of swap entries (the 24B option),
> then sub-entries inside the compound swap entry structure can share
> the same compound swap struct pointer.
> 
> >
> > Also, why would you compare the size of an inode to the size of a
> > swap entry?  inode is ~equivalent to an anon_vma, not to a swap entry.
> 
> I am not assigning an inode to one swap entry. That is covered in my
> description of "if you map more than one swap entry to a file". If you
> want to treat each anon_vma as an inode, you need to have a way to map
> the inode and file offset into the swap entry encoding. In your
> anon_vma-as-inode world, how do you deal with two different VMAs
> containing the same page? Once we have more detail on the swap entry
> mapping scheme, we can analyse the pros and cons.

Are you confused between an anon_vma and an anon vma?  The naming in
this area is terrible.  Maybe we should call it an mnode instead of an
anon_vma.  The parallel with inode would be more obvious ...
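
For anyone following along, the distinction in current terms (field names
from vm_area_struct; "mnode" is of course just a hypothetical spelling):

	/* An "anon vma" is simply a VMA with no file backing: */
	struct vm_area_struct *vma;	/* vma->vm_file == NULL, vma->anon_vma != NULL */

	/* An "anon_vma" (the thing I'd rather call an mnode) is the shared
	 * rmap object those VMAs point at, playing roughly the role an
	 * inode plays for file-backed memory: */
	struct anon_vma *av = vma->anon_vma;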
