Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Swap Abstraction "the pony"

On Tue, May 28, 2024 at 8:57 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Tue, May 21, 2024 at 01:40:56PM -0700, Chris Li wrote:
> > > Filesystems already implemented a lot of solutions for fragmentation
> > > avoidance that are more appropriate for slow storage media.
> >
> > Swap and file systems have very different requirements and usage
> > patterns and IO patterns.
>
> Should they, though?  Filesystems noticed that handling pages in LRU
> order was inefficient and so they stopped doing that (see the removal
> of aops->writepage in favour of ->writepages, along with where each are
> called from).  Maybe it's time for swap to start doing writes in the order
> of virtual addresses within a VMA, instead of LRU order.

Well, swap has one fundamental difference from a file system: a dirty
file system page must eventually be written back to the file at least
once, otherwise the data is lost when the machine reboots.

For anonymous memory, a dirty page does not have to be written to
swap. Writeback is optional, so which page you choose to swap out is
critical: you want to swap out the coldest page, the one least likely
to be swapped back in. That is why the LRU makes sense.

With VMA-based swap out, the question is which VMA you choose from
first. To make things more complicated, the same page can be mapped
into different processes through more than one VMA as well.

> Indeed, if we're open to radical ideas, the LRU sucks.  A physical scan
> is 40x faster:
> https://lore.kernel.org/linux-mm/ZTc7SHQ4RbPkD3eZ@xxxxxxxxxxxxxxxxxxxx/

That simulation assumes the page struct already has the access
information. At the hardware level, the accessed bit lives in the
PTE. If you scan in physical page order, you need to use rmap to find
the PTEs that map each page and check their accessed bits. It is not
a simple pfn-order page walk; you have to scan the PTEs first and
then move the access information into the page struct.
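
To make that concrete, here is a minimal sketch (not code from any
posted patch) of what a pfn-order scan would have to do. It leans on
folio_referenced() from mm/rmap.c, which performs exactly this rmap
walk over every PTE mapping a folio:

static void scan_pfn_range(unsigned long pfn, unsigned long end_pfn)
{
	for (; pfn < end_pfn; pfn++) {
		struct page *page = pfn_to_online_page(pfn);
		struct folio *folio;
		unsigned long vm_flags;
		int referenced;

		if (!page)
			continue;
		folio = page_folio(page);
		if (!folio_test_lru(folio))
			continue;
		/*
		 * folio_referenced() walks the rmap to visit every
		 * PTE mapping this folio and test/clear its young
		 * bit -- the step a pure physical walk cannot skip.
		 */
		referenced = folio_referenced(folio, 0, NULL, &vm_flags);
		/* feed "referenced" into the aging decision here */
	}
}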

>
> > One challenging aspect is that the current swap back end has a very
> > low per swap entry memory overhead. It is about 1 byte (swap_map), 2
> > byte (swap cgroup), 8 byte(swap cache pointer). The inode struct is
> > more than 64 bytes per file. That is a big jump if you map a swap
> > entry to a file. If you map more than one swap entry to a file, then
> > you need to track the mapping of file offset to swap entry, and the
> > reverse lookup of swap entry to a file with offset. Whichever way you
> > cut it, it will significantly increase the per swap entry memory
> > overhead.
>
> Not necessarily, no.  If your workload uses a lot of order-2, order-4
> and order-9 folios, then the current scheme is using 11 bytes per page,
> so 44 bytes per order-2 folio, 176 per order-4 folio and 5632 per
> order-9 folio.  That's a lot of bytes we can use for an extent-based
> scheme.

Yes, if we allow dynamic allocation of swap entries (the 24-byte
option), then the sub-entries inside a compound swap entry can share
the same pointer to the compound swap struct.
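
Purely illustrative (the struct and field names are hypothetical, not
from any posted patch), something like:

/* ~24B descriptor, shared by all sub-entries of one large folio */
struct compound_swap {
	swp_entry_t first;	/* first backing slot of the extent */
	unsigned int order;	/* folio order: covers 1 << order entries */
	atomic_t refcount;	/* dropped as sub-entries are freed */
};

Each sub-entry then only needs a pointer to the shared descriptor
instead of its own independent per-entry state.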

>
> Also, why would you compare the size of an inode to the size of an
> inode?  inode is ~equivalent to an anon_vma, not to a swap entry.

I am not assigning an inode to one swap entry. That is covered in my
description of "if you map more than one swap entry to a file". If
you treat each anon_vma as an inode, you still need a way to encode
an inode and file offset into a swap entry. In your anon_vma-as-inode
world, how do you deal with two different VMAs containing the same
page? Once we have more detail of the swap entry mapping scheme, we
can analyse the pros and cons.
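
For reference, the current scheme packs a (type, offset) pair into a
single word (include/linux/swapops.h); an inode-plus-offset scheme
would need a comparable forward encoding, plus the reverse lookup
from swap entry back to file and offset:

static inline swp_entry_t swp_entry(unsigned long type, pgoff_t offset)
{
	swp_entry_t ret;

	ret.val = (type << SWP_TYPE_SHIFT) | (offset & SWP_OFFSET_MASK);
	return ret;
}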

Chris




