Re: [LSF/MM/BPF TOPIC] Large folios, swap and fscache

Hi Matthew,

On Fri, Feb 2, 2024 at 6:29 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Fri, Feb 02, 2024 at 09:09:49AM +0000, David Howells wrote:
> > The topic came up in a recent discussion about how to deal with large folios
> > when it comes to swap as a swap device is normally considered a simple array
> > of PAGE_SIZE-sized elements that can be indexed by a single integer.
> >
> > With the advent of large folios, however, we might need to change this in
> > order to be better able to swap out a compound page efficiently.  Swap
> > fragmentation raises its head, as does the need to potentially save multiple
> > indices per folio.  Does swap need to grow more filesystem features?
>
> I didn't mention this during the meeting, but there are more reasons
> to do something like this.  For example, even with large folios, it
> doesn't make sense to drive writing to swap on a per-folio basis.  We
> should be writing out large chunks of virtual address space in a single
> write to the swap device, just like we do large chunks of files in
> ->writepages.

I have thought about your proposal after the THP meeting. One
observation is that swap writes and swap reads have an asymmetry.
For a swap read, you always know which VMA you are reading into.
The swap write path, however, is driven by the LRU list
(shrink_folio_list) and does not have the VMA information in hand.
In fact, the same folio might be mapped by two different processes,
so finding the VMAs requires an rmap walk. Organizing the swap
write around VMA mappings is therefore not convenient for the LRU
reclaim write-back case.
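
To make the asymmetry concrete, here is a rough sketch (mine, not
actual reclaim code) of the extra work the write side would have to
do: shrink_folio_list() only sees bare folios taken off the LRU, so
recovering the mapping VMA(s) means an rmap walk per folio.  The
callback shape follows struct rmap_walk_control in <linux/rmap.h>;
the helper names here are made up for illustration.

#include <linux/mm.h>
#include <linux/rmap.h>

/* Hypothetical helper: note every VMA that maps this folio. */
static bool note_mapping_vma(struct folio *folio, struct vm_area_struct *vma,
			     unsigned long addr, void *arg)
{
	/* A shared folio can be visited once per mapping VMA. */
	pr_debug("folio %p mapped at %#lx in vma %p\n", folio, addr, vma);
	return true;	/* keep walking, there may be more VMAs */
}

/* Hypothetical helper: the per-folio cost a VMA-oriented writeout pays. */
static void find_vmas_for_reclaim(struct folio *folio)
{
	struct rmap_walk_control rwc = {
		.rmap_one = note_mapping_vma,
	};

	rmap_walk(folio, &rwc);
}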

Chris


> Another reason to do something different is that we're starting to see
> block devices with bs>PS.  That means we'll _have_ to write out larger
> chunks than a single page.  For reads, we can discard the extra data,
> but it'd be better to swap back in the entire block rather than
> individual pages.
>
> So my modest proposal is that we completely rearchitect how we handle
> swap.  Instead of putting swp entries in the page tables (and in shmem's
> case in the page cache), we turn swap into an (object, offset) lookup
> (just like a filesystem).  That means that each anon_vma becomes its
> own swap object and each shmem inode becomes its own swap object.
> The swap system can then borrow techniques from whichever filesystem
> it likes to do (object, offset, length) -> n x (device, block) mappings.
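
The (object, offset, length) -> n x (device, block) mapping above
might look something like the following. This is purely a
hypothetical sketch to make the shape concrete; none of these
structures or names exist in the kernel today.

#include <linux/types.h>
#include <linux/xarray.h>

/* One contiguous run of the object on the swap device. */
struct swap_extent_map {
	pgoff_t		object_offset;	/* offset within the swap object, in pages */
	sector_t	first_block;	/* starting block on the swap device */
	unsigned int	nr_pages;	/* length of the run */
};

/* One swap object per anon_vma or per shmem inode. */
struct swap_object {
	u64		id;		/* persistent name, to survive a power cycle */
	struct xarray	extents;	/* object_offset -> struct swap_extent_map */
};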
>
> > Further to this, we have at least two ways to cache data on disk/flash/etc. -
> > swap and fscache - and both want to set aside disk space for their operation.
> > Might it be possible to combine the two?
> >
> > One thing I want to look at for fscache is the possibility of switching from a
> > file-per-object-based approach to a tagged cache more akin to the way OpenAFS
> > does things.  In OpenAFS, you have a whole bunch of small files, each
> > containing a single block (e.g. 256K) of data, and an index that maps a
> > particular {volume,file,version,block} to one of these files in the cache.
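
The {volume,file,version,block} index described above boils down to
something like this (illustrative only, not OpenAFS's actual on-disk
format):

#include <linux/types.h>

/* Key for the tagged cache index: one entry per cached block. */
struct cache_block_key {
	u32 volume;
	u32 file;
	u32 version;
	u32 block;	/* which (e.g. 256K) block of the file */
};
/* The index maps a cache_block_key to the small file holding that block. */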
>
> I think my proposal above works for you?  For each file you want to cache,
> create a swap object, and then tell swap when you want to read/write to
> the local swap object.  What you do need is to persist the objects over
> a power cycle.  That shouldn't be too hard ... after all, filesystems
> manage to do it.  All we need to do is figure out how to name the
> lookup (I don't think we need to use strings to name the swap object,
> but obviously we could).  Maybe it's just a stream of bytes.
>




