Re: [LSF/MM/BPF TOPIC] Swap Abstraction / Native Zswap

Hi Yosry,

On Tue, Feb 28, 2023 at 12:12:05AM -0800, Yosry Ahmed wrote:
> On Mon, Feb 27, 2023 at 8:54 PM Sergey Senozhatsky
> <senozhatsky@xxxxxxxxxxxx> wrote:
> >
> > On (23/02/18 14:38), Yosry Ahmed wrote:
> > [..]
> > > ==================== Idea ====================
> > > Introduce a data structure, which I currently call a swap_desc, as an
> > > abstraction layer between swapping implementation and the rest of MM
> > > code. Page tables & page caches would store a swap id (encoded as a
> > > swp_entry_t) instead of directly storing the swap entry associated
> > > with the swapfile. This swap id maps to a struct swap_desc, which acts
> > > as our abstraction layer. All MM code not concerned with swapping
> > > details would operate in terms of swap descs. The swap_desc can point
> > > to either a normal swap entry (associated with a swapfile) or a zswap
> > > entry. It can also include all non-backend specific operations, such
> > > as the swapcache (which would be a simple pointer in swap_desc), swap
> > > counting, etc. It creates a clear, nice abstraction layer between MM
> > > code and the actual swapping implementation.
> > >
> > > ==================== Benefits ====================
> > > This work enables using zswap without a backing swapfile and increases
> > > the swap capacity when zswap is used with a swapfile. It also creates
> > > a separation that allows us to skip code paths that don't make sense
> > > in the zswap path (e.g. readahead). We get to drop zswap's rbtree
> > > which might result in better performance (fewer lookups, less lock
> > > contention).
> > >
> > > The abstraction layer also opens the door for multiple cleanups (e.g.
> > > removing swapper address spaces, removing swap count continuation
> > > code, etc). Another nice cleanup that this work enables would be
> > > separating the overloaded swp_entry_t into two distinct types: one for
> > > things that are stored in page tables / caches, and one for actual swap
> > > entries. In the future, we can potentially further optimize how we use
> > > the bits in the page tables instead of sticking everything into the
> > > current type/offset format.
> > >
> > > Another potential win here can be swapoff, which can be more practical
> > > by directly scanning all swap_desc's instead of going through page
> > > tables and shmem page caches.
> > >
> > > Overall zswap becomes more accessible and available to a wider range
> > > of use cases.
> >
> > I assume this also brings us closer to proper writeback LRU handling?
> 
> I assume by proper LRU handling you mean:
> - Swap writeback LRU that lives outside of the zpool backends (i.e in
> zswap itself or even outside zswap).

Even outside zswap, so that writeback works for any combination of
heterogeneous swap devices.

The indirection layer would be essential to support this, but it would
also be great if we didn't waste any memory on users who don't want
the feature.
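To illustrate the point about an LRU above the backends: the sketch
below (purely hypothetical, not actual kernel code; all names like
swap_lru and the minimal swap_desc here are my own placeholders) keeps
one writeback LRU shared by all swap backends, so the coldest entry can
be picked regardless of which device currently holds it.

```c
/* Hypothetical sketch: a writeback LRU kept above the swap backends.
 * All names are illustrative, not from the actual kernel. */
#include <assert.h>
#include <stddef.h>

struct list_head {
	struct list_head *prev, *next;
};

static void list_init(struct list_head *h)
{
	h->prev = h->next = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->prev = n->next = n;
}

/* One LRU shared by every backend; coldest entry sits at the head. */
struct swap_lru {
	struct list_head list;
};

/* Minimal stand-in for the proposed descriptor, LRU-relevant bits only. */
struct swap_desc {
	struct list_head lru;	/* position on the global writeback LRU */
	int backend;		/* which device/backend holds the data */
};

/* Pick the coldest entry without caring which backend owns it. */
static struct swap_desc *swap_lru_peek(struct swap_lru *lru)
{
	if (lru->list.next == &lru->list)
		return NULL;
	return (struct swap_desc *)((char *)lru->list.next -
				    offsetof(struct swap_desc, lru));
}
```

Since the list lives in the descriptor rather than in a zpool backend,
mixing e.g. a zswap pool with a slow disk swapfile on one LRU needs no
backend cooperation.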

Just FYI, there was a similar discussion about an indirection layer a
long time ago:
https://lore.kernel.org/linux-mm/4DA25039.3020700@xxxxxxxxxx/
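For reference, the swap_desc idea quoted above might look roughly like
the sketch below (a hypothetical mock-up, not actual kernel code; the
names, fields, and the tagged-union layout are all my assumptions about
one possible shape of the proposal): page tables store a swap id that
resolves to a swap_desc, which points at either a swapfile slot or a
zswap entry, and also carries the backend-agnostic state.

```c
/* Hypothetical sketch of the proposed swap_desc abstraction.
 * All names and fields are illustrative, not actual kernel code. */
#include <assert.h>

enum swap_backend {
	SWAP_BACKEND_SWAPFILE,	/* page lives in a slot on a swapfile */
	SWAP_BACKEND_ZSWAP,	/* page lives as a compressed zswap object */
};

struct swap_desc {
	enum swap_backend backend;	/* which backend holds the data */
	union {
		unsigned long swapfile_entry;	/* type/offset swapfile slot */
		void *zswap_entry;		/* compressed object in zswap */
	};
	void *swapcache;	/* backend-agnostic swap cache slot */
	unsigned int swap_count;	/* full count; no continuation code */
};

/* MM code can ask where a page is without knowing backend details. */
static int swap_desc_in_zswap(const struct swap_desc *desc)
{
	return desc->backend == SWAP_BACKEND_ZSWAP;
}
```

The point of the union is that generic MM code only ever sees the
swap_desc; moving a page between zswap and a swapfile just retags the
descriptor, with no change to what page tables store.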
