Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Swap Abstraction "the pony"

Hi Karim,

On Fri, May 17, 2024 at 5:12 AM Karim Manaouil <kmanaouil.dev@xxxxxxxxx> wrote:
>
> On Thu, Mar 07, 2024 at 03:03:44PM +0100, Jan Kara wrote:
> > Frankly as I'm reading the discussions here, it seems to me you are trying
> > to reinvent a lot of things from the filesystem space :) Like block
> > allocation with reasonably efficient fragmentation prevention, transparent
> > data compression (zswap), hierarchical storage management (i.e., moving
> > data between different backing stores), efficient way to get from
> > VMA+offset to the place on disk where the content is stored. Sure you still
> > don't need a lot of things modern filesystems do like permissions,
> > directory structure (or even more complex namespacing stuff), all the stuff
> > achieving fs consistency after a crash, etc. But still what you need is a
> > notable portion of what filesystems do.
> >
> > So maybe it would be time to implement swap as a proper filesystem? Or even
> > better we could think about factoring out these bits out of some existing
> > filesystem to share code?
>
> I definitely agree with you on this point. I had the exact same
> thought reading the discussion.
>
> Filesystems have already implemented a lot of solutions for
> fragmentation avoidance that are more appropriate for slow storage
> media.
>

Swap and file systems have very different requirements, usage
patterns, and IO patterns.

> Also, writing chunks of arbitrary size (e.g. to directly write
> compressed pages) means slab-based management of swap space might not
> be ideal and will waste space to internal fragmentation. Compaction
> on slow media is also obviously harder and slower to implement than
> doing it in memory. You can do it in memory as well, but that comes
> at the expense of more I/O.

I am not sure I follow what you are describing above. The current swap
entry is not allocated from a slab. The compressed swap backends, zswap
and zram, both use zsmalloc to store compressed pages.
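
To make the distinction concrete, here is a minimal user-space sketch
of the idea: a swap entry is just a packed integer value (swap
type/device bits plus an offset into that swap area), not an object
allocated from a slab cache. The shift, mask, and helper names below
are made up for illustration and assume a 64-bit unsigned long; the
kernel's real encoding lives in include/linux/swapops.h and differs in
detail.

/*
 * Illustrative model of a swap entry as a single packed word,
 * not a slab-allocated object.  Constants are made up; see
 * include/linux/swapops.h for the real layout.
 */
#include <stdio.h>

typedef unsigned long swp_entry_val;

#define SWP_TYPE_SHIFT  58
#define SWP_OFFSET_MASK ((1UL << SWP_TYPE_SHIFT) - 1)

static swp_entry_val make_entry(unsigned int type, unsigned long offset)
{
        return ((swp_entry_val)type << SWP_TYPE_SHIFT) |
               (offset & SWP_OFFSET_MASK);
}

static unsigned int entry_type(swp_entry_val e)
{
        return e >> SWP_TYPE_SHIFT;
}

static unsigned long entry_offset(swp_entry_val e)
{
        return e & SWP_OFFSET_MASK;
}

int main(void)
{
        /* Swap device 1, page slot 4096: the whole handle is one word. */
        swp_entry_val e = make_entry(1, 4096);

        printf("type=%u offset=%lu size=%zu bytes\n",
               entry_type(e), entry_offset(e), sizeof(e));
        return 0;
}

On a 64-bit machine this prints type=1 offset=4096 size=8 bytes, i.e.
the entire per-entry handle fits in a single word.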

>
> It sounds to me that all the problems above can be solved with an
> extent-based filesystem implementation of swap.

It looks good on paper; once you actually try to implement it, you
will run into a lot of new obstacles.

One challenging aspect is that the current swap backend has a very low
per-swap-entry memory overhead: about 1 byte (swap_map), 2 bytes (swap
cgroup), and 8 bytes (swap cache pointer). The inode struct is more
than 64 bytes per file, which is a big jump if you map each swap entry
to a file. If you map more than one swap entry to a file, then you
need to track the mapping from file offset to swap entry, plus the
reverse lookup from swap entry to file and offset. Whichever way you
cut it, this significantly increases the per-swap-entry memory
overhead.
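
As a rough back-of-the-envelope illustration (the 64 GiB swap size is
an arbitrary example, not something from this thread; the per-entry
numbers are the ones above, with 64 bytes taken as a lower bound for
an inode):

/*
 * Back-of-the-envelope comparison of per-swap-entry metadata
 * overhead, using the per-entry numbers from the paragraph above.
 * The 64 GiB swap size is an arbitrary example.
 */
#include <stdio.h>

int main(void)
{
        const unsigned long swap_bytes = 64UL << 30;        /* 64 GiB of swap */
        const unsigned long entries    = swap_bytes / 4096; /* 4 KiB pages    */

        /* Current backend: swap_map + swap cgroup id + swap cache pointer. */
        const unsigned long per_entry_now   = 1 + 2 + 8;
        /* One inode per swap slot: >64 bytes each, taken as a lower bound.  */
        const unsigned long per_entry_inode = 64;

        printf("entries:           %lu\n", entries);
        printf("current backend:   %lu MiB\n", (entries * per_entry_now)   >> 20);
        printf("inode per slot: >= %lu MiB\n", (entries * per_entry_inode) >> 20);
        return 0;
}

For the 16M entries in this example that is roughly 176 MiB of
metadata today versus at least 1 GiB with one inode per swap slot,
before even counting the offset-to-entry mapping structures.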

Chris




