Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Swap Abstraction "the pony"

Hi Zi,

On Thu, May 16, 2024 at 8:04 AM Zi Yan <ziy@xxxxxxxxxx> wrote:
>
> On 14 Mar 2024, at 5:03, Jan Kara wrote:
>
> > On Fri 08-03-24 05:17:46, Barry Song wrote:
> >> On Fri, Mar 8, 2024 at 5:06 AM Jared Hulbert <jaredeh@xxxxxxxxx> wrote:
> >>>
> >>> On Thu, Mar 7, 2024 at 9:35 AM Jan Kara <jack@xxxxxxx> wrote:
> >>>>
> >>>> Well, but then if you fill in space of a particular order and need to swap
> >>>> out a page of that order what do you do? Return ENOSPC prematurely?
> >>>>
> >>>> Frankly as I'm reading the discussions here, it seems to me you are trying
> >>>> to reinvent a lot of things from the filesystem space :) Like block
> >>>> allocation with reasonably efficient fragmentation prevention, transparent
> >>>> data compression (zswap), hierarchical storage management (i.e., moving
> >>>> data between different backing stores), efficient way to get from
> >>>> VMA+offset to the place on disk where the content is stored. Sure you still
> >>>> don't need a lot of things modern filesystems do like permissions,
> >>>> directory structure (or even more complex namespacing stuff), all the stuff
> >>>> achieving fs consistency after a crash, etc. But still what you need is a
> >>>> notable portion of what filesystems do.
> >>>>
> >>>> So maybe it would be time to implement swap as a proper filesystem? Or even
> >>>> better we could think about factoring out these bits out of some existing
> >>>> filesystem to share code?
> >>>
> >>> Yes.  Thank you.  I've been struggling to communicate this.
> >>>
> >>> I'm thinking you can just use existing filesystems as a first step
> >>> with a modest glue layer.  See the branch of this thread where I'm
> >>> babbling on to Chris about this.
> >>>
> >>> "efficient way to get from VMA+offset to place on the disk where
> >>> content is stored"
> >>> You mean treat swapped pages like they were mmap'ed files and use the
> >>> same code paths?  How big of a project is that?  That seems either
> >>> deceptively easy or really hard... I've been away too long and was
> >>> never really good enough to have a clear vision of the scale.
> >>
> >> I don't understand why we need this level of complexity. All we need to
> >> know are the offsets during pageout. After that, the large folio is
> >> destroyed, and all offsets are stored in page table entries (PTEs) or the xarray.
> >> Swap-in doesn't depend on a complex file system; it can make its own
> >> decision on how to swap-in based on the values it reads from PTEs.
> >
> > Well, but once compression chimes in (like with zswap) or if you need to
> > perform compaction on swap space and move swapped out data, things aren't
> > that simple anymore, are they? So as I was reading this thread I had the
> > impression that swap complexity is coming close to a complexity of a
> > (relatively simple) filesystem so I was brainstorming about possibility of
> > sharing some code between filesystems and swap...

There is a session on using a filesystem as the swap back end at LSF/MM.

>
> I think all the complexity comes from the fact that we want to preserve folios as
> a whole and thus need to handle fragmentation issues. But Barry’s approach

Yes, we want to preserve the folio as a whole. The fragmentation is
in the swap entries on the swap file. These two are at two different
layers. It should be possible to keep the folio as a whole and write out
fragmented swap entries.
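
To make the two layers concrete, here is a minimal sketch (hypothetical
names, plain userspace C just for illustration, not actual kernel code)
of a descriptor that keeps the folio whole while letting each subpage
record its own, possibly non-contiguous, swap slot:

#define SWAP_SUBPAGES 16  /* e.g. a 64KB folio made of 4KB pages */

/*
 * Hypothetical compound swap descriptor: the folio stays one object
 * in memory, but each subpage records its own slot in the swap file,
 * so the slots need not be contiguous.
 */
struct compound_swap {
    unsigned long slots[SWAP_SUBPAGES];
};

/* Take whatever free slots the allocator has, contiguous or not. */
static void alloc_scattered_slots(struct compound_swap *cs, int nr,
                                  unsigned long (*get_free_slot)(void))
{
    for (int i = 0; i < nr; i++)
        cs->slots[i] = get_free_slot();
}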

> is trying to get us away from it. The downside is what you mentioned
> about compression, since 64KB should give a better compression ratio than
> 4KB. For swap without compression, we probably can use Barry’s
> approach to keep everything simple, just split all folios when they go
> into swap, but I am not sure if there is disk throughput loss.

I have some ideas about writing out a large folio to non-contiguous
swap entries without breaking up the folio. It will have the same effect,
in terms of swap entries and disk writes, as Barry's folio
break-up approach. We can still track those fragmented swap
entries as belonging to the compound swap entry. That is on the last page
of my talk slides (not the reference slides).
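
Roughly, the writeback side of that could look like the sketch below
(pwrite() standing in for the real bio submission path; everything here
is hypothetical, not the actual proposal):

#include <unistd.h>

#define PAGE_SZ 4096

/*
 * Write one whole folio to scattered slots: the in-memory buffer
 * stays contiguous, only the on-disk placement is fragmented.
 */
static int write_folio_scattered(int swap_fd, const char *folio,
                                 const unsigned long *slots, int nr)
{
    for (int i = 0; i < nr; i++) {
        off_t off = (off_t)slots[i] * PAGE_SZ;
        if (pwrite(swap_fd, folio + (size_t)i * PAGE_SZ,
                   PAGE_SZ, off) != PAGE_SZ)
            return -1;
    }
    return 0;
}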

BTW, having the option to swap in a large folio doesn't mean we
have to swap in large folios all the time. It should be a policy
decision above the swap back end. The swap back end can support large
or small folios as requested.
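
As a sketch of that split (all names made up for illustration): the
policy layer above picks the order, and the back end just honours it:

/*
 * Hypothetical policy hook above the swap back end: the policy picks
 * the folio order to swap in; the back end serves either size.
 */
enum swapin_policy { SWAPIN_SMALL_ONLY, SWAPIN_LARGE_IF_POSSIBLE };

static int choose_swapin_order(enum swapin_policy policy,
                               int folio_order, int slots_ready)
{
    if (policy == SWAPIN_SMALL_ONLY || !slots_ready)
        return 0;               /* fall back to 4KB pages */
    return folio_order;         /* bring the whole folio back */
}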

For zram, I suppose it is possible to modify zram to compress
non-contiguous io vectors into one internal compressed buffer
in zsmalloc. If it is read back using the same io vectors, it will
return the same data.
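
Something like the sketch below, gathering the vector into one buffer
before compressing (zlib is only a stand-in for zram's compressor here,
and all the names are made up):

#include <string.h>
#include <zlib.h>

#define PAGE_SZ 4096

/*
 * Gather non-contiguous pages into one buffer and compress them as a
 * single unit; decompressing and scattering through the same vector
 * on read gives the same data back.
 */
static int compress_iovec(unsigned char *const pages[], int nr,
                          unsigned char *out, uLongf *out_len)
{
    unsigned char gather[16 * PAGE_SZ];   /* assume nr <= 16 */

    for (int i = 0; i < nr; i++)
        memcpy(gather + (size_t)i * PAGE_SZ, pages[i], PAGE_SZ);
    /* caller sets *out_len to the capacity of out beforehand */
    return compress(out, out_len, gather, (uLong)nr * PAGE_SZ);
}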

Chris

> For zswap, there will be a design tradeoff between a better compression ratio
> and complexity.
>
> Best Regards,
> Yan, Zi




