Re: [LSF/MM/BPF TOPIC] Swap Abstraction / Native Zswap

On Tue, Feb 21, 2023 at 3:34 PM Yang Shi <shy828301@xxxxxxxxx> wrote:
>
> On Tue, Feb 21, 2023 at 11:46 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> >
> > On Tue, Feb 21, 2023 at 11:26 AM Yang Shi <shy828301@xxxxxxxxx> wrote:
> > >
> > > On Tue, Feb 21, 2023 at 10:56 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, Feb 21, 2023 at 10:40 AM Yang Shi <shy828301@xxxxxxxxx> wrote:
> > > > >
> > > > > Hi Yosry,
> > > > >
> > > > > Thanks for proposing this topic. I had been thinking about this before
> > > > > but didn't make much progress due to some other distractions, and I
> > > > > have a couple of follow-up questions about your design. Please see the
> > > > > inline comments below.
> > > >
> > > > Great to see interested folks, thanks!
> > > >
> > > > >
> > > > >
> > > > > On Sat, Feb 18, 2023 at 2:39 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > Hello everyone,
> > > > > >
> > > > > > I would like to propose a topic for the upcoming LSF/MM/BPF in May
> > > > > > 2023 about swap & zswap (hope I am not too late).
> > > > > >
> > > > > > ==================== Intro ====================
> > > > > > Currently, using zswap is dependent on swapfiles in an unnecessary
> > > > > > way. To use zswap, you need a swapfile configured (even if the space
> > > > > > will not be used) and zswap is restricted by its size. When pages
> > > > > > reside in zswap, the corresponding swap entry in the swapfile cannot
> > > > > > be used, and is essentially wasted. We also go through unnecessary
> > > > > > code paths when using zswap, such as finding and allocating a swap
> > > > > > entry on the swapout path, or readahead in the swapin path. I am
> > > > > > proposing a swapping abstraction layer that would allow us to remove
> > > > > > zswap's dependency on swapfiles. This can be done by introducing a
> > > > > > data structure between the actual swapping implementation (swapfiles,
> > > > > > zswap) and the rest of the MM code.
> > > > > >
> > > > > > ==================== Objective ====================
> > > > > > Enabling the use of zswap without a backing swapfile, which makes
> > > > > > zswap useful for a wider variety of use cases. Also, when zswap is
> > > > > > used with a swapfile, the pages in zswap do not use up space in the
> > > > > > swapfile, so the overall swapping capacity increases.
> > > > > >
> > > > > > ==================== Idea ====================
> > > > > > Introduce a data structure, which I currently call a swap_desc, as an
> > > > > > abstraction layer between swapping implementation and the rest of MM
> > > > > > code. Page tables & page caches would store a swap id (encoded as a
> > > > > > swp_entry_t) instead of directly storing the swap entry associated
> > > > > > with the swapfile. This swap id maps to a struct swap_desc, which acts
> > > > > > as our abstraction layer. All MM code not concerned with swapping
> > > > > > details would operate in terms of swap descs. The swap_desc can point
> > > > > > to either a normal swap entry (associated with a swapfile) or a zswap
> > > > > > entry. It can also include all non-backend-specific operations, such
> > > > > > as the swapcache (which would be a simple pointer in the swap_desc),
> > > > > > swap counting, etc. This creates a clean, clear abstraction layer
> > > > > > between MM code and the actual swapping implementation.
> > > > >
> > > > > How will the swap_desc be allocated? Dynamically or preallocated? Is
> > > > > it 1:1 mapped to the swap slots on swap devices (whatever backs it,
> > > > > for example zswap, a swap partition, a swapfile, etc.)?
> > > >
> > > > I imagine swap_desc's would be dynamically allocated when we need to
> > > > swap something out. When allocated, a swap_desc would either point to
> > > > a zswap_entry (if available), or a swap slot otherwise. In this case,
> > > > it would be 1:1 mapped to swapped out pages, not the swap slots on
> > > > devices.
> > >
> > > It makes sense for it to be 1:1 mapped to swapped-out pages if the
> > > swapfile is used as the backend of zswap.
> > >
> > > >
> > > > I know that it might not be ideal to make allocations on the reclaim
> > > > path (although it would be a small-ish slab allocation so we might be
> > > > able to get away with it), but otherwise we would have statically
> > > > allocated swap_desc's for all swap slots on a swap device, even unused
> > > > ones, which I imagine is too expensive. Also for things like zswap, it
> > > > doesn't really make sense to preallocate at all.
> > >
> > > Yeah, it is not perfect to allocate memory in the reclamation path. We
> > > do have such cases, but the fewer the better IMHO.
> >
> > Yeah. Perhaps we can preallocate a pool of swap_desc's on top of the
> > slab cache; I don't know if that makes sense, or if there is a way to
> > tell slab to proactively refill a cache.
> >
> > I am open to suggestions here. I don't think we should/can preallocate
> > the swap_desc's, and we cannot completely eliminate the allocations in
> > the reclaim path. We can only try to minimize them through caching,
> > etc. Right?
>
> Yeah, preallocation would not work. But I'm not sure whether caching
> works well for this case either. I suppose you were thinking about
> something similar to the pcp approach: when the number of available
> elements drops below a threshold, refill the cache. That should work
> well under moderate memory pressure, but I'm not sure how it would
> behave under severe memory pressure, particularly when anonymous
> memory dominates the memory usage. Or maybe dynamic allocation works
> well and we are just over-engineering this.

Yeah it would be interesting to look into whether the swap_desc
allocation will be a bottleneck. Definitely something to look out for.
I share your thoughts about wanting to do something about it but also
not wanting to over-engineer it.
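
If it does turn out to be a problem, one direction (hand-wavy, all
names made up) would be a dedicated slab cache, with the reclaim path
allocating non-blocking, and possibly a pcp-style reserve refilled from
a non-reclaim context sitting in front of it. Something along these
lines, just to illustrate the shape of it:

/*
 * Sketch only: swap_desc's come from their own slab cache, and the
 * reclaim path allocates with a non-blocking gfp mask. A per-CPU
 * reserve, refilled below some watermark outside of reclaim, could be
 * layered on top if this allocation shows up in profiles.
 */
static struct kmem_cache *swap_desc_cache;

static struct swap_desc *swap_desc_alloc(void)
{
        return kmem_cache_zalloc(swap_desc_cache, GFP_NOWAIT | __GFP_NOWARN);
}

static int __init swap_desc_cache_init(void)
{
        swap_desc_cache = KMEM_CACHE(swap_desc, 0);
        return swap_desc_cache ? 0 : -ENOMEM;
}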

>
> >
> > >
> > > >
> > > > WDYT?
> > > >
> > > > >
> > > > > >
> > > > > > ==================== Benefits ====================
> > > > > > This work enables using zswap without a backing swapfile and increases
> > > > > > the swap capacity when zswap is used with a swapfile. It also creates
> > > > > > a separation that allows us to skip code paths that don't make sense
> > > > > > in the zswap path (e.g. readahead). We get to drop zswap's rbtree,
> > > > > > which might result in better performance (fewer lookups, less lock
> > > > > > contention).
> > > > > >
> > > > > > The abstraction layer also opens the door for multiple cleanups (e.g.
> > > > > > removing swapper address spaces, removing swap count continuation
> > > > > > code, etc). Another nice cleanup that this work enables would be
> > > > > > separating the overloaded swp_entry_t into two distinct types: one for
> > > > > > things that are stored in page tables / caches, and one for actual
> > > > > > swap entries. In the future, we can potentially further optimize how
> > > > > > we use the bits in the page tables instead of sticking everything
> > > > > > into the current type/offset format.
> > > > > >
> > > > > > Another potential win here is swapoff, which could become more
> > > > > > practical by directly scanning all swap_desc's instead of walking
> > > > > > page tables and shmem page caches.
> > > > > >
> > > > > > Overall zswap becomes more accessible and available to a wider range
> > > > > > of use cases.
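
As a side note on the swp_entry_t split mentioned above, the rough
shape I am imagining (invented names, for illustration only) is two
distinct wrapper types, so the compiler keeps us honest about which one
a given function deals with:

/*
 * Sketch only: page tables / page caches hold an opaque id that
 * resolves to a swap_desc; only the swapfile code deals in the
 * traditional (type, offset) encoding.
 */
typedef struct {
        unsigned long val;      /* resolves to a struct swap_desc */
} swp_id_t;

typedef struct {
        unsigned long val;      /* today's type + offset encoding */
} swp_slot_t;
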
> > > > >
> > > > > How will you handle zswap writeback? Zswap may write back to the
> > > > > backing swap device IIUC. Assuming you have both zswap and a swapfile,
> > > > > they are separate devices with this design, right? If so, is the
> > > > > swapfile still the writeback target of zswap? And if it is, what
> > > > > happens if the swapfile is full?
> > > >
> > > > When we write back from zswap, we allocate a swap slot in the swapfile
> > > > and switch the swap_desc to point to that instead. The process would be
> > > > transparent to the rest of MM (page tables, page cache, etc). If the
> > > > swapfile is full, then there's really nothing we can do: reclaim fails
> > > > and we start OOMing. I imagine this is the same behavior as today when
> > > > swap is full; the difference would be that we have to fill both zswap
> > > > AND the swapfile to get to the OOMing point, so the overall swapping
> > > > capacity increases.
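
To spell out the mechanism I have in mind for that writeback step, a
very rough sketch is below; the helper names and the flag are made up,
and locking/refcounting are omitted entirely:

/*
 * Sketch only: redirect a zswap-backed swap_desc to a freshly
 * allocated swapfile slot. Nothing on the MM side (page tables,
 * swapcache) needs to change, because it only ever sees the swap id.
 */
static int swap_desc_writeback(struct swap_desc *desc)
{
        swp_entry_t slot;

        /* Try to find a free slot in the backing swapfile. */
        slot = swap_slot_alloc();               /* hypothetical helper */
        if (!slot.val)
                return -ENOMEM;                 /* swapfile full */

        /* Decompress and write the page out to that slot (not shown). */

        /* Switch the descriptor over to the swapfile backend. */
        zswap_entry_free(desc->zswap);          /* hypothetical helper */
        desc->slot = slot;
        desc->flags &= ~SWAP_DESC_ZSWAP;        /* hypothetical flag */

        return 0;
}
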
> > >
> > > When zswap is full but the swapfile is not yet, will we try to write
> > > back from zswap to the swapfile to make more room for zswap, or just
> > > swap out to the swapfile directly?
> > >
> >
> > The current behavior is that we swap to the swapfile directly in this
> > case, which is far from ideal as we break LRU ordering by skipping
> > zswap. I believe this should be addressed, but not as part of this
> > effort. The work to make zswap respect the LRU ordering by writing
> > back from zswap to make room can be done orthogonally to this effort.
> > I believe Johannes was looking into this at some point.
>
> Other than breaking LRU ordering, I'm also concerned about performance
> potentially deteriorating when writing to / reading from the swapfile
> once zswap is full. Maintaining the zswap->swapfile order should give
> userspace more consistent performance.

Right. This happens today anyway AFAICT: when zswap is full we just
fall back to writing to the swapfile, so this would not be a behavior
change. I agree it should be addressed anyway.

>
> But anyway, I don't have data from real-life workloads to back the
> above points. If you or Johannes could share some real data, that
> would be very helpful for making these decisions.

I actually don't, since we mostly run zswap without a backing
swapfile. Perhaps Johannes (or anyone else using zswap with a backing
swapfile) might have some data on this.

>
> >
> > > >
> > > > >
> > > > > Anyway I'm interested in attending the discussion for this topic.
> > > >
> > > > Great! Looking forward to discuss this more!
> > > >
> > > > >
> > > > > >
> > > > > > ==================== Cost ====================
> > > > > > The obvious downside of this is added memory overhead, specifically
> > > > > > for users that use swapfiles without zswap. Instead of paying one byte
> > > > > > (swap_map) for every potential page in the swapfile (+ swap count
> > > > > > continuation), we pay the size of the swap_desc for every page that is
> > > > > > actually in the swapfile, which I estimate to be roughly 24 bytes, or
> > > > > > about 0.6% of swapped-out memory. The overhead only scales with pages
> > > > > > actually swapped out. For zswap users, it should be
> > > > > > a win (or at least even) because we get to drop a lot of fields from
> > > > > > struct zswap_entry (e.g. rbtree, index, etc).
> > > > > >
> > > > > > Another potential concern is readahead. With this design, we have no
> > > > > > way to get a swap_desc given a swap entry (type & offset). We would
> > > > > > need to maintain a reverse mapping, adding a little bit more overhead,
> > > > > > or search all swapped out pages instead :). A reverse mapping might
> > > > > > pump the per-swapped page overhead to ~32 bytes (~0.8% of swapped out
> > > > > > memory).
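
(For reference, the percentages above assume 4 KiB pages: 24 / 4096 is
roughly 0.6%, and 32 / 4096 is roughly 0.8% of swapped-out memory.)
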
> > > > > >
> > > > > > ==================== Bottom Line ====================
> > > > > > It would be nice to discuss the potential here and the tradeoffs. I
> > > > > > know that other folks using zswap (or interested in using it) may find
> > > > > > this very useful. I am sure I am missing some context on why things
> > > > > > are the way they are, and that there are some obvious holes in my story.
> > > > > > Looking forward to discussing this with anyone interested :)
> > > > > >
> > > > > > I think Johannes may be interested in attending this discussion, since
> > > > > > a lot of ideas here are inspired by discussions I had with him :)



