Re: [PATCH 1/2] zswap: implement a second chance algorithm for dynamic zswap shrinker

On Mon, Jul 29, 2024 at 8:39 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> Seek is a fixed coefficient for the scan rate.
>
> We want to slow writeback when recent zswapouts dominate the zswap
> pool (expanding or thrashing), and speed it up when recent entries
> make up a small share of the pool (stagnating).
>
> This is what the second chance accomplishes.

Wow, this is something that I did not even consider. Thanks for
pointing this out, Johannes.

Tuning shrinker->seeks lets you adjust the writeback pacing as a fixed
ratio of the pool size. When the pool is static (no or few zswpins and
zswpouts), the two approaches behave similarly on average. But with
concurrent activity (pages coming in and going out), their dynamics
can diverge.
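
For concreteness, this is roughly how ->seeks factors into the scan
target in the core shrinker code (simplified from do_shrink_slab();
rounding and batching details omitted):

/*
 * Sketch only: larger seeks => smaller scan target, but always the
 * same fixed fraction of the pool at a given reclaim priority.
 */
static unsigned long scan_target(unsigned long freeable, int priority,
				 unsigned int seeks)
{
	unsigned long delta = freeable >> priority;	/* share of the pool */

	delta *= 4;
	delta /= seeks;		/* DEFAULT_SEEKS == 2 */
	return delta;
}

No matter how we tune seeks, at a given priority the result is a
constant fraction of the current pool size - it cannot tell a churning
pool from a stagnant one.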

Second chance gives you dynamics that depend on recent pool activity.
Recent zswpouts are protected by virtue of their reference bits (they
get another chance, which sticks if the entry is used again soon), and
so are pages concurrently zswapped in, whereas stale objects that the
shrinker has already touched once are written back immediately. IOW,
all of the above activities (zswpin, zswpout, reclaim pressure)
harmonize seamlessly to adjust the effective writeback rate.
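
As a toy illustration of that decision (a userspace mock-up with
made-up types and names - not the actual patch or the zswap data
structures):

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a zswap entry: only the bit that matters here. */
struct entry {
	int id;
	bool referenced;	/* set on zswpout (and again on reuse) */
	struct entry *next;
};

/*
 * One shrinker pass over up to @nr_to_scan entries, oldest first.
 * Referenced entries lose their bit and rotate to the tail (the
 * "second chance"); unreferenced ones get written back.
 */
static struct entry *scan(struct entry *head, int nr_to_scan)
{
	struct entry *tail;

	for (tail = head; tail && tail->next; tail = tail->next)
		;

	while (nr_to_scan-- && head) {
		struct entry *e = head;

		head = e->next;
		e->next = NULL;

		if (e->referenced) {
			e->referenced = false;
			if (!head) {
				head = e;	/* e was the only entry */
				tail = e;
			} else {
				tail->next = e;
				tail = e;
			}
			printf("entry %d: second chance, rotated\n", e->id);
		} else {
			/* the real code would write back and free here */
			printf("entry %d: stale, written back\n", e->id);
		}
	}
	return head;
}

int main(void)
{
	struct entry e2 = { .id = 2, .referenced = false };	/* aged once */
	struct entry e1 = { .id = 1, .referenced = true, .next = &e2 };

	scan(&e1, 2);	/* e1 gets rotated, e2 gets written back */
	return 0;
}

The scan rate stays whatever vmscan asks for, but how much of each
scan actually turns into writeback depends on how many of the scanned
entries were recently stored or reused - which is exactly the dynamic
adjustment described above.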

Without any additional heuristics (old or new), increasing seeks (i.e.
decreasing the writeback rate) by itself only has a static effect, and
definitely does not accomplish the aforementioned dynamic writeback
rate adjustment. Now, we can (and did try to) mimic the above behavior
somewhat with the protection size scheme: only return the unprotected
size, carefully increase the protection on zswpout and swpin (so that
recent zswpouts are not considered), carefully prevent the shrinker
from reclaiming into the protected area, etc. But it's incredibly
brittle - with all these hacks, it becomes even more complicated and
unintuitive than the second chance algorithm. If it were performing
well, then sure, but it's not. Might as well do the simpler thing? :)

Besides, the problem with the haphazard aging (i.e. protection
decaying) remains - at which point do we decay, and by how much?
Compare this with the new second chance scheme, which gives you a
natural aging mechanism and elegantly adjusts itself to reclaim/memory
pressure.
