On Thu, Oct 12, 2023 at 10:13:16PM +0800, 贺中坤 wrote:
> Hi Nhat, thanks for your detailed reply.
>
> > We're currently trying to solve this exact problem. Our approach is to
> > add a shrinker that automatically shrinks the size of the zswap pool:
> >
> > https://lore.kernel.org/lkml/20230919171447.2712746-1-nphamcs@xxxxxxxxx/
> >
> > It is triggered on memory pressure, and can perform reclaim in a
> > workload-specific manner.
> >
> > I'm currently working on v3 of this patch series, but in the meantime,
> > could you take a look and see if it will address your issues as well?
> >
> > Comments and suggestions are always welcome, of course :)
>
> Thanks, I've seen both patches. But we hope to be able to reclaim memory
> in advance, regardless of memory pressure, like memory.reclaim in memcg,
> so we can offload memory in different tiers.

Can you use memory.reclaim itself for that? With Nhat's shrinker, it
should move pages through the whole pipeline (LRU -> zswap -> swap).
A rough usage sketch is at the end of this mail.

> Thanks for your review, we should update the store time when it was
> loaded. But it confused me: there are two copies of the same page in
> memory (compressed and uncompressed) after faulting in a page from zswap
> if 'zswap_exclusive_loads_enabled' was disabled. I didn't notice any
> difference when turning that option on or off, because frontswap_ops has
> been removed and there is no frontswap_map anymore. Sorry, am I missing
> something?

In many instances, swapins already free the swap slot through the
generic swap code (see should_try_to_free_swap()). It matters for
shared pages, or for swapcaching read-only data when swap isn't full -
it could be that this isn't the case in your tests.
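
For reference, the check is roughly the following (a simplified sketch
of the swapin-time helper, paraphrased from memory, so details may
differ between kernel versions):

/*
 * Sketch of should_try_to_free_swap(): a swapped-in folio gets its
 * swap slot freed when swap is getting full, when the mapping is
 * mlocked, or when a write fault makes us the likely exclusive user.
 */
static inline bool should_try_to_free_swap(struct folio *folio,
					   struct vm_area_struct *vma,
					   unsigned int fault_flags)
{
	if (!folio_test_swapcache(folio))
		return false;

	if (mem_cgroup_swap_full(folio) || (vma->vm_flags & VM_LOCKED) ||
	    folio_test_mlocked(folio))
		return true;

	/*
	 * On a write fault to a non-KSM folio with no other references,
	 * we're likely the exclusive user, so drop the swapcache copy
	 * (and with it the slot) instead of keeping both around.
	 */
	return (fault_flags & FAULT_FLAG_WRITE) && !folio_test_ksm(folio) &&
	       folio_ref_count(folio) == 2;
}

Which is why the exclusive-load setting only shows up in the remaining
cases: shared pages, or read-only data kept in the swapcache while swap
isn't full.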
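
And on the memory.reclaim point above, here is a minimal userspace
sketch of driving proactive reclaim for one cgroup, independent of
memory pressure; the cgroup path and the amount are made-up examples:

/*
 * Ask the kernel to proactively reclaim from a cgroup (v2) by writing
 * a size to its memory.reclaim file. Path and amount are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/workload/memory.reclaim";
	const char *request = "512M";	/* amount to try to reclaim */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open memory.reclaim");
		return 1;
	}
	/* The write can fail with EAGAIN when less than the requested
	 * amount could be reclaimed. */
	if (write(fd, request, strlen(request)) < 0)
		perror("write memory.reclaim");
	close(fd);
	return 0;
}

With the zswap shrinker in place, that one knob should be enough to
walk cold pages through LRU -> zswap -> swap ahead of any pressure.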