On Fri, Nov 17, 2023 at 1:56 AM Zhongkun He <hezhongkun.hzk@xxxxxxxxxxxxx> wrote:
>
> Hi Chris, thanks for your feedback. I have the same concerns,
> maybe we should just move the zswap_invalidate() out of batches,
> as Yosry mentioned above.

As I replied in the previous email, I just want to better understand the
other side effects of the change.

To me, this patch is actually freeing memory that does not require an
actual page IO write from zswap, which means the memory comes from some
kind of cache. It would be nice if we did not complicate the writeback
path further. Instead, we could drop that memory from the relevant
caches when needed. I assume those caches are doing something useful in
the common case; if not, we should have a patch to remove those caches
instead. I am not sure how big a mess it would be to separate the
writeback path from dropping the caches.

While you are here, I have some questions for you. Can you help me
understand how much memory this patch can free? For example, are we
talking about a few pages or a few GB? Where does the freed memory come
from?

If the memory comes from the zswap entry structs, then due to slab
allocator fragmentation it would take a lot of zswap entries to reclaim
a meaningful amount of memory from the slab allocator.

If the memory comes from the swap cache pages, that would be much more
meaningful. But that is not what this patch is doing, right?

Chris
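
(As a rough illustration of the slab-fragmentation point above, here is
a minimal userspace sketch of the arithmetic. The 64-byte zswap_entry
size and 4K slab page size are assumptions for the sake of the example,
not values taken from the kernel or from this patch.)

#include <stdio.h>

int main(void)
{
	unsigned long entry_size = 64;		/* assumed sizeof(struct zswap_entry) */
	unsigned long slab_page = 4096;		/* assumed slab page size */
	unsigned long nr_entries = 1UL << 20;	/* say, one million entries invalidated */

	/* Upper bound: every byte of every freed entry is actually given back. */
	printf("best case: %lu MiB\n", nr_entries * entry_size >> 20);

	/*
	 * In practice a slab page is only returned to the page allocator once
	 * all of the entries packed into it are freed, so scattered frees
	 * reclaim far less than the upper bound.
	 */
	printf("entries per slab page: %lu\n", slab_page / entry_size);

	return 0;
}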