[..]
> This shows that in all cases, reclaim_high() is called only from the return
> path to user mode after handling a page-fault.

I am sorry I haven't been keeping up with this thread; I don't have a lot of
capacity right now.

If my understanding is correct, the summary of the problem we are observing
here is that with high concurrency (70 processes), we see worse system time,
worse throughput, and more memory_high events with zswap than with SSD swap.
This is true (to varying degrees) for 4K or mTHP, and with or without
charging zswap compressed memory. Did I get that right? I saw you also
mentioned that reclaim latency is directly correlated with more memory_high
events.

Is it possible that with SSD swap, because we wait for IO during reclaim,
other processes get a chance to allocate and free the memory they need, while
with zswap, because everything is synchronous, all processes try to allocate
their memory at the same time, resulting in higher reclaim rates?

IOW, maybe with zswap all the processes try to allocate their memory at the
same time, so the total amount of memory needed at any given instant is much
higher than memory.high, and we keep producing memory_high events and
reclaiming. If 70 processes all require 1G at the same time, then we need 70G
of memory at once, and we will keep thrashing pages in and out of zswap.
With SSD swap, due to the waits imposed by IO, the allocations are more
spread out and more serialized, so the amount of memory needed at any given
instant is lower, resulting in less reclaim activity and ultimately faster
overall execution.

Could you please describe what the processes are doing? Are they allocating
memory and holding on to it, or immediately freeing it? Do you have
visibility into when each process allocates and frees memory?
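To make the arithmetic behind my hypothesis concrete, here is a minimal
sketch. The numbers (70 processes, 1G each, and the memory.high value) are
illustrative, taken from the scenario above rather than from measurements,
and the two extremes (fully concurrent vs. fully serialized) are of course
idealizations:

```python
def peak_demand(n_procs: int, bytes_each: int, concurrent: bool) -> int:
    """Instantaneous memory demand of the workload.

    In the fully concurrent extreme (zswap-like: reclaim is synchronous,
    nobody sleeps on IO), all processes hold their working set at once.
    In the fully serialized extreme (SSD-like: each process is blocked on
    swap IO in turn), only one working set is resident at a time.
    """
    return n_procs * bytes_each if concurrent else bytes_each

GIB = 1 << 30
MEMORY_HIGH = 16 * GIB  # hypothetical memory.high for the cgroup

# zswap-like case: 70G demanded at once, far above memory.high,
# so the cgroup keeps breaching the limit and reclaiming.
assert peak_demand(70, GIB, concurrent=True) > MEMORY_HIGH

# SSD-like case: IO waits spread the allocations out, so the
# instantaneous demand stays below memory.high.
assert peak_demand(70, GIB, concurrent=False) < MEMORY_HIGH
```

The real workload sits somewhere between the two extremes; the point is only
that the degree of serialization changes the peak demand by up to a factor
of the process count.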
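For correlating this with the processes' allocation timing: the memory_high
breaches are counted per cgroup in the cgroup v2 memory.events file (the
"high" counter), which uses the flat "key value" per-line format. A small
parser, in case it is useful for sampling that file over time (the sample
content below is made up):

```python
def parse_memory_events(text: str) -> dict[str, int]:
    """Parse cgroup v2 memory.events content ("key value" per line)."""
    return {key: int(val)
            for key, val in (line.split() for line in text.splitlines() if line)}

# Example content in the cgroup v2 flat-keyed format (values are made up);
# in practice you would read /sys/fs/cgroup/<your-cgroup>/memory.events.
sample = "low 0\nhigh 1234\nmax 0\noom 0\noom_kill 0\n"
events = parse_memory_events(sample)
# events["high"] is how many times allocations breached memory.high
```

Sampling the "high" counter once a second alongside per-process RSS would
show whether the breaches cluster at the points where many processes
allocate simultaneously.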