On Fri, Mar 14, 2025 at 5:07 AM Minchan Kim <minchan@xxxxxxxxxx> wrote:
>
> On Thu, Mar 13, 2025 at 04:45:54PM +1300, Barry Song wrote:
> > On Thu, Mar 13, 2025 at 4:09 PM Sergey Senozhatsky
> > <senozhatsky@xxxxxxxxxxxx> wrote:
> > >
> > > On (25/03/12 11:11), Minchan Kim wrote:
> > > > On Fri, Mar 07, 2025 at 08:01:02PM +0800, Qun-Wei Lin wrote:
> > > > > This patch series introduces a new mechanism called kcompressd to
> > > > > improve the efficiency of memory reclaiming in the operating system. The
> > > > > main goal is to separate the tasks of page scanning and page compression
> > > > > into distinct processes or threads, thereby reducing the load on the
> > > > > kswapd thread and enhancing overall system performance under high memory
> > > > > pressure conditions.
> > > > >
> > > > > Problem:
> > > > > In the current system, the kswapd thread is responsible for both
> > > > > scanning the LRU pages and compressing pages into the ZRAM. This
> > > > > combined responsibility can lead to significant performance bottlenecks,
> > > > > especially under high memory pressure. The kswapd thread becomes a
> > > > > single point of contention, causing delays in memory reclaiming and
> > > > > overall system performance degradation.
> > > >
> > > > Isn't it a general problem if the backend for swap is slow (but
> > > > synchronous)? I think zram needs to support asynchronous IO (it could
> > > > introduce multiple threads to compress batched pages) and not declare
> > > > itself a synchronous device for that case.
> > >
> > > The current conclusion is that kcompressd will sit above zram,
> > > because zram is not the only compressing swap backend we have.
>
> Then, how does that handle the file IO case?

I didn't quite catch your question :-)

> >
> > Also, it is not good to hack zram to be aware of whether it is kswapd,
> > direct reclaim, proactive reclaim, or a block device with a mounted
> > filesystem.
>
> Why shouldn't zram be aware of that, instead of just introducing
> queues in zram with multiple compression threads?

My view is the opposite of yours :-) Integrating kswapd, direct
reclaim, etc., into the zram driver would violate layering principles.
zram is purely a block device driver, and how it is used should be
handled separately. Callers have greater flexibility to determine its
usage, similar to how different I/O models exist in user space.

Currently, Qun-Wei's patch checks whether the current thread is
kswapd. If it is, compression is performed asynchronously by threads;
otherwise, it is done in the current thread (a rough sketch of that
dispatch is below). In the future, we may have additional reclaim
threads, such as for DAMON or MADV_PAGEOUT.

> >
> > So I am thinking of something as below:
> >
> > page_io.c
> >
> > if (sync_device or zswap_enabled())
> >         schedule swap_writepage to a separate per-node thread
>
> I am not sure it's a good idea to mix features that solve problems at
> different layers. That wouldn't be only a swap problem. Such
> parallelism under the device is a common technique these days, and it
> would help file IO cases.

zswap and zram share the same needs, and handling this in page_io can
benefit both through common code (see the second sketch below). It is
up to the callers to decide the I/O model.

I agree that "parallelism under the device" is a common technique, but
our case is different: the device achieves parallelism with offload
hardware, whereas we rely on CPUs, which can be scarce. These threads
may also preempt CPUs that are critically needed by other
non-compression tasks, and burst power consumption can sometimes be
difficult to control.
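To make that concrete, here is a minimal sketch of the dispatch I
described above. kcompressd_queue_folio() is an invented name for
illustration only, not the helper in Qun-Wei's actual series:

/*
 * Sketch only: kcompressd_queue_folio() is a hypothetical helper that
 * hands a folio to a per-node compression thread and returns false if
 * it cannot (e.g. the queue is full).
 */
static void swap_writepage_dispatch(struct folio *folio,
				    struct writeback_control *wbc)
{
	/* kswapd hands the folio off so it can keep scanning the LRU */
	if (current_is_kswapd() && kcompressd_queue_folio(folio))
		return;

	/* direct/proactive reclaim: compress in the caller's context */
	__swap_writepage(folio, wbc);
}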
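And the page_io.c idea quoted above could look roughly like the sketch
below, assuming a per-node unbound workqueue; swap_compress_wq and
struct swap_out_work are invented names, not from any posted patch:

struct swap_out_work {
	struct work_struct work;
	struct folio *folio;
	struct writeback_control wbc;	/* copied: the caller's wbc is on-stack */
};

static struct workqueue_struct *swap_compress_wq;	/* WQ_UNBOUND */

static void swap_out_workfn(struct work_struct *work)
{
	struct swap_out_work *w =
		container_of(work, struct swap_out_work, work);

	__swap_writepage(w->folio, &w->wbc);
	kfree(w);
}

static void swap_writepage_async(struct folio *folio,
				 struct writeback_control *wbc)
{
	struct swap_out_work *w = kmalloc(sizeof(*w), GFP_NOIO);

	if (!w) {
		__swap_writepage(folio, wbc);	/* fall back to sync */
		return;
	}
	INIT_WORK(&w->work, swap_out_workfn);
	w->folio = folio;
	w->wbc = *wbc;
	/* queue on the folio's node so compression stays NUMA-local */
	queue_work_node(folio_nid(folio), swap_compress_wq, &w->work);
}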
> Furthermore, it would open the chance for zram to try to compress
> multiple pages at once.

We are already in this situation when multiple callers use zram
simultaneously, such as during direct reclaim or with a mounted
filesystem. Of course, this allows multiple pages to be compressed
simultaneously, even if the user is single-threaded.

However, determining when to enable these threads and whether they
will be effective is challenging, as it depends on system load. For
example, Qun-Wei's patch chose not to use threads for direct reclaim
because, I guess, it might be harmful.

Thanks
Barry
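P.S. For concreteness, the "queues in zram with multiple compression
threads" idea under discussion could be sketched as below; zram_wq and
struct zram_bio_work are invented names, and this is not from any
posted patch:

/*
 * Sketch only. An unbound workqueue with max_active > 1 lets several
 * bios be compressed in parallel inside the driver.
 */
struct zram_bio_work {
	struct work_struct work;
	struct zram *zram;
	struct bio *bio;
};

static struct workqueue_struct *zram_wq;

static void zram_bio_workfn(struct work_struct *work)
{
	struct zram_bio_work *zw =
		container_of(work, struct zram_bio_work, work);

	/* compress and store the bio's payload, then complete the bio */
	zram_bio_write(zw->zram, zw->bio);	/* zram's existing write path */
	kfree(zw);
}

static int zram_wq_init(void)
{
	zram_wq = alloc_workqueue("zram_compress", WQ_UNBOUND,
				  num_online_cpus());
	return zram_wq ? 0 : -ENOMEM;
}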