On Wed, Feb 05, 2025 at 11:20:46PM -0800, Kanchana P Sridhar wrote:
> IAA Compression Batching:
> =========================
>
> This patch-series introduces the use of the Intel Analytics Accelerator
> (IAA) for parallel batch compression of pages in large folios to improve
> zswap swapout latency.

So, zswap is passed a large folio to swap out, and it divides it into 4K
pages and compresses each independently.  The performance improvement in
this patchset comes entirely from compressing the folio's pages in
parallel, synchronously, using IAA.

Before even considering IAA and going through all the pain of supporting
batching with an off-CPU offload, wouldn't it make a lot more sense to try
just compressing each folio in software as a single unit?  Compared to the
existing approach of compressing the folio in 4K chunks, that should be
much faster and produce a much better compression ratio.  Compression
algorithms are very much designed for larger amounts of data, so that they
can find more matches.

It looks like the mm subsystem used to always break up folios when
swapping them out, but that has now been fixed.  It looks like zswap just
hasn't been updated to do otherwise yet?

FWIW, here are some speed and compression ratio results I collected with a
compression benchmark module that tests feeding vmlinux (uncompressed_size:
26624 KiB) through zstd in 4 KiB page or 2 MiB folio-sized chunks:

    zstd level 3, 4K chunks:  86 ms; compressed_size 9429 KiB
    zstd level 3, 2M chunks:  57 ms; compressed_size 8251 KiB
    zstd level 1, 4K chunks:  65 ms; compressed_size 9806 KiB
    zstd level 1, 2M chunks:  34 ms; compressed_size 8878 KiB

The current zswap parameterization is "zstd level 3, 4K chunks".  I would
recommend "zstd level 1, 2M chunks", which would be 2.5 times as fast and
give a 6% better compression ratio.

What is preventing zswap from compressing whole folios?

- Eric
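
To make the chunked comparison above concrete, here is a minimal userspace
sketch of that kind of benchmark.  It is not the kernel benchmark module
referenced in the message (which would go through the kernel crypto API);
it just uses libzstd directly to show the idea of compressing the same
input independently in 4 KiB vs 2 MiB chunks and comparing total time and
compressed size.  File name and build line are assumptions.

/*
 * Illustrative only: compress an input file (e.g. an uncompressed
 * vmlinux) in independent fixed-size chunks and report time and
 * total compressed size per configuration.
 *
 * Build (assumes libzstd headers are installed): gcc -O2 bench.c -lzstd
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zstd.h>

/* Compress 'src' in independent 'chunk_size' chunks at 'level'. */
static void bench_chunked(const void *src, size_t src_size,
			  size_t chunk_size, int level)
{
	size_t dst_cap = ZSTD_compressBound(chunk_size);
	void *dst = malloc(dst_cap);
	size_t total_out = 0;
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (size_t off = 0; off < src_size; off += chunk_size) {
		size_t in_len = src_size - off < chunk_size ?
				src_size - off : chunk_size;
		size_t out_len = ZSTD_compress(dst, dst_cap,
					       (const char *)src + off,
					       in_len, level);
		if (ZSTD_isError(out_len)) {
			fprintf(stderr, "zstd error: %s\n",
				ZSTD_getErrorName(out_len));
			exit(1);
		}
		total_out += out_len;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	long ms = (t1.tv_sec - t0.tv_sec) * 1000 +
		  (t1.tv_nsec - t0.tv_nsec) / 1000000;
	printf("zstd level %d, %zu KiB chunks: %ld ms; compressed_size %zu KiB\n",
	       level, chunk_size >> 10, ms, total_out >> 10);
	free(dst);
}

int main(int argc, char **argv)
{
	/* Read the whole input file into memory. */
	FILE *f = fopen(argc > 1 ? argv[1] : "vmlinux", "rb");
	if (!f) { perror("fopen"); return 1; }
	fseek(f, 0, SEEK_END);
	long size = ftell(f);
	rewind(f);
	void *buf = malloc(size);
	if (fread(buf, 1, size, f) != (size_t)size) { perror("fread"); return 1; }
	fclose(f);

	bench_chunked(buf, size, 4096, 3);	/* 4K chunks, level 3 */
	bench_chunked(buf, size, 2 << 20, 3);	/* 2M chunks, level 3 */
	bench_chunked(buf, size, 4096, 1);	/* 4K chunks, level 1 */
	bench_chunked(buf, size, 2 << 20, 1);	/* 2M chunks, level 1 */

	free(buf);
	return 0;
}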