On Tue, Dec 3, 2024 at 5:42 PM Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx> wrote:
>
> On Tue, Dec 03, 2024 at 01:44:00PM -0800, Yosry Ahmed wrote:
> >
> > Does this mean that instead of zswap breaking down the folio into
> > SWAP_CRYPTO_BATCH_SIZE-sized batches, we pass all the pages to the
> > crypto layer and let it do the batching as it pleases?
>
> You provide as much (or little) as you're comfortable with. Just
> treat the acomp API as one that can take as much as you want to
> give it.

In this case, it seems like the batch size is completely up to zswap,
and not necessarily dependent on the compressor. That being said, Intel
IAA will naturally prefer a batch size that maximizes parallelization.

How about this: we can define a fixed max batch size in zswap to provide
a hard limit on the number of buffers we preallocate (e.g.
MAX_BATCH_SIZE). The compressors can provide zswap a hint with their
desired batch size (e.g. 8 for Intel IAA). Then zswap can allocate
min(MAX_BATCH_SIZE, compressor_batch_size) buffers.

Assuming software compressors provide 1 for the batch size, if
MAX_BATCH_SIZE is >= 8, Intel IAA gets the batching rate it wants, and
software compressors get the same behavior as today. This abstracts the
batch size needed by the compressor while making sure zswap does not
preallocate a ridiculous amount of memory.

Does this make sense to everyone, or am I missing something?