On Tue, Jan 7, 2025 at 5:39 PM Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx> wrote:
>
> On Mon, Jan 06, 2025 at 07:10:53PM -0800, Yosry Ahmed wrote:
> >
> > The main problem is memory usage. Zswap needs a PAGE_SIZE*2-sized
> > buffer for each request on each CPU. We preallocate these buffers to
> > avoid trying to allocate this much memory in the reclaim path (i.e.
> > potentially allocating two pages to reclaim one).
>
> What if we allowed each acomp request to take a whole folio?
> That would mean you'd only need to allocate one request per
> folio, regardless of how big it is.

Hmm, this means we need to allocate a single request instead of N
requests, but the source of the overhead is the output buffers, not the
requests. We need PAGE_SIZE*2 in the output buffer for each page in the
folio, on each CPU. Preallocating this unnecessarily adds up to a lot of
memory. Did I miss something?

> Eric, we could do something similar with ahash. Allow the
> user to supply a folio (or scatterlist entry) instead of a
> single page, and then cut it up based on a unit size supplied
> by the user (e.g., 512 bytes for sector-based users). That
> would mean just a single request object as long as your input
> is a folio or something similar.
>
> Is this something that you could use in fs/verity? You'd still
> need to allocate enough memory to store the output hashes.
>
> Cheers,
> --
> Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
>