On Thu, Sep 15, 2022 at 02:09:11PM +0100, Giovanni Cabiddu wrote:
> > > Here's a suggestion. Start with whatever value you want (e.g.,
> > > src * 2), attempt the decompression, and if it fails because the
> > > space is too small, then double it and retry the operation.
>
> I prototyped the solution you proposed and it introduces complexity,
> still doesn't fully solve the problem and it is not performant. See
> below*.

I don't understand how it can be worse than your existing patch.
I'm suggesting that you start with your current estimate, and only
fall back to allocating a bigger buffer in case that overflows.

So it should be exactly the same as your current patch, as the
fallback path would only activate in cases where your patch would
have failed anyway.

> We propose instead to match the destination buffer size used in scomp
> for the NULL pointer use case, i.e. 128KB:
> https://elixir.bootlin.com/linux/v6.0-rc5/source/include/crypto/internal/scompress.h#L13
> Since there are no users of acomp with this use case in the kernel, we
> believe this will be sufficient.

Once we start imposing arbitrary limits in the driver, then users
will forever be burdened with this. That is why I want to avoid
adding such limits.

The whole point of having this feature in the acomp API is to avoid
having users such as ipcomp preallocate huge buffers for the unlikely
case of an exceptionally large decompressed result.

Thanks,
-- 
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
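
[For illustration only: a minimal, userspace-style C sketch of the
"start with an estimate, double and retry on overflow" strategy
discussed above. The decompress_fn callback, its -ENOSPC return
convention, and the decompress_auto helper name are assumptions made
for this sketch; they are not the actual acomp or driver API.]

    #include <errno.h>
    #include <stdlib.h>

    /* Hypothetical decompressor: returns 0 on success, or -ENOSPC if
     * *dlen is too small to hold the decompressed output (assumption
     * for this sketch only). */
    typedef int (*decompress_fn)(const void *src, size_t slen,
                                 void *dst, size_t *dlen);

    /* Decompress with a growing destination buffer: start from an
     * initial estimate (e.g. 2 * slen) and double it until the result
     * fits. The caller owns and must free the returned buffer. */
    static void *decompress_auto(decompress_fn fn, const void *src,
                                 size_t slen, size_t *out_len)
    {
            size_t cap = slen * 2;          /* initial estimate */
            void *dst = NULL;

            for (;;) {
                    size_t dlen = cap;
                    void *tmp = realloc(dst, cap);
                    int ret;

                    if (!tmp) {
                            free(dst);
                            return NULL;
                    }
                    dst = tmp;

                    ret = fn(src, slen, dst, &dlen);
                    if (ret == 0) {
                            /* fast path: first attempt usually fits */
                            *out_len = dlen;
                            return dst;
                    }
                    if (ret != -ENOSPC) {
                            /* hard failure, not a size problem */
                            free(dst);
                            return NULL;
                    }
                    cap *= 2;               /* fallback: grow and retry */
            }
    }

Note that the fallback (doubling) path only runs when the initial
estimate would have overflowed, i.e. exactly the cases where a
fixed-size buffer would have failed anyway.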