Re: [PATCH 1/9] crypto: add zbufsize() interface

On Wed, Dec 01, 2021 at 03:39:06PM -0800, Kees Cook wrote:
>
> Okay, I'm looking at this again because of the need in the module loader
> to know "worst case decompression size"[1]. I am at a loss for how (or
> why) the acomp interface is the "primary interface".

This is similar to how we transitioned from the old hash interface
to shash/ahash.

Basically the legacy interface is synchronous only and cannot support
hardware devices without having the CPU spinning while waiting for the
result to come back.
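
For reference, legacy usage looks roughly like this (untested sketch,
error handling elided; "deflate" and the buffer names are just
placeholders):

	/* needs <linux/crypto.h> */
	struct crypto_comp *tfm;
	unsigned int dlen = dst_size;	/* in: buffer size, out: bytes produced */
	int err;

	tfm = crypto_alloc_comp("deflate", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Synchronous by construction: the CPU does all the work here. */
	err = crypto_comp_decompress(tfm, src, src_len, dst, &dlen);

	crypto_free_comp(tfm);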

If you only care about synchronous operation and don't need to access
these hardware devices, then you should use the new scomp interface.
It is equivalent to the old compress interface, but built in a way
that lets acomp users easily access sync algorithms.  If you are
processing large amounts of data and wish to use offload devices,
then you should consider the async acomp interface instead.
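
For a simple buffer-to-buffer decompression, synchronous use of acomp
looks roughly like this (untested sketch; "deflate" and the buffer
names are again placeholders):

	/* needs <crypto/acompress.h> and <linux/scatterlist.h> */
	struct crypto_acomp *tfm;
	struct acomp_req *req;
	struct scatterlist src, dst;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_acomp("deflate", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm);
	if (!req) {
		crypto_free_acomp(tfm);
		return -ENOMEM;
	}

	sg_init_one(&src, cbuf, clen);
	sg_init_one(&dst, dbuf, dlen_max);
	acomp_request_set_params(req, &src, &dst, clen, dlen_max);

	/* crypto_req_done + crypto_wait_req make the async API synchronous */
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);

	err = crypto_wait_req(crypto_acomp_decompress(req), &wait);
	if (!err)
		dlen = req->dlen;	/* actual decompressed size */

	acomp_request_free(req);
	crypto_free_acomp(tfm);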

> For modules, all that would be wanted is this, where the buffer size can
> be allocated on demand:
> 
> u8 *decompressed = NULL;
> size_t decompressed_size = 0;
> 
> decompressed = decompress(decompressed, compressed, compressed_size, &decompressed_size);
> 
> For pstore, the compressed_size is fixed and the decompression buffer
> must be preallocated (for catching panic dumps), so the worst-case size
> needs to be known in advance:
> 
> u8 *decompressed = NULL;
> size_t decompressed_worst_size = 0;
> size_t decompressed_size = 0;
> 
> worst_case(&decompressed_worst_size, compressed_size);
> 
> decompressed = kmalloc(decompressed_worst_size, GFP_KERNEL);
> ...
> decompressed_size = decompressed_worst_size;
> decompress(decompressed, compressed, compressed_size, &decompressed_size);
> 
> 
> I don't see anything like this in the kernel for handling a simple
> buffer-to-buffer decompression besides crypto_comp_decompress(). The
> acomp interface is wildly over-complex for this. What's the right
> way to do this? (I can't find any documentation that discusses
> compress/decompress[2].)

I think you're asking about a different issue, which is that we
don't have an interface for on-demand allocation of decompressed
results, so every user has to allocate the worst-case buffer up
front.

This is definitely an area that should be addressed, but a lot of
work needs to be done to get there.  Essentially we'd need to convert
the underlying algorithms to a model where they decompress into a
list of pages, so that they could simply allocate a new page whenever
they need extra space.

The result can then be returned to the original caller as an SG list.
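
From the caller's side that might end up looking something like this
(purely hypothetical, nothing like this exists today):

	/*
	 * Hypothetical interface: the algorithm allocates output pages
	 * as needed and hands back an SG list plus the total length.
	 */
	struct scatterlist *out;
	unsigned int outlen;

	out = crypto_acomp_decompress_sg(req, GFP_KERNEL, &outlen);
	if (IS_ERR(out))
		return PTR_ERR(out);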

Would you be willing to work on something like this? This would benefit
all existing users too.  For example, IPsec would no longer need to
allocate those 64K buffers for IPcomp.

Unfortunately, not many people care deeply about compression, so
volunteers are hard to find.

Cheers,
-- 
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


