Re: [PATCH] lib/lzo: Avoid output overruns when compressing

On Thu, Feb 27, 2025 at 09:46:10AM +0800, Herbert Xu wrote:
> On Wed, Feb 26, 2025 at 02:00:37PM +0100, David Sterba wrote:
> >
> > Does it have to check for the overruns? The worst case compression
> > result size is known and can be calculated by the formula. Using big
> 
> If the caller is using different algorithms, then yes the checks
> are essential.  Otherwise the caller would have to allocate enough
> memory not just for LZO, but for the worst-case compression length
> for *any* algorithm.  Adding a single algorithm would have the
> potential of breaking all users.
>  
> > What strikes me as alarming is that you insert about 20 branches into a
> > realtime compression algorithm, where everything is basically a hot
> > path.  Branches that almost never happen, and never if the output buffer
> > is big enough.
> 
> OK, if that is a real concern then I will add a _safe version of
> LZO compression alongside the existing code.

Makes sense, thanks. The in-kernel users are OK, but the crypto API also
exports the compression, so there's no guarantee it's used correctly. As
it needs changes to the LZO code itself, I don't see a better way than to
have two versions, conveniently done by the macros as you did.
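
For reference, the worst-case output size referred to above is the
lzo1x_worst_compress() bound from include/linux/lzo.h. A sketch of it
(the exact constants should be checked against the current header):

#define lzo1x_worst_compress(x)	((x) + ((x) / 16) + 64 + 3)

/* Example: sizing an output buffer for compressing one 4 KiB page. */
static size_t lzo_out_buf_size(void)
{
	return lzo1x_worst_compress(PAGE_SIZE);	/* 4096 + 256 + 67 = 4419 */
}

A caller that sizes its destination buffer this way can never overrun,
which is why the extra checks only matter for callers that size the
buffer for a different (or unknown) algorithm.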

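And a minimal sketch of the macro approach described above, i.e. compiling
one function body into an unchecked fast variant and a bounds-checked
_safe variant. All names and constants here are illustrative only, not
taken from the actual patch:

#include <stddef.h>
#include <string.h>

#define DEMO_E_OUTPUT_OVERRUN	(-5)

#ifdef LZO_SAFE
#define FUNC_NAME	demo_compress_safe
#define NEED_OP(n)	do { if ((size_t)(op_end - op) < (size_t)(n)) \
				return DEMO_E_OUTPUT_OVERRUN; } while (0)
#else
#define FUNC_NAME	demo_compress
#define NEED_OP(n)	do { (void)op_end; } while (0)	/* no branch in the fast build */
#endif

/* Stand-in for the real compressor body: it only copies the input, but
 * shows where NEED_OP() guards each output write in the _safe build. */
int FUNC_NAME(const unsigned char *in, size_t in_len,
	      unsigned char *out, size_t out_len, size_t *out_used)
{
	unsigned char *op = out;
	unsigned char *op_end = out + out_len;

	NEED_OP(in_len);
	memcpy(op, in, in_len);
	op += in_len;

	*out_used = op - out;
	return 0;
}

Compiling the file twice, once with -DLZO_SAFE, yields both entry points
while keeping the fast variant free of the extra branches.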


