Re: [PATCH] lib/lzo: Avoid output overruns when compressing

On Fri, Feb 28, 2025 at 10:55:35PM +0900, Sergey Senozhatsky wrote:
> On (25/02/28 13:43), Ard Biesheuvel wrote:
> > On Fri, 28 Feb 2025 at 06:24, Sergey Senozhatsky
> > <senozhatsky@xxxxxxxxxxxx> wrote:
> > >
> > > On (25/02/26 14:00), David Sterba wrote:
> > > > What strikes me as alarming is that you insert about 20 branches into a
> > > > realtime compression algorithm, where everything is basically a hot
> > > > path.  Branches that almost never happen, and never if the output buffer
> > > > is big enough.
> > > >
> > > > Please drop the patch.
> > >
> > > David, just for educational purposes: there is only a safe variant of LZO
> > > decompression, and it does a lot of NEED_OP (HAVE_OP) checks, adding
> > > branches and so on - basically what Herbert is adding to the compression
> > > path.  So my question is: why is NEED_OP (if (!HAVE_OP(x)) goto output_overrun)
> > > a no-go for compression, but apparently fine for decompression?
> > >
> > 
> > Because compression has a bounded worst case (compressing data with
> > LZO can actually increase the size but only by a limited amount),
> > whereas decompressing a small input could produce gigabytes of output.
> 
> One can argue that sometimes we know the upper bound.  E.g. zswap
> and zram always compress physical pages, so decompression cannot
> produce more data than the original physical page.

So for ZRAM it would make sense to have "unsafe" decompression, as the
data never leave kernel space and cannot be tampered with from the
outside, unlike what a filesystem deals with. This could gain some speedup.
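
For reference, a rough userspace sketch of the two things discussed above:
an output-space check in the style of HAVE_OP()/NEED_OP() from
lib/lzo/lzo1x_decompress_safe.c, and the bounded worst-case expansion that
lzo1x_worst_compress() in include/linux/lzo.h expresses for the compressor.
This is only an illustration, not the kernel code; copy_checked() is a
made-up helper:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Worst case: LZO1X output may exceed the input, but only by this much
 * (same formula as lzo1x_worst_compress() in include/linux/lzo.h). */
#define lzo1x_worst_compress(x)    ((x) + ((x) / 16) + 64 + 3)

/* Output-space check in the style of HAVE_OP()/NEED_OP() in
 * lib/lzo/lzo1x_decompress_safe.c. */
#define HAVE_OP(op, op_end, x)     ((size_t)((op_end) - (op)) >= (size_t)(x))

/* Hypothetical helper: copy 'len' bytes only if they fit in the output. */
static int copy_checked(unsigned char **op, unsigned char *op_end,
                        const unsigned char *src, size_t len)
{
        if (!HAVE_OP(*op, op_end, len))
                return -1;              /* would be "goto output_overrun" */
        memcpy(*op, src, len);
        *op += len;
        return 0;
}

int main(void)
{
        unsigned char out[16], *op = out;
        const unsigned char lit[] = "literal";

        /* A 4 KiB page can grow to at most this many bytes when compressed. */
        printf("worst case for 4096 bytes: %d\n", lzo1x_worst_compress(4096));

        /* First copy fits, second would overrun the 16-byte buffer. */
        printf("copy 1: %d\n", copy_checked(&op, out + sizeof(out), lit, 7));
        printf("copy 2: %d\n", copy_checked(&op, out + sizeof(out), lit, 12));
        return 0;
}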



