Re: [PATCH v5 02/12] crypto: acomp - Define new interfaces for compress/decompress batching.

On (25/02/22 19:26), Barry Song wrote:
> After reviewing the zRAM code, I don't see why zram_write_page() needs
> to rely on comp_len to call write_incompressible_page().

[..]

> zram_write_page()
> {
>         ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
>                              mem, &comp_len);
>         kunmap_local(mem);
> 
>         if (unlikely(ret && ret != -ENOSPC)) {
>                 zcomp_stream_put(zstrm);
>                 pr_err("Compression failed! err=%d\n", ret);
>                 return ret;
>         }
> 
>         if (comp_len >= huge_class_size || ret) {
>                 zcomp_stream_put(zstrm);
>                 return write_incompressible_page(zram, page, index);
>         }
> }

Sorry, I'm slower than usual now, but why should we?  Shouldn't compression
algorithms just never fail, even on 3D videos, because otherwise they won't
be able to validate their Weissman score or something :)

On a serious note - what is the use-case here?  Is the failure here due to
some random "cosmic rays" that taint the compression H/W?  If so, then what
makes us believe that it's uni-directional?  What if it's decompression
that gets busted, so that you can no longer decompress anything previously
compressed and stored in zsmalloc?  Wouldn't it be better in that case
to turn the computer off and on again?

The idea behind zram's code is that incompressible pages are not unusual;
they are quite usual, in fact.  It's not necessarily that the data grew
in size after compression; the data is incompressible from zsmalloc's PoV.
That is, the algorithm wasn't able to compress a PAGE_SIZE buffer into an
object smaller than zsmalloc's huge-class watermark (around 3600 bytes,
depending on the zspage chain size).  That's why we look at comp_len.
Anything else is an error, perhaps a pretty catastrophic error.
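
To make that concrete, the write path currently looks roughly like this
(a simplified sketch of zram_drv.c from memory, with stats, locking and
the actual zsmalloc store trimmed out - not the exact code):

static int zram_write_page(struct zram *zram, struct page *page, u32 index)
{
        struct zcomp_strm *zstrm;
        unsigned int comp_len;
        void *mem;
        int ret;

        zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
        mem = kmap_local_page(page);
        ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
                             mem, &comp_len);
        kunmap_local(mem);

        /* Any non-zero return is treated as a hard error, full stop */
        if (unlikely(ret)) {
                zcomp_stream_put(zstrm);
                pr_err("Compression failed! err=%d\n", ret);
                return ret;
        }

        /*
         * "Incompressible" means zsmalloc can't store the object in
         * anything smaller than a huge class; huge_class_size is set
         * at init time via zs_huge_class_size().  It does not mean
         * that compression failed.
         */
        if (comp_len >= huge_class_size) {
                zcomp_stream_put(zstrm);
                return write_incompressible_page(zram, page, index);
        }

        /* ... allocate from zsmalloc and store the compressed object ... */
        return 0;
}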

> As long as crypto drivers consistently return -ENOSPC or a specific error
> code for dst_buf overflow, we should be able to eliminate the
> 2*PAGE_SIZE buffer.
> 
> My point is:
> 1. All drivers must be capable of handling dst_buf overflow.
> 2. All drivers must return a consistent and dedicated error code for
> dst_buf overflow.

Sorry, where do these rules come from?
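
(Just so we're on the same page: as I read rule 2, it would require every
driver's compress callback to detect the destination overflow itself and
map it to one dedicated error code.  A purely hypothetical sketch of what
that would mean - the names below are made up, this is not any in-tree
driver:)

static int example_scomp_compress(struct crypto_scomp *tfm,
                                  const u8 *src, unsigned int slen,
                                  u8 *dst, unsigned int *dlen, void *ctx)
{
        size_t out_len = *dlen;
        int err;

        /* example_backend_compress() stands in for the real algorithm */
        err = example_backend_compress(src, slen, dst, &out_len, ctx);
        if (err == EXAMPLE_ERR_OUTPUT_OVERRUN)
                return -ENOSPC; /* dst too small: the one dedicated code */
        if (err)
                return -EINVAL; /* anything else stays a hard error */

        *dlen = out_len;
        return 0;
}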

> +Minchan, Sergey,
> Do you think we can implement this change in zRAM by using PAGE_SIZE instead
> of 2 * PAGE_SIZE?

Sorry again, what problem are you solving?



