On (04/02/18 11:21), Maninder Singh wrote:
[..]
> >> static const char * const backends[] = {
> >> 	"lzo",
> >> #if IS_ENABLED(CONFIG_CRYPTO_LZ4)
> >> 	"lz4",
> >> +#if (PAGE_SIZE < (32 * KB))
> >> +	"lz4_dyn",
> >> +#endif
> >
> > This is not the list of supported algorithms. It's the list of
> > recommended algorithms. You can configure zram to use any of the
> > available algorithms known to the Crypto API. Including lz4_dyn
> > on PAGE_SIZE > 32K systems.
>
> Yes, we want to integrate the new compression (lz4_dyn) for zram
> only if PAGE_SIZE is less than 32KB, to get the maximum benefit,
> so we added lz4_dyn to the list of available zram compression
> algorithms.

Which is not what I was talking about. You shrink a 2-byte offset
down to a 1-byte offset, thus you enforce that 'page should be less
than 32KB', which I'm sure will be confusing. And you rely on
lz4_dyn users to do the right thing - namely, to use that 'nice'
`#if (PAGE_SIZE < (32 * KB))'.

Apart from that, lz4_dyn supports only data in up-to-page_size
chunks. Suppose my system has a page_size of less than 32K, so I
can legitimately enable lz4_dyn, but suppose that I will use it
somewhere where I don't work with page_size-d chunks. Will I be
able to just do tfm->compress(src, sz) on random buffers?

The whole thing looks to be quite fragile.

	-ss