Hi Benjamin,

On Tue, Jan 30, 2018 at 04:08:57PM +0100, Benjamin Warnke wrote:
> Currently ZRAM uses the compression-algorithms from the crypto-api.
> None of the current compression-algorithms in the crypto-api is designed
> to compress 4KiB chunks of data efficiently.
> This patch adds a new compression-algorithm especially designed for ZRAM,
> to compress small pieces of data more efficiently.

This is some interesting work, and I like the idea of doing transforms
specialized for in-memory data. However, where can I find more information
about this new compression algorithm? What does "zbewalgo" even stand for /
mean? Googling it turns up nothing.

You are going to have to be much more specific about what you mean by
"efficiently". Efficiency usually implies speed, yet even by your own numbers
LZ4 is much faster than "zbewalgo", both for compression and decompression.
If the goal is instead to provide an algorithm tuned more for compression
ratio than for speed in comparison to LZ4, then the omission of Zstandard
from your benchmarks is strange, especially given that Zstandard is available
in the kernel now.

The proposed "zbewalgo" decompressor also doesn't handle invalid inputs,
which means it cannot be used on untrusted data. This isn't acceptable
without justification, since people may use it on untrusted data and thereby
create security vulnerabilities. It also makes the comparison with LZ4
unfair, since the LZ4 decompressor does handle invalid inputs, at least in
the mode exposed through the crypto API.

Eric
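P.S. To illustrate what "handling invalid inputs" means in practice: a safe
decompressor must validate every read from the compressed buffer and every
write to the output buffer before performing it, and fail cleanly on
malformed data rather than overrunning either buffer. Below is a minimal
sketch of that pattern using a made-up run-length format (this is NOT the
zbewalgo or LZ4 format, just an illustration of the bounds-checking
discipline that LZ4's safe decode mode follows):

```c
#include <stddef.h>

/*
 * Toy decoder for a hypothetical run-length format consisting of
 * (count, byte) pairs. The format is invented for illustration only.
 *
 * Every token read is checked against src_len, and every run written
 * is checked against the remaining dst space, so malformed or
 * malicious input can never cause an out-of-bounds access.
 *
 * Returns the number of decompressed bytes, or -1 on invalid input.
 */
static int rle_decompress_safe(const unsigned char *src, size_t src_len,
			       unsigned char *dst, size_t dst_len)
{
	size_t si = 0, di = 0;

	while (si < src_len) {
		/* Each token is two bytes; reject a truncated final token. */
		if (src_len - si < 2)
			return -1;

		unsigned int count = src[si];
		unsigned char value = src[si + 1];
		si += 2;

		/* Reject output overrun instead of writing past dst. */
		if (count > dst_len - di)
			return -1;

		for (unsigned int i = 0; i < count; i++)
			dst[di++] = value;
	}

	return (int)di;
}
```

The point is that the error paths (the two `return -1`s) are what make the
function usable on untrusted data; a decompressor without them has to trust
that the input was produced by a matching compressor.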