
Re: Force re-compression with lz4


 



On 10/18/21 06:41, Mladen Gogala wrote:

On 10/18/21 01:07, Michael Paquier wrote:
CPU-speaking, LZ4 is *much* faster than pglz when it comes to
compression or decompression with its default options.  The
compression ratio is comparable between both, still LZ4 compresses on
average less than PGLZ.
--
Michael

LZ4 works much better with deduplication tools like Data Domain or Data Domain Boost (client-side deduplication). With zip or gzip compression, deduplication ratios are much lower than with LZ4. Most modern backup tools (DD, Veeam, Rubrik, Commvault) support deduplication. The LZ4 algorithm uses less CPU than zip, gzip, or bzip2 and works much better with the deduplication algorithms employed by those backup tools. This is a very big and positive change.

Not sure how much this applies to the Postgres usage of LZ4. As I understand it, it is only used internally for table compression. When using pg_dump, compression is done with gzip, unless you pipe the plain-text output through some other program.
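The pipe-through approach mentioned above can be sketched as follows. This is a minimal example, not a recommendation: "mydb" and the file name are placeholder names, and it assumes the lz4 command-line tool is installed.

```shell
# Dump in plain-text format and compress with lz4 instead of gzip.
# "mydb" is a placeholder database name.
pg_dump --format=plain mydb | lz4 > mydb.sql.lz4

# Restore by decompressing and feeding the SQL back into psql.
lz4 -dc mydb.sql.lz4 | psql mydb
```

Because pg_dump's plain format is just SQL text on stdout, any stream compressor (lz4, zstd, xz) can be substituted in the pipe without pg_dump needing to know about it.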


Disclosure:

I used to work for Commvault as a senior professional services engineer. Commvault was the first tool on the market to combine LZ4 and deduplication.

Regards




--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx





