Hey ceph-users,
we've been using the default "snappy" algorithm to have Ceph compress data
on certain pools - namely backups / copies of volumes from a VM environment.
So the workload is write once, with no random access.
I am now wondering whether switching to another algorithm (the options are
snappy, zlib, lz4, or zstd) would improve the compression ratio significantly.
* Does anybody have any real-world data on snappy vs. $anyother?
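For reference, switching the algorithm for an existing pool is just a pool
option (a sketch based on the BlueStore docs; "backups" is a placeholder pool
name, and as far as I know only newly written data picks up the change -
existing objects keep their old compression until rewritten):

```shell
# Inspect the current per-pool compression settings
# ("backups" is a placeholder - substitute your pool name).
ceph osd pool get backups compression_algorithm
ceph osd pool get backups compression_mode

# Switch new writes to zstd; compression_mode "aggressive" asks
# BlueStore to compress unless the client hints against it.
ceph osd pool set backups compression_algorithm zstd
ceph osd pool set backups compression_mode aggressive
```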
Using zstd is tempting as it's used in various other applications
(btrfs, MongoDB, ...) for inline-compression with great success.
For Ceph, though, the docs still carry a warning ([1]) that zstd is not
recommended. I am wondering whether that still stands now that e.g. [2]
has been merged.
And there was [3], an attempt to improve the performance, but it reads as
if it only led to a dead end with no code changes?
In any case, does anybody have any numbers to help with the decision on
the compression algorithm?
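Lacking published numbers, here is the rough local sanity check I would run
on a chunk of our own backup images (a sketch only: zlib ships with Python,
while the snappy / lz4 / zstandard bindings are optional third-party
packages; also note BlueStore compresses per blob, so whole-buffer ratios
are just an approximation of what the pool would achieve):

```python
# Estimate per-algorithm compression ratios on a sample buffer.
# Lower ratio = better compression. Only zlib is guaranteed to be
# available; the other bindings are used if installed.
import zlib


def ratio(compress, data):
    """Return compressed size / original size (lower is better)."""
    return len(compress(data)) / len(data)


def compare(data):
    results = {"zlib": ratio(zlib.compress, data)}
    try:
        import snappy  # third-party: python-snappy
        results["snappy"] = ratio(snappy.compress, data)
    except ImportError:
        pass
    try:
        import lz4.frame  # third-party: lz4
        results["lz4"] = ratio(lz4.frame.compress, data)
    except ImportError:
        pass
    try:
        import zstandard  # third-party: zstandard
        results["zstd"] = ratio(zstandard.ZstdCompressor().compress, data)
    except ImportError:
        pass
    return results


if __name__ == "__main__":
    # Placeholder input - replace with a real slice of a backup image,
    # e.g. the first few hundred MB of one of the volume copies.
    sample = b"example backup data " * 4096
    for name, r in sorted(compare(sample).items(), key=lambda kv: kv[1]):
        print(f"{name}: {r:.3f}")
```

Running that over a representative slice of the actual volume copies would
at least show whether zstd's better ratio is worth chasing for this data.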
Regards
Christian
[1]
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#confval-bluestore_compression_algorithm
[2] https://github.com/ceph/ceph/pull/33790
[3] https://github.com/facebook/zstd/issues/910
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx