Hi,
ceph.conf is no longer used the way it was before cephadm. Just add
the options to the central config store (see my previous example) and
they will be applied to all OSDs.
Regards
Eugen
Quoting Alam Mohammad <samdto987@xxxxxxxxx>:
Hi Eugen,
We are planning to build a cluster with an erasure-coded (EC) pool
to save some disk space. As a first step we have experimented with
compression on an RBD pool and set the following parameters on the
pool:
Compression mode: Aggressive
Compression type: lz4
Compression ratio: 0.85
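For reference, we applied these with commands along these lines (the
pool name below is a placeholder for our actual RBD pool):

ceph osd pool set <pool> compression_mode aggressive
ceph osd pool set <pool> compression_algorithm lz4
ceph osd pool set <pool> compression_required_ratio 0.85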
Additionally, I have configured the following global BlueStore
compression settings in the Ceph configuration file:
bluestore_compression_mode aggressive
bluestore_compression_algorithm lz4
bluestore_compression_required_ratio 0.85
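In ceph.conf that looks roughly like this (we placed it in the global
section):

[global]
bluestore_compression_mode = aggressive
bluestore_compression_algorithm = lz4
bluestore_compression_required_ratio = 0.85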
However, when we run

ceph tell osd.0 perf dump | grep -E '(compress_.*_count|bluestore_compressed_)'

we only get the following counters:
"compress_success_count": 4,
"compress_rejected_count": 0,
but we are not seeing these counters:
"bluestore_compressed_allocated": 12288,
"bluestore_compressed_original": 24576,
Is there a specific aspect of the configuration that I might be
overlooking? If so, could you please provide guidance on how to
properly configure compression settings to effectively save disk
space in the Ceph cluster?
Any guidance or insight would be greatly appreciated.
Regards,
Mohammad Saif
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx