Re: Request for Info: bluestore_compression_mode?

On 8/8/22 20:30, Mark Nelson wrote:
> Hi Folks,
>
> We are trying to get a sense of how many people are using bluestore_compression_mode or the per-pool compression_mode options (these were introduced early in bluestore's life, but AFAIK may not be widely used). We might be able to reduce complexity in bluestore's blob code if we could handle compression in some other fashion, so we are trying to get a sense of whether or not it's something worth looking into further.

We also use per-pool compression (snappy) and keep the CephFS metadata pool uncompressed. The data and metadata pools share the same OSDs in our cluster.
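For reference, a rough sketch of the pool-level setup (the pool names below are just the CephFS defaults, used as placeholders):

    # enable snappy compression on the data pool
    ceph osd pool set cephfs_data compression_algorithm snappy
    ceph osd pool set cephfs_data compression_mode aggressive

    # the metadata pool stays uncompressed; with no per-pool setting it
    # falls back to the OSD-wide bluestore_compression_mode (none here),
    # or it can be pinned explicitly:
    ceph osd pool set cephfs_metadata compression_mode none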

A datapoint on compression settings: in Octopus (and newer, presumably) data is compressed when compression is enabled on the pool and the mode is set to "aggressive" (it might be the same for "force", but we haven't tried). We have no bluestore compression settings defined for the OSDs (neither in config files nor in the Ceph config database), so the default applies: "bluestore_compression_mode": "none". We use a mix of SSD and NVMe drives. NVMe seems to use the SSD value, bluestore_compression_min_blob_size_ssd. With "bluestore_min_alloc_size_ssd" set to 4096 we obtain a compression ratio of 2.
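In case anyone wants to check the same values on their own cluster, a sketch of the relevant queries (assuming the defaults are otherwise untouched):

    # OSD-wide default: compression off unless a pool requests it
    ceph config get osd bluestore_compression_mode                # -> none
    # blob-size threshold that our NVMe OSDs appear to inherit
    ceph config get osd bluestore_compression_min_blob_size_ssd
    # note: min_alloc_size is baked in at OSD mkfs time, so changing it
    # only takes effect for newly created OSDs
    ceph config get osd bluestore_min_alloc_size_ssd              # 4096 here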

Would it make sense to have a "bluestore_compression_min_blob_size_nvme", as the performance of such drives is often much higher than that of SATA/SAS SSDs (at the very least they have more bandwidth available)?

Gr. Stefan



