Re: Request for Info: bluestore_compression_mode?

Hi Frank,


Thank you very much for the reply!  If you don't mind me asking, what's the use case?  We're trying to determine if we might be able to do compression at a higher level than blob with the eventual goal of simplifying the underlying data structures.  I actually had no idea that you needed both the yaml option and the pool option configured (I figured the pool option just overrode the yaml).  That's definitely confusing!


Not sure what the right path is here or if we should even make any significant changes at this point, but we figured that the first step was to figure out if people are using it and how.


Mark


On 8/9/22 04:11, Frank Schilder wrote:
Hi Mark,

we are using per-pool aggressive compression mode on any EC data pool. We need it per pool because uncompressed replicated metadata pools share the same OSDs. Currently, one needs to enable two things for data compression: the bluestore option that enables compression on an OSD, and the pool option that enables compression for a pool. Only when both options are active simultaneously is data actually compressed, which has led to quite a bit of confusion in the past. I think per-pool compression should be sufficient and should imply compression without further tweaks on the OSD side. I don't know what the objective of per-OSD bluestore compression was. We just enabled bluestore compression globally, since the pool option selects the data for compression and is the logical way to select and enforce compression (per data type).
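
To make the current situation concrete, this is roughly the pair of settings we need today (the pool name "ec_data" is just an example); compression only kicks in once both are set:

    # OSD-side (bluestore) switch, here enabled for all OSDs:
    ceph config set osd bluestore_compression_mode aggressive
    # Pool-side switch on the EC data pool:
    ceph osd pool set ec_data compression_mode aggressive

In our experience, dropping the first command leaves the data uncompressed even though the pool requests it, which is exactly the surprise described above.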

Just an enable/disable setting for pools would be sufficient (enabled = aggressive, with bluestore_compression implicitly always treated as aggressive). On the bluestore side the usual compression_blob_size/algorithm options will probably remain necessary, although one might better set them via a mask as in "ceph config set osd/class:hdd compression_min_blob_size XYZ", or better yet, allow combinations of masks as in "ceph config set osd/class:hdd,store:blue compression_min_blob_size XYZ" to prepare the config interface for future data stores.
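
As far as I know, the closest thing available today only masks on the device class, roughly like this (the values are placeholders, not a recommendation):

    # Per-device-class tuning with the existing config mask syntax:
    ceph config set osd/class:hdd bluestore_compression_min_blob_size 131072
    ceph config set osd/class:hdd bluestore_compression_algorithm snappy

A combinable "store:" mask as sketched above would let the same option names survive a future non-bluestore data store.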

I don't think the compression mode "passive" makes much sense, as I have never heard of client software providing a meaningful hint. I think it's better treated as an administrator's choice after testing performance; then "enabled" should simply mean "always compress" and "disabled" "never compress".

I believe there is currently an interdependence with min_alloc_size on the OSD data store, which makes tuning a bit of a pain. It would be great if physical allocation parameters and logical allocation sizes could be decoupled somewhat. If they need to be coupled, then at least make it possible to read important creation-time settings at run time. At the moment it is necessary to restart an OSD and grep its log to find the min_alloc_size the OSD is actually using. Also, upgraded clusters are more likely to have OSDs with different min_alloc_sizes in the same pool, so it would be great if settings like this one had little or no influence on whether compression works as expected.
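
What we do today looks roughly like this (osd.0 and the log path are just examples; paths differ between deployments):

    # Shows only the configured value, not necessarily what the OSD was created with:
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
    # Finding the value actually in use means restarting the OSD and grepping its startup log:
    grep min_alloc_size /var/log/ceph/ceph-osd.0.log

Being able to query the creation-time value directly, without a restart, would already help a lot.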

Summary:

- pool enable/disable flag for always/never compress
- data store flags for compression performance tuning
- make OSD create- and tune parameters as orthogonal as possible
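
Put together, a purely hypothetical sketch of what this could look like (none of these commands exist today; the names are only meant to illustrate the proposal):

    # Hypothetical: the pool flag alone decides whether its data is compressed
    ceph osd pool set ec_data compression on
    # Hypothetical: performance tuning stays on the data-store side, selected via (combinable) masks
    ceph config set osd/class:hdd compression_min_blob_size 131072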

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Mark Nelson <mnelson@xxxxxxxxxx>
Sent: 08 August 2022 20:30:49
To: ceph-users@xxxxxxx
Subject:  Request for Info: bluestore_compression_mode?

Hi Folks,


We are trying to get a sense for how many people are using
bluestore_compression_mode or the per-pool compression_mode options
(these were introduced early in bluestore's life, but afaik may not be
widely used).  We might be able to reduce complexity in bluestore's
blob code if we could do compression in some other fashion, so we are
trying to get a sense of whether or not it's something worth looking
into more.
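
If you are not sure whether you have it enabled, something along these
lines should show it (the pool name is a placeholder):

    # Per-pool setting (may report that the option is not set):
    ceph osd pool get <pool> compression_mode
    # OSD-side bluestore setting:
    ceph config get osd bluestore_compression_mode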


Thanks,

Mark


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



