Hi Igor,

Thanks! I think the code needs to be corrected - the choice criteria for
which setting to use when cct->_conf->bluestore_cache_size == 0 should be
as follows:

1) See what kind of storage you have.
2) Select the type-appropriate setting.

Is this code publicly editable? I'll be happy to correct it.

Regards,

Boris.

On Tue, Feb 4, 2020 at 12:10 PM Igor Fedotov <ifedotov@xxxxxxx> wrote:

> Hi Boris,
>
> General settings (unless they are set to zero) override the
> disk-specific settings.
>
> I.e. bluestore_cache_size overrides both bluestore_cache_size_hdd and
> bluestore_cache_size_ssd.
>
> Here is the code snippet, in case you know C++:
>
>   if (cct->_conf->bluestore_cache_size) {
>     cache_size = cct->_conf->bluestore_cache_size;
>   } else {
>     // choose global cache size based on backend type
>     if (_use_rotational_settings()) {
>       cache_size = cct->_conf->bluestore_cache_size_hdd;
>     } else {
>       cache_size = cct->_conf->bluestore_cache_size_ssd;
>     }
>   }
>
> Thanks,
>
> Igor
>
> On 2/4/2020 2:14 PM, Boris Epstein wrote:
> > Hello list,
> >
> > As stated in this document:
> >
> > https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
> >
> > there are multiple parameters defining cache limits for BlueStore. You
> > have bluestore_cache_size (presumably controlling the cache size),
> > bluestore_cache_size_hdd (presumably doing the same for HDD storage
> > only), and bluestore_cache_size_ssd (presumably being the equivalent
> > for SSD). My question is: does bluestore_cache_size override the
> > disk-specific parameters, or do I need to set the disk-specific (or,
> > rather, storage-type-specific) ones separately if I want to keep them
> > at a certain value?
> >
> > Thanks in advance.
> >
> > Boris.
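
For illustration, here is a minimal, self-contained C++ sketch of the
selection logic Igor quotes above. This is assumed behavior, not the
actual Ceph source: the names Conf and pick_cache_size are hypothetical
stand-ins, and the 1 GiB HDD / 3 GiB SSD defaults are the documented
ones but may differ between releases.

  #include <cstdint>
  #include <iostream>

  // Hypothetical stand-in for cct->_conf; not the real Ceph config struct.
  struct Conf {
    uint64_t bluestore_cache_size;      // 0 means "use the per-device default"
    uint64_t bluestore_cache_size_hdd;
    uint64_t bluestore_cache_size_ssd;
  };

  // Mirrors the quoted snippet: a non-zero global value wins outright;
  // otherwise the backend type (rotational or not) picks the default.
  uint64_t pick_cache_size(const Conf& conf, bool rotational) {
    if (conf.bluestore_cache_size) {
      return conf.bluestore_cache_size;
    }
    return rotational ? conf.bluestore_cache_size_hdd
                      : conf.bluestore_cache_size_ssd;
  }

  int main() {
    // Global left unset (0); documented defaults: 1 GiB HDD, 3 GiB SSD.
    Conf conf{0, 1ull << 30, 3ull << 30};
    std::cout << pick_cache_size(conf, true)  << "\n";  // 1073741824 (HDD)
    std::cout << pick_cache_size(conf, false) << "\n";  // 3221225472 (SSD)
    conf.bluestore_cache_size = 2ull << 30;  // now the global value overrides
    std::cout << pick_cache_size(conf, true) << "\n";   // 2147483648
  }

Running it shows both sides of the answer to Boris's question: with the
global value left at zero the per-device settings take effect, and
setting it to any non-zero value makes the HDD/SSD settings irrelevant.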