Hi Brendan
in fact you can alter RocksDB settings via the
bluestore_rocksdb_options config parameter, and hence change
"max_bytes_for_level_base" and other options.
Not sure about dynamic level sizing, though.
Current defaults are:
"compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152"
Thanks,
Igor
On 11/13/2018 5:19 AM, Brendan Moloney wrote:
Hi,
I have been reading up on this a bit, and found one
particularly useful mailing list thread [1].
The fact that there is such a large jump when your DB fits
into 3 levels (30GB) vs 4 levels (300GB) makes it hard to
choose SSDs of an appropriate size. My workload is all RBD, so
objects should be large, but I am also looking at purchasing
rather large HDDs (12TB). It seems wasteful to spec out 300GB
per OSD, but I am worried that I will barely cross the 30GB
threshold when the disks get close to full.
It would be nice if we could either enable "dynamic level
sizing" (done here [2] for monitors, but not bluestore?), or
allow changing "max_bytes_for_level_base"
to something that better suits our use case. For example, if
it were set to 25% of the default (75MB L0 and L1, 750MB
L2, 7.5GB L3, 75GB L4) then I could allocate ~85GB per OSD
and feel confident there wouldn't be any spill over onto the
slow HDDs. I am far from an expert on RocksDB, so I might be
overlooking something important here.
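To spell out the arithmetic behind those numbers, here is a rough
sketch (my assumptions, which may well be wrong: a level size
multiplier of 10, L1 sized at max_bytes_for_level_base, and L0 /
write buffers ignored):

# Rough estimate of cumulative RocksDB level capacity.
# Assumptions (mine, not verified): level multiplier of 10,
# L1 = max_bytes_for_level_base, L0 and write buffers ignored.
def cumulative_capacity(level_base_bytes, levels, multiplier=10):
    """Total capacity of levels L1..Ln for a given base size."""
    return sum(level_base_bytes * multiplier ** i for i in range(levels))

MB, GB = 10**6, 10**9

# A ~300MB base: 3 levels ~= 33GB, 4 levels ~= 333GB, which is
# roughly the 30GB vs 300GB jump mentioned above.
print(cumulative_capacity(300 * MB, 3) / GB)  # ~33.3
print(cumulative_capacity(300 * MB, 4) / GB)  # ~333.3

# 25% of that (75MB base): 4 levels ~= 83GB, hence ~85GB per OSD.
print(cumulative_capacity(75 * MB, 4) / GB)   # ~83.3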
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com