Hi Igor,
Thank you for that information. This means I would have to reduce "write_buffer_size" to shrink the L0 size, and also reduce "max_bytes_for_level_base" so that the L1 size matches.
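For concreteness, I imagine the override would look something like this in ceph.conf (the values are only illustrative, not a tested recommendation: write_buffer_size cut to 75MB, and max_bytes_for_level_base set to 4x that, assuming RocksDB's default level0_file_num_compaction_trigger of 4 so L1 roughly matches the total L0 size):

[osd]
bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=78643200,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_bytes_for_level_base=314572800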
Does anyone on the list have experience making these kinds of modifications? Or better yet some benchmarks?
I found a mailing list reference [1] saying RBD workloads need about 24KB of RocksDB metadata per onode, and that the average object size is ~2.8MB. Even at the advertised best-case throughput for an HDD (~200MB/s) that is only ~70 objects per second, which would generate roughly 1.7MB/s of writes to RocksDB. If write_buffer_size were reduced to 75MB (down from the 256MB default), it would still take about 45 seconds to fill. With a more realistic number for sustained HDD write throughput, it would take well over a minute. That sounds like a rather large buffer to me...
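For what it's worth, here is that back-of-envelope math as a quick Python sketch (the 24KB/onode and 2.8MB object figures come from [1]; the 200MB/s best-case HDD throughput is my own assumption):

# Rough estimate: how long does one RBD-driven HDD OSD take to fill a
# single RocksDB memtable (write_buffer_size)?

hdd_throughput = 200e6       # bytes/s, advertised best-case HDD rate (assumption)
avg_object_size = 2.8e6      # bytes, average RBD object size, per [1]
onode_overhead = 24e3        # bytes written to RocksDB per onode, per [1]

objects_per_sec = hdd_throughput / avg_object_size    # ~71 objects/s
rocksdb_rate = objects_per_sec * onode_overhead       # ~1.7 MB/s

write_buffer_size = 75e6     # bytes, the reduced memtable size discussed above
fill_time = write_buffer_size / rocksdb_rate          # ~44 s

print(f"{objects_per_sec:.0f} obj/s -> {rocksdb_rate / 1e6:.2f} MB/s into RocksDB")
print(f"a {write_buffer_size / 1e6:.0f} MB buffer fills in ~{fill_time:.0f} s at best-case throughput")

Halving the assumed HDD throughput pushes the fill time to roughly 90 seconds, which is where my "well over a minute" figure comes from.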
From: Igor Fedotov [ifedotov@xxxxxxx]
Sent: Tuesday, November 13, 2018 3:44 AM
To: Brendan Moloney; ceph-users@xxxxxxxxxxxxxx
Subject: Re: SSD sizing for Bluestore

Hi Brendan,

in fact you can alter RocksDB settings by using the bluestore_rocksdb_options config parameter, and hence change "max_bytes_for_level_base" and others. Not sure about dynamic level sizing though.

Current defaults are:
"compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152"

Thanks,
Igor

On 11/13/2018 5:19 AM, Brendan Moloney wrote: