RocksDB configuration

Dear Cephalopodians,

in the benchmarks with many files, I noted that our bottleneck was mainly the MDS-SSD performance,
and notably, after deleting the many files in CephFS, the RocksDB stayed large and did not shrink.
Recreating an OSD from scratch and backfilling it, however, resulted in a smaller RocksDB.

I noticed some interesting messages in the logs of OSDs at startup:
 set rocksdb option compaction_readahead_size = 2097152
 set rocksdb option compression = kNoCompression
 set rocksdb option max_write_buffer_number = 4
 set rocksdb option min_write_buffer_number_to_merge = 1
 set rocksdb option recycle_log_file_num = 4
 set rocksdb option writable_file_max_buffer_size = 0
 set rocksdb option write_buffer_size = 268435456
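
From what I can tell, these values all come from the single option string bluestore_rocksdb_options, which BlueStore hands to RocksDB at startup, so overriding them should just be a matter of editing that string. A minimal sketch for ceph.conf, assuming the Luminous-era default string (untested on my side, and the exact default may differ per release):

 [osd]
 # change individual key=value pairs as needed, keep the rest of the string intact
 bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152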

Now I wonder: Can these be configured via Ceph parameters?
Can / should one trigger compaction with ceph-kvstore-tool, and is this safe when the corresponding OSD is down? Has anybody tested it?
Is there a fixed time slot when compaction starts (e.g. at low load average)?
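
For completeness, the commands I have in mind are roughly the following - just a sketch, not something I have tried yet, and osd.12 / the data path are only placeholders:

 # offline compaction: the OSD must be stopped first
 systemctl stop ceph-osd@12
 ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact
 systemctl start ceph-osd@12

 # online compaction via the admin socket of a running OSD
 ceph daemon osd.12 compact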

I'm especially curious whether compression would help to reduce the write load on the metadata servers - maybe not, since the synchronization of I/O has to happen in any case,
and that is more likely to be the actual limit than the bulk I/O.
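
If someone wants to try it, I would guess it is just a matter of changing the compression entry in the same option string and restarting the OSDs, along these lines (again only a sketch; LZ4 picked arbitrarily, assuming it is compiled into the bundled RocksDB):

 # check the option string currently in effect on a running OSD
 ceph daemon osd.12 config get bluestore_rocksdb_options

 # then set compression=kLZ4Compression in bluestore_rocksdb_options in ceph.conf
 # and restart the OSDs backing the metadata pool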

Just being curious! 

Cheers,
	Oliver

