Hi Erwan,
If I totally ignored performance and even functionality, with the sole
goal of making the OSD fit within 512MB, these are the settings I'd
start out with to see if I could make it happen. This of course
requires a version of ceph recent enough to have osd_memory_target,
and bluestore would need to be in use.
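One quick way to check whether your build has the option (assuming an
OSD with id 0 running on the local node, since this goes through the
admin socket):

ceph daemon osd.0 config get osd_memory_target

If that reports an unknown option, the release is too old.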
Dropping the PG count to 8 (maybe even 1?) should also help
considerably, assuming replication really isn't required.
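As a rough sketch (the pool name "testpool" is just a placeholder), a
single-replica pool with 8 PGs would look something like:

ceph osd pool create testpool 8 8
ceph osd pool set testpool size 1
ceph osd pool set testpool min_size 1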
If you did all of this, you might even want to set the memory target a
bit smaller than 512MB (384MB?) to help account for temporary overages.
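That would be something like this in ceph.conf (384MB expressed in
bytes):

osd_memory_target = 402653184   # 384MB, leaves ~128MB of headroom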
Mark
On 11/30/18 4:09 AM, Erwan Velu wrote:
Mark,
To be sure I understand, do you want me to try these values to make
the 512MB cluster more RAM-friendly?
Erwan,
On 29/11/2018 at 20:45, Mark Nelson wrote:
Well, in that case go nuts. ;)
environment variable for OSD:
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=33554432   # 32MB thread cache cap
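One way to set that for systemd-managed OSDs is a drop-in override
(the path below follows the usual systemd convention; adjust the unit
name to your deployment):

# /etc/systemd/system/ceph-osd@.service.d/override.conf
[Service]
Environment=TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=33554432

followed by "systemctl daemon-reload" and a restart of the OSDs.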
ceph.conf:
# cap overall OSD memory and shrink the base/cache reservations
osd_memory_target = 536870912                  # 512MB
osd_memory_base = 268435456                    # 256MB
osd_memory_cache_min = 33554432                # 32MB
bluestore_cache_autotune_chunk_size = 8388608  # 8MB
# keep the PG log as short as possible
osd_min_pg_log_entries = 10
osd_max_pg_log_entries = 10
osd_pg_log_dups_tracked = 10
osd_pg_log_trim_min = 10
# one line: no compression, 8MB write buffers, 2MB compaction readahead
bluestore_rocksdb_options=compression=kNoCompression,max_write_buffer_number=2,min_write_buffer_number_to_merge=1,recycle_log_file_num=2,write_buffer_size=8388608,writable_file_max_buffer_size=0,compaction_readahead_size=2097152
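Note that some of these (the rocksdb options in particular) only take
effect when the OSD starts, but the memory target can usually be
adjusted on a running cluster, e.g.:

ceph tell osd.* injectargs '--osd_memory_target 536870912'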
and maybe screw around with various other throttles and limits, though
I'm not sure how much it would matter if there are no replicas. If
it's a single OSD, you could use 8 PGs or so for the pool too. ;)
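Whatever you end up tuning, you can watch where the OSD's memory is
actually going via the admin socket, e.g. for osd.0:

ceph daemon osd.0 dump_mempools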