Hi Sam,

On Tue, Mar 29, 2022 at 6:17 AM Clippinger, Sam <Sam.Clippinger@xxxxxxxxxx> wrote:

> I'm trying to understand how best to tune OSD daemon memory usage. The two parameters I'm setting are osd_memory_target and bluestore_cache_size_ssd but I don't really understand what they do. The OSD daemons seem to use both values, but what exactly do they store in those memory areas? Should I increase one value more than the other, should I keep them equal? The documentation is maddeningly vague about it.

https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/ may not be perfectly clear on this, but the idea is that osd_memory_target is the total amount of memory the OSD should consume, and the remaining cache parameters then set the relative share of that target used for each purpose. In practice, OSDs often consume more than their memory target for a variety of reasons. The "Manual Cache Sizing" section has a bit more information on how the bluestore cache parameters work.

> [osd]
> bluestore_cache_autotune = 0

Why are you turning autotuning off?

> bluestore_cache_size_ssd = 10Gi
> osd_memory_target = 6Gi

Setting the bluestore cache size larger than the memory target runs contrary to the purpose of osd_memory_target; I'm actually not sure what the OSD's allocator does in this scenario.

> After a day or two, each daemon uses between 15-20 GiB RAM.

It can be helpful to get a mempool dump in this situation to see what's using all the memory. On the OSD node, run "ceph daemon osd.XXX dump_mempools" (or, on Octopus and later, "ceph tell osd.XXX dump_mempools" from any node with admin access).

Josh
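
P.S. If it helps, this is a minimal sketch of the [osd] section I would start from: it simply leaves autotuning at its default and lets osd_memory_target drive the cache sizing. The 6Gi here is just the target from your mail, not a recommendation; size it for your hardware.

[osd]
# keep cache autotuning on (the default) so the bluestore caches are sized within the target
bluestore_cache_autotune = true
# rough total memory the OSD should try to stay under, caches included
osd_memory_target = 6Gi
# no bluestore_cache_size_ssd here; that option is for manual cache sizing with autotuning off

You can also adjust the target at runtime with something like "ceph config set osd osd_memory_target 6Gi" and read it back per OSD with "ceph config get osd.0 osd_memory_target".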