Hi,
there are several ways to retrieve config information:

osd-host:~ # ceph daemon osd.0 config show | grep bluestore_min_alloc_size
"bluestore_min_alloc_size": "0",
"bluestore_min_alloc_size_hdd": "65536",
"bluestore_min_alloc_size_ssd": "4096",

osd-host:~ # ceph daemon osd.0 config get bluestore_min_alloc_size
{
"bluestore_min_alloc_size": "0"
}
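
If you are not logged in on the OSD host, a similar query should also work remotely on recent releases via "ceph tell" (just a sketch; I believe it accepts the same config subcommands as the admin socket, but verify on your version):

osd-host:~ # ceph tell osd.0 config get bluestore_min_alloc_size_hdd

And the centralized config database can be queried as well: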
osd-host:~ # ceph config get osd bluestore_min_alloc_size
osd-host:~ # ceph config get osd bluestore_min_alloc_size_hdd
65536
osd-host:~ # ceph config get osd.0 bluestore_min_alloc_size_hdd
65536
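
If you want to compare all OSDs at once, a quick shell loop over the OSD ids should do. This is only a sketch that repeats the "ceph config get" call from above for every id returned by "ceph osd ls":

osd-host:~ # for id in $(ceph osd ls); do echo -n "osd.$id: "; ceph config get osd.$id bluestore_min_alloc_size_hdd; done
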
Regards,
Eugen
Quoting 胡 玮文 <huww98@xxxxxxxxxxx>:
Hi all,
I’ve read on this mailing list that too high a bluestore_min_alloc_size
will result in too much wasted space if I have many small objects,
while too low a bluestore_min_alloc_size will reduce performance. I’ve
also read that this config can’t be changed after OSD creation.
Now I want to tune this config myself. I changed
bluestore_min_alloc_size_hdd from the default 64k to 32k, then deployed
some more OSDs with cephadm. I assume the old OSDs will continue to use
the old setting and the new OSDs will use the new one. But how can
I verify that? To be specific, how can I query the effective
min_alloc_size for a specific OSD?
I’ve tried “ceph daemon osd.X ...” but could not find the
appropriate command. I also searched the OSD logs.
Thanks.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx