Hi Dave,
Probably not a complete list, but I know of two useful ways to get the
configuration of a BlueStore OSD:
1/ the /show-label/ command of the /ceph-bluestore-tool/ utility
Ex:
$ sudo ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0/
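If the OSD was deployed with a separate DB/WAL device, the data
directory should also contain /block.db/ (and possibly /block.wal/)
symlinks, and /show-label/ can read a device directly via /--dev/. A
quick sketch (paths are illustrative for my setup):
$ ls -l /var/lib/ceph/osd/ceph-0/block*
$ sudo ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block.db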
2/ the /config show/ and /perf dump/ commands via the OSD's admin
socket (the /--admin-daemon/ option)
Ex:
$ sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
and
$ sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
The latter two are quite verbose, but you can grep for the specific
information you need...
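Since you are specifically after the WAL/DB/cache layout, grepping
/config show/ for the relevant option families narrows it down
quickly. A sketch (option names as I see them on Nautilus; adjust the
OSD id for your nodes):
$ sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep -E 'bluestore_(block|cache)'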
Indeed, it would be really useful to have this kind of information on
the Ceph dashboard :)
For example, the space used by the DB (in my case about 2 GB out of
80 GB on a separate NVMe WAL/DB device):
$ sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | grep db
"db_total_bytes": 80015777792,
"db_used_bytes": 2092949504,
"max_bytes_db": 2092949504,
...
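If you have /jq/ installed, the same counters can be pulled for every
OSD on a host in one go. A rough sketch (the /bluefs/ section name is
what I see in my Nautilus output; the wal_* counters stay at 0 if
there is no dedicated WAL device):
$ for sock in /var/run/ceph/ceph-osd.*.asok; do echo "$sock:"; sudo ceph --admin-daemon "$sock" perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, wal_total_bytes, wal_used_bytes}'; done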
Hope this helps! Regards,
Hervé
On 05/05/2020 01:52, Dave Hall wrote:
Hello,
Sorry if this has been asked before...
A few months ago I deployed a small Nautilus cluster using
ceph-ansible. The OSD nodes have multiple spinning drives and a PCIe
NVMe drive. Now that the cluster has been stable for a while, it's time to
start optimizing performance.
While I can tell that there is a part of the NVMe associated with each
OSD, I'm trying to verify which BlueStore components are using the
NVMe - WAL, DB, Cache - and whether the configuration generated by
ceph-ansible (and my settings in osds.yml) is optimal for my hardware.
I've searched around a bit and, while I have found documentation on
how to configure, reconfigure, and repair a BlueStore OSD, I haven't
found anything on how to query the current configuration.
Could anybody point me to a command or link to documentation on this?
Thanks.
-Dave