crushtool -i; more info from output?

Dear ceph-users,

We want to optimise our CRUSH rules further, and to test adjustments without impacting the cluster we use crushtool to show the mappings.

e.g.:
crushtool -i crushmap.16 --test --num-rep 4 --show-mappings --rule 0 | tail -n 10
CRUSH rule 0 x 1014 [121,125,195,197]
CRUSH rule 0 x 1015 [20,1,40,151]
CRUSH rule 0 x 1016 [194,244,158,3]
CRUSH rule 0 x 1017 [39,113,242,179]
CRUSH rule 0 x 1018 [131,113,199,179]
CRUSH rule 0 x 1019 [64,63,221,181]
CRUSH rule 0 x 1020 [26,111,188,179]
CRUSH rule 0 x 1021 [125,78,247,214]
CRUSH rule 0 x 1022 [48,125,246,258]
CRUSH rule 0 x 1023 [0,88,237,211]

The OSD numbers in brackets are not the full story, of course...

It would be nice to see more information about the location hierarchy that is in the crushmap, because we want to make sure the redundancy is spread optimally across our datacenters and racks/hosts. With the current output, this requires extra lookups to find the locations of the OSDs before we can be sure.

Since the information is already in the crushmap, I was wondering whether someone has already hacked up a wrapper script that looks up the locations of the OSDs, or whether work is ongoing to add an option to crushtool to output the locations alongside the OSD numbers?

If not, I might write a wrapper myself...
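
Roughly what I have in mind is a small, untested filter along these lines: it reads a decompiled crushmap (crushtool -d crushmap.16 -o crushmap.txt), builds an osd -> location path from the bucket hierarchy, and annotates the OSD ids in the --show-mappings output. File names like annotate_mappings.py and crushmap.txt are just placeholders here, and the parsing of the decompiled map is a best-effort sketch, not a finished tool.

#!/usr/bin/env python3
"""Annotate 'crushtool --show-mappings' output with the CRUSH location
(datacenter/rack/host/...) of each OSD, based on a decompiled crushmap.

Assumed usage:
  crushtool -d crushmap.16 -o crushmap.txt
  crushtool -i crushmap.16 --test --num-rep 4 --show-mappings --rule 0 \
      | ./annotate_mappings.py crushmap.txt | tail -n 10
"""
import re
import sys

BUCKET_RE = re.compile(r'^(\S+)\s+(\S+)\s*{')   # e.g. "host node-a {"
ITEM_RE = re.compile(r'^\s*item\s+(\S+)\s')     # e.g. "  item osd.12 weight 1.8"
OSD_LIST_RE = re.compile(r'\[([0-9,]+)\]')      # the "[121,125,...]" part

def parse_crushmap(path):
    """Return a child -> (parent type, parent name) map from a decompiled crushmap."""
    parent = {}
    current = None
    with open(path) as f:
        for line in f:
            m = BUCKET_RE.match(line)
            # Only bucket definitions open a hierarchy block; skip rule blocks etc.
            if m and m.group(1) not in ('rule', 'tunable', 'device', 'type'):
                current = (m.group(1), m.group(2))
                continue
            if line.strip() == '}':
                current = None
                continue
            m = ITEM_RE.match(line)
            if m and current:
                parent[m.group(1)] = current
    return parent

def location(osd_id, parent):
    """Walk up the hierarchy from osd.<id>, e.g. returning 'dc1/rack3/node-a'."""
    path = []
    node = 'osd.%d' % osd_id
    while node in parent:
        ptype, pname = parent[node]
        if ptype != 'root':          # the root bucket adds no useful information
            path.append(pname)
        node = pname
    return '/'.join(reversed(path))

def main():
    if len(sys.argv) != 2:
        sys.exit('usage: annotate_mappings.py <decompiled-crushmap.txt> < mappings')
    parent = parse_crushmap(sys.argv[1])
    for line in sys.stdin:
        m = OSD_LIST_RE.search(line)
        if not m:
            print(line, end='')      # pass through lines without an OSD list
            continue
        osds = [int(x) for x in m.group(1).split(',')]
        notes = ', '.join('%d=%s' % (o, location(o, parent)) for o in osds)
        print('%s  (%s)' % (line.rstrip(), notes))

if __name__ == '__main__':
    main()

With something like this, each mapping line would gain a trailing list of osd=datacenter/rack/host annotations, which should make it easy to spot placements that violate the intended failure-domain spread.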

Cheers

/Simon
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


