Re: crushtool -i; more info from output?

In case anyone is interested: I hacked up some more Perl code that parses the tree output of crushtool, so the locations come from the new crushmap itself instead of from the production info in the running cluster.

See: https://gist.github.com/pooh22/53960df4744efd9d7e0261ff92e7e8f4
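
For anyone who just wants the rough idea without clicking through, below is a minimal, untested sketch of the same approach. Note that, unlike the gist, it reads a *decompiled* crushmap (e.g. "crushtool -d crushmap.16 -o crushmap.16.txt") rather than the --tree output, because the decompiled bucket format is compact to parse here; the file name, script name and regexes are my assumptions, not the gist's actual code.

#!/usr/bin/perl
# Sketch: annotate "crushtool ... --show-mappings" lines with the location
# of each OSD, taken from a *decompiled* crushmap, e.g. produced with
#   crushtool -d crushmap.16 -o crushmap.16.txt
# The regexes are assumptions about the decompiled format and may need
# tweaking for your Ceph version.
use strict;
use warnings;

my $mapfile = shift or die "usage: $0 decompiled-crushmap < mappings\n";

# Build a child -> parent map from the bucket definitions.
my (%parent, $bucket);
open my $fh, '<', $mapfile or die "$mapfile: $!\n";
while (<$fh>) {
    if (/^(\S+)\s+(\S+)\s*\{/ and $1 ne 'rule') { $bucket = $2; next; }
    if (defined $bucket and /^\s*item\s+(\S+)/) { $parent{$1} = $bucket; }
    $bucket = undef if /^\}/;
}
close $fh;

# Walk up the hierarchy from an OSD, e.g. "host/rack/datacenter/root".
sub location {
    my ($osd) = @_;
    my @up;
    for (my $n = "osd.$osd"; exists $parent{$n}; $n = $parent{$n}) {
        push @up, $parent{$n};
    }
    return join('/', @up);
}

# Annotate lines like: CRUSH rule 0 x 1023 [0,88,237,211]
while (<STDIN>) {
    chomp;
    if (/\[([\d,]+)\]/) {
        my @osds = split /,/, $1;
        print "$_  ", join('  ', map { "osd.$_ => " . location($_) } @osds), "\n";
    } else {
        print "$_\n";
    }
}

With something like that, the example from the earlier mail would become (script name hypothetical):

crushtool -i crushmap.16 --test --num-rep 4 --show-mappings --rule 0 | ./annotate-mappings.pl crushmap.16.txt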

Cheers

/Simon

On 02/12/2021 13:23, Simon Oosthoek wrote:
On 02/12/2021 10:20, Simon Oosthoek wrote:
Dear ceph-users,

We want to optimise our CRUSH rules further, and to test adjustments without impacting the cluster we use crushtool to show the mappings.

eg:
crushtool -i crushmap.16 --test --num-rep 4 --show-mappings --rule 0 | tail -n 10
CRUSH rule 0 x 1014 [121,125,195,197]
CRUSH rule 0 x 1015 [20,1,40,151]
CRUSH rule 0 x 1016 [194,244,158,3]
CRUSH rule 0 x 1017 [39,113,242,179]
CRUSH rule 0 x 1018 [131,113,199,179]
CRUSH rule 0 x 1019 [64,63,221,181]
CRUSH rule 0 x 1020 [26,111,188,179]
CRUSH rule 0 x 1021 [125,78,247,214]
CRUSH rule 0 x 1022 [48,125,246,258]
CRUSH rule 0 x 1023 [0,88,237,211]

The OSD numbers in brackets are not the full story, of course...

It would be nice to see more info about the location hierarchy that is in the crushmap, because we want to make sure the redundancy is spread optimally across our datacenters and racks/hosts. In the current output, this requires lookups to find out the locations of the OSDs before we can be sure.
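
(For a single OSD from the mapping above, something like

  ceph osd find 121

on the live cluster shows its crush_location, if I recall correctly, but doing that for every OSD in every mapping quickly gets tedious.)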

Since the info is already in the crushmap, I was wondering whether someone has already hacked up a wrapper script that looks up the locations of the OSDs, or whether work is ongoing to add an option to crushtool to output the locations along with the OSD numbers.

If not, I might write a wrapper myself...


Dear list,

I created a very rudimentary parser; just pipe the output of the crushtool -i command to this script.

In the script you can uncomment either the full location tree info or just the top-level location.

The script is here:
https://gist.github.com/pooh22/5065d7c8777e6f07b0801d0b30c027d2
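
For example (assuming you save the gist as parse-mappings.pl; adjust the name to taste):

crushtool -i crushmap.16 --test --num-rep 4 --show-mappings --rule 0 | ./parse-mappings.pl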

Please use it as you like; comments and improvements are of course welcome...

/Simon
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
