Re: Editing Crush Map to fix osd_crush_chooseleaf_type = 0


 



Although I'm confused about the error from crushtool, the results themselves look fine. The show-mappings output is a long list of possible mappings to OSDs. If you provide num-rep 3, you should see three different OSDs in each line, and if the rule works correctly those OSDs never map to the same host. You can verify that by running 'ceph osd find ID' for some of those sets. If show-bad-mappings doesn't print anything, it means CRUSH found a valid set of OSDs for every attempt.
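
For reference, a rough sketch of the test invocations being discussed; the map file name (crushmap.bin) and rule id (0) are placeholders, not taken from this thread:

    crushtool -i crushmap.bin --test --show-mappings --rule 0 --num-rep 3
    crushtool -i crushmap.bin --test --show-bad-mappings --rule 0 --num-rep 3
    ceph osd find <osd-id>    # check which host a chosen OSD lives on

Each --show-mappings line looks roughly like 'CRUSH rule 0 x 1022 [4,7,1]'; the bracketed ids are one candidate set of three OSDs, and running 'ceph osd find' on each id shows whether they really sit on different hosts.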


Quoting Matt Dunavant <mdunavant@xxxxxxxxxxxxxxxxxx>:

My replica size on the pool is 3, so I'll use that to test. There is no other bucket type in my map like dc, rack, etc.; just servers. Do you know what a successful run of the test command looks like? I just ran it myself and it spits out a long list of CRUSH mappings (1024 in this case) and then ends with:

double free or corruption (out)
*** Caught signal (Aborted) **
 in thread 7f4915c58dc0 thread_name:crushtool

The --show-bad-mappings version shows nothing and then ends with the same crash output as above. I'm going to assume that means it's working properly?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





