Re: Ceph cluster does not recover after OSD down

On 05.05.21 at 12:34, Andres Rojas Guerrero wrote:
> Thanks for the answer.
> 
>> For the default redundancy rule and pool size 3 you need three separate
>> hosts.
> 
> I have 24 separate server nodes, each with 32 OSDs, 768 OSDs in total.
> My question is why the MDS suffers when only 4% of the OSDs go down
> (all in the same node). Do I need to modify the CRUSH map?

With an unmodified CRUSH map and the default placement rule this should
not happen: the rule picks each replica from a different host, so losing
all the OSDs of a single node still leaves the other copies intact.
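
For reference, the default replicated rule usually decompiles (via
crushtool) to something like the sketch below; exact id, name and the
min/max_size lines vary by release. The important part is the chooseleaf
step with failure domain "host", which puts every replica on a different
node:

    rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        # take the whole "default" root, then pick each replica
        # from a different host bucket
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }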

Can you please show the output of "ceph osd crush rule dump"?
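
If it helps, the standard CLI calls below also show which rule and sizes
each pool actually uses ("<poolname>" is just a placeholder here):

    # dump all CRUSH rules as JSON
    ceph osd crush rule dump

    # which CRUSH rule, replica count and min_size a pool uses
    ceph osd pool get <poolname> crush_rule
    ceph osd pool get <poolname> size
    ceph osd pool get <poolname> min_size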

Regards
-- 
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing director: Peer Heinlein -- Registered office: Berlin

