Re: Ceph cluster does not recover after OSD down

Nice observation. How can we avoid this problem?


On 5/5/21 at 14:54, Robert Sander wrote:
Hi,

On 05.05.21 at 13:39, Joachim Kraftmayer wrote:

the CRUSH rule with ID 1 distributes your EC chunks over the OSDs
without considering the Ceph host, as Robert already suspected.
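
For reference, the rule definition can be inspected directly; for an
EC rule the dump should contain a "chooseleaf_indep" (or
"choose_indep") step, and in this case its type will be "osd" rather
than "host":

    ceph osd crush rule dump nxtcloudAF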

Yes, the "nxtcloudAF" rule is not fault-tolerant enough. Having the OSD
as the failure domain will lead to data loss, or at least temporary
unavailability.
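
A sketch of the usual fix (the profile, rule, and pool names below are
placeholders, and the k/m values must match the pool's existing EC
profile):

    # new EC profile with host as the failure domain
    ceph osd erasure-code-profile set ec-host k=4 m=2 crush-failure-domain=host
    # create a matching rule and point the pool at it
    ceph osd crush rule create-erasure nxtcloudAF-host ec-host
    ceph osd pool set <pool> crush_rule nxtcloudAF-host

Switching the rule makes the affected PGs rebalance onto OSDs of
different hosts.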

The situation now is that all copies (or, for EC pools, all chunks) of
a PG can end up on OSDs of the same host. These PGs become unavailable
when that host goes down.
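
This can be checked per PG ("ceph pg map" prints the PG's up/acting
OSD set, and "ceph osd find" shows the host of a given OSD; the PG ID
and OSD ID below are just examples):

    ceph pg map 2.1f
    ceph osd find 12

If all OSDs in the "up" set of a PG resolve to the same host, that PG
is affected.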

Regards




--
*******************************************************
Andrés Rojas Guerrero
Unidad Sistemas Linux
Area Arquitectura Tecnológica
Secretaría General Adjunta de Informática
Consejo Superior de Investigaciones Científicas (CSIC)
Pinar 19
28006 - Madrid
Tel: +34 915680059 -- Ext. 990059
email: a.rojas@xxxxxxx
ID comunicate.csic.es: @50852720l:matrix.csic.es
*******************************************************
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



