Re: 6 PG's stuck not-active, remapped

Hi,


On 10/21/20 10:01 PM, Mac Wynkoop wrote:

*snipsnap*
up:     [0: 113, 1: 138, 2: 30, 3: 132, 4: 105, 5: 57, 6: 106, 7: 140, 8: 161]
acting: [0: 72, 1: 150, 2: 2147483647, 3: 2147483647, 4: 24, 5: 48, 6: 32, 7: 157, 8: 103]

2147483647 (0x7fffffff) is the placeholder value CRUSH uses for "no OSD" (CRUSH_ITEM_NONE). It means the CRUSH algorithm did not produce enough OSDs to satisfy the PG's requirements (e.g. fewer than three distinct OSDs for a replicated pool with size=3).
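You can spot the unfilled slots programmatically. A minimal sketch; the acting list is transcribed from the (somewhat mangled) listing above, and the constant matches CRUSH_ITEM_NONE in Ceph's crush.h:

```python
# CRUSH_ITEM_NONE is defined as 0x7fffffff in Ceph's crush.h.
CRUSH_ITEM_NONE = 0x7fffffff  # == 2147483647

# The acting set reported for the stuck PG (transcribed from the post above).
acting = [72, 150, 2147483647, 2147483647, 24, 48, 32, 157, 103]

# Positions CRUSH failed to map to a real OSD.
unfilled = [i for i, osd in enumerate(acting) if osd == CRUSH_ITEM_NONE]
print(unfilled)  # -> [2, 3]
```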

You mentioned that some disks are currently offline; if they are marked out, your current cluster layout might no longer satisfy your CRUSH rules. Bring the disks back online, or (if you do have sufficient hosts/OSDs) raise the number of attempts CRUSH makes before giving up (the choose_total_tries tunable). Due to its pseudo-random nature, CRUSH may not always manage to select three distinct hosts out of only three available ones ;-)
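To illustrate that last point, here is a toy model (not real CRUSH code; the host names and the try limit of 5 are made up for the demo) of why a bounded pseudo-random search can fail to find three distinct hosts even when exactly three exist:

```python
import random

def pick_distinct(candidates, n, max_tries):
    """Toy stand-in for CRUSH's bounded retry loop: draw pseudo-randomly,
    skip duplicates, and give up after max_tries draws. Not real CRUSH code."""
    chosen = []
    for _ in range(max_tries):
        c = random.choice(candidates)
        if c not in chosen:
            chosen.append(c)
        if len(chosen) == n:
            return chosen
    return chosen  # may come up short -- analogous to the 2147483647 slots

random.seed(42)  # deterministic demo
hosts = ["host-a", "host-b", "host-c"]  # hypothetical host names
failures = sum(len(pick_distinct(hosts, 3, 5)) < 3 for _ in range(10_000))
print(f"{failures} of 10000 placements failed with only 5 tries")
```

With only five draws per placement, a noticeable fraction of attempts never covers all three hosts; a larger try budget (choose_total_tries defaults to 50 in Ceph) makes such failures vanishingly rare. On a real cluster you would adjust this by decompiling the CRUSH map (ceph osd getcrushmap, then crushtool -d), editing the tunable, recompiling, and re-injecting it with ceph osd setcrushmap.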

Regards,

Burkhard
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



