Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

Hi Phil,

thanks for the background info.

On 17.11.20 at 01:51, Phil Merricks wrote:

> 1. Move off the data and scrap the cluster as it stands currently
> (already under way).
> 2. Group the block devices into pools of the same geometry and type (and
> maybe do some tiering?)
> 3. Spread the OSDs across all 3 nodes so recovery scope isn't so easily
> compromised by a loss at the bare metal level.
> 4. Add more hosts/OSDs if EC is the right solution (this may be outside
> the scope of this implementation, but I'll keep a-cobblin'!)

This looks like a plan.
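
For steps 2 and 3 you usually do not need separate CRUSH trees per
device type: tag each OSD with a device class and point each pool at a
rule that only selects that class. A rough sketch -- the pool name
"media", rule name "media-hdd" and osd.3 are made up here, adjust to
your setup:

# re-tag an OSD (run "ceph osd crush rm-device-class osd.3" first
# if a class is already set)
ceph osd crush set-device-class hdd osd.3

# replicated rule that only picks hdd OSDs, one copy per host
ceph osd crush rule create-replicated media-hdd default host hdd

# switch an existing pool to that rule
ceph osd pool set media crush_rule media-hdd

For step 3, note that an OSD re-registers under its new host bucket
when it restarts on that node, as long as osd_crush_update_on_start is
left at its default (true).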

> 
> The additional ceph outputs follow:
> ceph osd tree <https://termbin.com/vq63>
> ceph osd erasure-code-profile get cephfs-media-ec <https://termbin.com/h33h>

Your EC profile will not work on two hosts:

crush-device-class=
crush-failure-domain=host
crush-root=default
k=2
m=2

With crush-failure-domain=host, each of the k+m=4 EC chunks must land
on a separate host, but your CRUSH map only shows two hosts. CRUSH
cannot find enough placements, which is why all your PGs are undersized
and degraded.
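
If you cannot add two more hosts, a profile with
crush-failure-domain=osd would let the four chunks fit on the hosts you
have, at the cost of host-level fault tolerance. An EC profile cannot
be changed on an existing pool, so this means creating a new pool and
migrating the data. A sketch with made-up profile/pool names and PG
counts:

# keep k=2/m=2 but let chunks share a host
# (a single host failure can then cost more than m chunks!)
ceph osd erasure-code-profile set cephfs-media-ec-osd \
    k=2 m=2 crush-failure-domain=osd
ceph osd pool create cephfs-media-new 64 64 erasure cephfs-media-ec-osd

With the three nodes from your step 3, k=2 m=1 with
crush-failure-domain=host would also fit, but m=1 tolerates only one
lost chunk.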

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory disclosures per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin


