Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

On 12.11.20 at 23:18, Phil Merricks wrote:
> Thanks for the reply Robert.  Could you briefly explain the issue with
> the current setup and "what good looks like" here, or point me to some
> documentation that would help me figure that out myself?
> 
> I'm guessing here it has something to do with the different sizes and
> types of disk, and possibly the EC crush rule setup?

The cluster is just too small to do anything useful. It can be used to
learn Ceph but really not for much more.

Erasure coding needs a lot of CPU and is usually run on a large number
of nodes (more than 10) with a proportional number of OSDs.
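
As a rough illustration (the profile and pool names here are only
examples), an EC profile with k=4, m=2 and failure domain "host"
already needs at least six hosts to place each PG:

  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 erasure ec-4-2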

Mixing HDDs and SSDs in one pool is not good practice, as a pool should
only contain OSDs of the same speed.
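
If you want to keep both device types in the cluster, the usual
approach is CRUSH rules bound to a device class, so that each pool only
uses one type (rule and pool names below are only examples):

  ceph osd crush rule create-replicated replicated_hdd default host hdd
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  ceph osd pool set <pool> crush_rule replicated_hdd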

Kindest Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing director: Peer Heinlein -- Registered office: Berlin


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
