Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

Thanks for the reply, Robert. Could you briefly explain the issue with the
current setup and "what good looks like" here, or point me to some
documentation that would help me figure that out myself?

I'm guessing it has something to do with the mix of disk sizes and
types, and possibly the EC CRUSH rule setup?
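
For reference, here's roughly what I was planning to run to get a picture of
the OSD mix and the EC rule - just a sketch using the standard Octopus CLI;
the profile name on the "get" line is a placeholder for whatever the pool
actually uses:

    # Overall health and which PGs are degraded/undersized/behind on scrubs
    ceph status
    ceph health detail

    # OSD layout: per-host distribution, device classes (hdd/ssd) and sizes
    ceph osd tree
    ceph osd df tree

    # Pools, their EC profiles, and the CRUSH rules they reference
    ceph osd pool ls detail
    ceph osd erasure-code-profile ls
    ceph osd erasure-code-profile get <profile-name>   # placeholder name
    ceph osd crush rule dump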

Best regards

Phil Merricks

On Wed., Nov. 11, 2020, 1:30 a.m. Robert Sander, <
r.sander@xxxxxxxxxxxxxxxxxxx> wrote:

> On 07.11.20 at 01:14, seffyroff@xxxxxxxxx wrote:
> > I've inherited a Ceph Octopus cluster that seems like it needs urgent
> maintenance before data loss begins to happen. I'm the guy with the most
> Ceph experience on hand and that's not saying much. I'm experiencing most
> of the ops and repair tasks for the first time here.
>
> My condolences. Get the data off that cluster and take the cluster down.
>
> With the current setup it will never work.
>
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> http://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Mandatory disclosures per §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Managing director: Peer Heinlein -- Registered office: Berlin
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



