Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

> Am 11.11.20 um 11:20 schrieb Hans van den Bogert:
>> Hoping to learn from this myself, why will the current setup never work?

That was a bit harshly put. Without seeing your EC profile and the topology, it’s hard to say for certain, but I suspect that adding another node with at least one larger OSD might help. The information Hans asked for would let us say for sure.
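
If it helps, something like the following should capture what Hans is after (the profile name "myprofile" below is only a placeholder for whatever your data pool actually references):

  # Overall health, including the exact degraded/undersized PG warnings
  ceph health detail

  # Pools, their sizes, and which EC profile / CRUSH rule each one uses
  ceph osd pool ls detail

  # List the EC profiles, then dump the one your data pool references
  # ("myprofile" is a placeholder, use the name from the pool listing)
  ceph osd erasure-code-profile ls
  ceph osd erasure-code-profile get myprofile

  # The CRUSH rules, to see what failure domain the EC rule expects
  ceph osd crush rule dump

With k+m from the profile and the failure domain from the rule, it becomes much easier to say whether the current layout can ever place all the shards.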

> There are only 4 OSDs in the cluster, with a mix of HDD and SSD.
> And they try to use erasure coding on that small setup.

Agreed on both points; neither is ideal, but it’s not clear that even the OP thinks it is.

> Erasure coding starts to work with at least 7 to 10 nodes and a
> corresponding number of OSDs.
> 
> This cluster is too small to do any amount of "real" work.

Agreed, but again we don’t *know* that they expect it to.

Is this just a PoC / demo cluster cobbled together out of whatever was lying around? The OSDs, by chance, aren’t running on top of RAID volumes, are they? How many nodes?

Is it possible that this cluster previously had more nodes/OSDs that were removed?

I’ll speculate that the 4 OSDs are spread across a total of 2 nodes?
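
The output of these two would settle that, and also show how full each OSD is:

  # Hosts and OSDs as CRUSH sees them (how many nodes the 4 OSDs span)
  ceph osd tree

  # Per-OSD utilization laid out along the same tree
  ceph osd df tree

If it really is 2 hosts and the EC rule wants host as its failure domain, every PG will stay undersized no matter how long the cluster sits.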

— aad




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



