Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

> With this profile you can only lose one OSD at a time, which is really
> not that redundant.
That's rather situation-dependent. I don't have particularly large disks,
so the repair time isn't that long.
Further, my SLO isn't so strict that I need 99.xxx% uptime. If two disks
broke in the same repair window, that would be unfortunate, but I'd just
restore a backup from a mirroring cluster. Looking at it from another
perspective: I came from a single-host RAID5 setup, and I'd argue this is
better, since I can now survive a host failure.

Also, this is a sliding problem, right? Someone with K+3 could argue that
K+2 is not enough either.
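
For anyone following along: going to K+2 is mostly a matter of the
erasure-code profile on a new pool. Roughly something like this (the
profile name, pool name and the k/pg numbers are just examples, not
what I actually run):

    # 4 data chunks + 2 coding chunks: tolerates two failure domains
    # (hosts, with crush-failure-domain=host) being down at once
    ceph osd erasure-code-profile set ec-k4-m2 k=4 m=2 crush-failure-domain=host
    ceph osd erasure-code-profile get ec-k4-m2
    # k/m of an existing EC pool can't be changed in place, so this
    # means creating a new pool with the profile and migrating data
    ceph osd pool create ecpool 64 64 erasure ec-k4-m2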

Hans
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx