Re: How big could an OSD disk be?

If you have a small cluster without host redundancy, you can still
configure Ceph to handle this correctly by adding a drive failure
domain between the host and OSD levels. So yes, you need to change
more than just failure-domain=OSD, as that alone would be a problem.
However, it is essentially the same as having multiple OSDs per NVMe,
as some people do.
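
Roughly, the idea looks like this (a sketch only; the "drive" type
name, the rule name and the file names below are just placeholders,
adapt them to your cluster): decompile the CRUSH map, insert an extra
bucket type between osd and host, group each device's OSDs into one
of those buckets, and let the EC rule choose leaves of that type
instead of osd.

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt

    # in crush.txt, renumber the type list so a custom level sits
    # between osd and host, for example:
    #
    #   type 0 osd
    #   type 1 drive
    #   type 2 host
    #   ...
    #
    # then define one "drive" bucket per physical device, move its
    # OSDs into it, and give the EC pool a rule that separates
    # chunks per drive instead of per OSD:
    #
    #   rule ec_by_drive {
    #       id 2
    #       type erasure
    #       step set_chooseleaf_tries 5
    #       step set_choose_tries 100
    #       step take default
    #       step chooseleaf indep 0 type drive
    #       step emit
    #   }

    # recompile and inject the edited map
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new

With a rule like that, two chunks of the same PG never end up on OSDs
that share one physical device, which covers the multiple-OSDs-per-NVMe
case as well.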

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

On Sat, 13 Mar 2021 at 13:11, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
>
> > Well, if you run with failure-domain=host, then whether it is "I have 8
> > 14TB drives and one failed" or "I have 16 7TB drives and two failed"
> > isn't going to matter much in terms of recovery, is it?
> > It would mostly matter for failure-domain=OSD; otherwise it seems about
> > equal.
>
> Yes, but especially in small clusters, people are changing the failure domain to osd to be able to use EC (like I have ;))