Re: Questions on Erasure Coding

Hello Dave,

you can configure CRUSH to pick multiple OSDs per host, so the pool works
more like a classic RAID. It will mean downtime whenever you have to do
maintenance on a node, but if you plan to grow the cluster quickly, it may
be an option for you.
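
For reference, a minimal sketch of what that could look like on the CLI
(the profile and pool names below are just examples, not from your cluster):
an erasure-code profile with crush-failure-domain=osd places the k+m chunks
on individual OSDs rather than requiring k+m separate hosts.

  # EC profile that uses the OSD (not the host) as the failure domain
  ceph osd erasure-code-profile set ec-8-2-osd k=8 m=2 crush-failure-domain=osd

  # erasure-coded pool created from that profile
  ceph osd pool create ec_data 128 128 erasure ec-8-2-osd

  # needed if the pool will be used as a CephFS data pool
  ceph osd pool set ec_data allow_ec_overwrites true

The trade-off is the maintenance issue mentioned above: with only three hosts
and the failure domain set to osd, more than m chunks of a placement group can
end up on the same host, so taking a node down can make some data unavailable
until it comes back.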

--
Martin Verges
Managing director

Hint: Secure one of the last slots in the upcoming 4-day Ceph Intensive
Training at https://croit.io/training/4-days-ceph-in-depth-training.

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Sun, Feb 2, 2020 at 5:11 AM Dave Hall <kdhall@xxxxxxxxxxxxxx> wrote:

> Hello.
>
> Thanks to advice from bauen1 I now have OSDs on Debian/Nautilus and have
> been able to move on to MDS and CephFS.  Also, looking around in the
> Dashboard, I noticed the option for Crush Failure Domain and, further,
> that it's possible to select 'OSD'.
>
> As I mentioned earlier, our cluster is fairly small at this point (3
> hosts, 24 OSDs), but we want to get as much usable storage as possible
> until we can get more nodes.  Since the nodes are brand new, we are
> probably more concerned about disk failures than about node failures for
> the next few months.
>
> If I interpret Crush Failure Domain = OSD correctly, it means it's possible
> to create pools that behave somewhat like RAID 6 - something along the lines
> of 8 + 2, except dispersed across multiple nodes.  With the pool spread
> around like this, losing any one disk shouldn't put the cluster into
> read-only mode - and if a disk did fail, would the cluster re-balance and
> reconstruct the lost data until the failed OSD was replaced?
>
> Does this make sense?  Or is it just wishful thinking?
>
> Thanks.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


