Questions on Erasure Coding

Hello.

Thanks to advice from bauen1 I now have OSDs running on Debian/Nautilus and have been able to move on to MDS and CephFS.  While looking around in the Dashboard, I also noticed the options for CRUSH failure domain, and in particular that it's possible to select 'OSD'.

As I mentioned earlier, our cluster is fairly small at this point (3 hosts, 24 OSDs), but we want to get as much usable storage as possible until we can add more nodes.  Since the nodes are brand new, we are probably more concerned about disk failures than node failures for the next few months.
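(To put rough numbers on it, assuming for illustration 12 TB per OSD - not our actual drive size - 24 OSDs give 288 TB raw.  An 8+2 EC pool writes 10 chunks for every 8 chunks of data, so usable capacity would be about 288 * 8/10 = 230 TB, versus about 288 / 3 = 96 TB with the default 3x replication.)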

If I interpret CRUSH failure domain = OSD correctly, it means it's possible to create pools that behave somewhat like RAID 6: something like 8 + 2, except dispersed across multiple nodes.  With the pool spread around like this, losing any one disk shouldn't put the cluster into read-only mode.  If a disk did fail, would the cluster rebalance and reconstruct the lost data until the failed OSD was replaced?
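If that's right, I'm guessing the setup would look something like the following (the profile name, pool name, and filesystem name are just placeholders, and the PG count is a guess for a cluster our size):

    # EC profile: 8 data + 2 coding chunks, failure domain = individual OSD
    ceph osd erasure-code-profile set ec-8-2-osd k=8 m=2 crush-failure-domain=osd

    # Create an erasure-coded pool using that profile
    ceph osd pool create cephfs_ec_data 128 128 erasure ec-8-2-osd

    # CephFS needs EC overwrites enabled before it can use the pool for data
    ceph osd pool set cephfs_ec_data allow_ec_overwrites true
    ceph fs add_data_pool cephfs cephfs_ec_data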

Does this make sense?  Or is it just wishful thinking?

Thanks.

-Dave

--
Dave Hall
Binghamton University
