Erasure coding

Hi guys,

We've got a very small Ceph cluster (3 hosts, 5 OSDs each, for cold data) that we intend to grow later on as more storage is needed. We would very much like to use erasure coding for some pools, but we are facing some challenges choosing the optimal initial erasure-code profile settings given the limited number of hosts across which we can spread the chunks. Could somebody please help me with the following questions?

  1. Suppose we initially use replication instead of erasure coding. Can we convert a replicated pool to an erasure-coded pool later on?

  2. Will Ceph gain the ability to change the k and m values of an existing pool in the near future?

  3. Can the failure domain be changed for an existing pool? E.g., can we start with failure domain "osd" and then switch it to "host" after adding more hosts?

  4. Where can I find a good comparison of the available erasure code plugins that would allow me to properly decide which one suits our needs best?
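For context, here is a sketch of the kind of profile we are currently experimenting with. The profile name "ecprofile-k4m2", the pool name "ecpool", and the k=4/m=2 split are just placeholders for illustration; the point is using failure domain "osd" because 3 hosts cannot hold k+m = 6 chunks with failure domain "host":

```shell
# Create an EC profile that spreads chunks across OSDs rather than hosts.
# "ecprofile-k4m2" and "ecpool" are placeholder names; k=4/m=2 is one
# candidate split, not a recommendation.
ceph osd erasure-code-profile set ecprofile-k4m2 \
    k=4 m=2 \
    plugin=jerasure \
    crush-failure-domain=osd

# Inspect the resulting profile to verify the settings took effect.
ceph osd erasure-code-profile get ecprofile-k4m2

# Create an erasure-coded pool using that profile (32 PGs as an example).
ceph osd pool create ecpool 32 32 erasure ecprofile-k4m2
```

With failure domain "osd", losing a whole host can of course take out more than m chunks of a placement group, which is exactly why we would like to know whether we can switch to "host" later (question 3).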

Many thanks for your help!

Tom

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
