Re: Erasure pool

2017-11-08 22:05 GMT+01:00 Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>:

Can anyone advise on an erasure pool config to store

- files between 500MB and 8GB, total 8TB
- just for archiving, not much reading (few files a week)
- hdd pool
- now 3 node cluster (4th coming)
- would like to save on storage space

I was thinking of a profile with jerasure k=3 m=2, but maybe LRC
is better? Or should I wait for the 4th node and choose k=4 m=2?


Just to keep in mind:

In a three-node setup with k=3 and m=2 you will have to set the failure domain to 'osd' (the default failure domain of 'host' would require 5 nodes).
Furthermore, with 'osd' as the failure domain you would probably have (some) inaccessible data when a node reboots and/or fails, since there is a chance that 3 (or more) of the 5 chunks land on the same node.
The same goes for 4 nodes with k=4 m=2 (failure domain 'host' would require 6 nodes).
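For reference, such a profile could be created roughly like this (a sketch only; the profile name, pool name, and PG count are placeholder values, assuming a Luminous-era ceph CLI):

```shell
# Create an erasure-code profile: 3 data chunks + 2 coding chunks,
# with chunks placed per-OSD rather than per-host (needed on 3 nodes),
# restricted to HDD OSDs.
ceph osd erasure-code-profile set ec32_osd \
    k=3 m=2 \
    crush-failure-domain=osd \
    crush-device-class=hdd

# Create an erasure-coded pool using that profile (PG count illustrative).
ceph osd pool create ecpool 64 64 erasure ec32_osd
```

On the storage-space question: with k=3 m=2 the raw overhead is (k+m)/k ≈ 1.67x, so ~8 TB of data would use roughly 13.3 TB raw, compared with 24 TB under 3x replication.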

Caspar


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

