Actually, it's more like 41TB. It's a bad idea to run near full
capacity (by default past 85%) because you need some free space where
Ceph can replicate data as part of its healing process in the event
of a disk or node failure. You'll get a health warning when you exceed
that ratio.
this ratio. You can use erasure coding to increase the amount of data you can store beyond 41TB, but you'll still need some replicated disk as a caching layer in front of the erasure coded pool if you're using RBD. See: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-December/036430.html As to how much space you can save with erasure coding, that will depend on if you're using RBD and need a cache layer and the values you set for k and m (number of data chunks and coding chunks). There's been some discussion on the list with regards to choosing those values. -Steve On 03/12/2015 10:07 AM, Thomas Foster
wrote:
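The k/m trade-off mentioned above can be sketched as follows. The specific (k, m) pairs are illustrative assumptions, not recommendations:

```python
# Hedged sketch: fraction of raw space that holds actual data under
# k+m erasure coding, compared with plain replication.
def usable_fraction_ec(k, m):
    """With k data chunks and m coding chunks, k/(k+m) of raw space is data."""
    return k / (k + m)

def usable_fraction_replicated(size=3):
    """With size-way replication, 1/size of raw space is data."""
    return 1.0 / size

# Example (k, m) values chosen for illustration only.
for k, m in [(4, 2), (8, 3), (10, 4)]:
    print(f"k={k}, m={m}: {usable_fraction_ec(k, m):.0%} of raw usable "
          f"(vs {usable_fraction_replicated():.0%} with 3x replication)")
```

So, for example, k=4/m=2 doubles the usable fraction relative to 3x replication, before accounting for the replicated cache tier an RBD workload would still need.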
--
Steve Anthony
LTS HPC Support Specialist
Lehigh University
sma310@xxxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com