Re: total storage size available in my CEPH setup?

Hi,

>> My question is how much total CEPH storage does this allow me? Only 2.3TB? Or does the way CEPH replicates data enable more than 1/3 of the storage?
> 3 means 3, so 2.3TB. Note that Ceph is sparse (thin-provisioned), so that can help quite a bit.

To expand on this, you probably want to keep some margin and not run your cluster at 100% :) (especially if you are running RBD with thin provisioning). By default, “ceph status” will issue a warning at 85% full (the mon_osd_nearfull_ratio setting). You should also keep enough free space for self-healing to work: when an OSD fails, its data is re-replicated onto the remaining OSDs, so this only matters if you have more than 3 OSDs in a size=3 pool (with exactly 3, there is nowhere to recover to anyway).
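As a back-of-the-envelope using the numbers from the original question (so treat the raw figure as an assumption): ~7TB raw / 3 replicas ≈ 2.3TB usable, and staying below the default 85% nearfull threshold leaves you roughly 2TB you can actually fill.

If you want to inspect or adjust those thresholds, something like the following should work on Luminous or later (exact output format varies by release):

    # show the configured full / backfillfull / nearfull ratios
    ceph osd dump | grep ratio

    # raw vs. usable capacity, taking replication into account
    ceph df

    # example: raise the nearfull warning threshold to 90%
    ceph osd set-nearfull-ratio 0.90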

Cheers,
Maxime 




