Hi,

On 09.08.21 at 12:56, Jorge JP wrote:
> 15 x 12TB = 180TB
> 8 x 18TB = 144TB

How are these distributed across your nodes, and what is the failure
domain? I.e. how will Ceph distribute the data among them?

> The raw size of this cluster (HDD) should be 295TB after format, but
> the size of my "primary" pool (2/1) at this moment is:

A pool with a size of 2 and a min_size of 1 will lead to data loss.

> 53.50% (65.49 TiB of 122.41 TiB)
>
> 122.41 TiB multiplied by a replication factor of 2 is 244 TiB, not 295 TiB.
>
> How can I use the full capacity of the class?

If you have 3 nodes with 5 x 12TB each (60TB) and 2 nodes with 4 x 18TB
each (72TB), the maximum usable capacity will not be the sum of all
disks. Remember that Ceph tries to distribute the data evenly.

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory disclosures per §35a GmbHG: HRB 93818 B /
Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin
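PS: a quick back-of-the-envelope check of the numbers in this thread (a sketch, assuming the drive sizes are decimal TB while Ceph reports binary TiB, and an ideal perfectly even distribution; real clusters fall short of this because the pool is effectively full once the fullest OSD is full):

```python
# Sanity check of the capacity figures quoted above.
TB = 10**12   # decimal terabyte (drive vendor units)
TiB = 2**40   # binary tebibyte (what Ceph reports)

raw_tb = 15 * 12 + 8 * 18        # 324 TB of raw HDD capacity
raw_tib = raw_tb * TB / TiB      # ~294.7 TiB -- the "295TB after format"
ideal_pool = raw_tib / 2         # ~147.3 TiB ideal ceiling at replication 2

print(round(raw_tib, 1), round(ideal_pool, 1))
```

The reported 122.41 TiB is below the ~147 TiB ideal precisely because CRUSH distributes data statistically, not perfectly, so the available capacity is bounded by the fullest OSD rather than the average.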
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx