Re: Size of cluster

Hello, this is my osd tree:

ID   CLASS  WEIGHT     TYPE NAME
 -1         312.14557  root default
 -3          68.97755      host pveceph01
  3    hdd   10.91409          osd.3
 14    hdd   16.37109          osd.14
 15    hdd   16.37109          osd.15
 20    hdd   10.91409          osd.20
 23    hdd   10.91409          osd.23
  0    ssd    3.49309          osd.0
 -5          68.97755      host pveceph02
  4    hdd   10.91409          osd.4
 13    hdd   16.37109          osd.13
 16    hdd   16.37109          osd.16
 21    hdd   10.91409          osd.21
 24    hdd   10.91409          osd.24
  1    ssd    3.49309          osd.1
 -7          68.97755      host pveceph03
  6    hdd   10.91409          osd.6
 12    hdd   16.37109          osd.12
 17    hdd   16.37109          osd.17
 22    hdd   10.91409          osd.22
 25    hdd   10.91409          osd.25
  2    ssd    3.49309          osd.2
-13          52.60646      host pveceph04
  9    hdd   10.91409          osd.9
 11    hdd   16.37109          osd.11
 18    hdd   10.91409          osd.18
 26    hdd   10.91409          osd.26
  5    ssd    3.49309          osd.5
-16          52.60646      host pveceph05
  8    hdd   10.91409          osd.8
 10    hdd   16.37109          osd.10
 19    hdd   10.91409          osd.19
 27    hdd   10.91409          osd.27
  7    ssd    3.49309          osd.7

Sorry, but how do I check the failure domain? I seem to remember that my failure domain is host.
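
For reference, I think this is how to check it from the CLI (pool and rule names below are placeholders for my actual ones):

  ceph osd pool get <poolname> crush_rule
  ceph osd crush rule dump <rulename>

If the rule dump shows a chooseleaf step with "type": "host", then the failure domain is host.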

Regards.

________________________________
From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
Sent: Monday, August 9, 2021 13:40
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: Size of cluster

Hi,

On 09.08.21 at 12:56, Jorge JP wrote:

> 15 x 12TB = 180TB
> 8 x 18TB = 144TB

How are these distributed across your nodes and what is the failure
domain? I.e. how will Ceph distribute data among them?

> The raw size of this cluster (HDD) should be 295 TiB after formatting, but the size of my "primary" pool (2/1) at the moment is:

A pool with a size of 2 and a min_size of 1 will lead to data loss.
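
You can check and raise this per pool (replace <poolname>; size 3 with min_size 2 is the usual recommendation, adjust to your needs):

  ceph osd pool get <poolname> size
  ceph osd pool get <poolname> min_size
  ceph osd pool set <poolname> size 3
  ceph osd pool set <poolname> min_size 2

Note that going from size 2 to size 3 reduces usable capacity (raw divided by 3 instead of 2) and will trigger backfill.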

> 53.50% (65.49 TiB of 122.41 TiB)
>
> 122.41 TiB multiplied by a replication factor of 2 is ~245 TiB, not 295 TiB.
>
> How can I use the full capacity of this class?

If you have 3 nodes with 5x 12TB each (60TB) and 2 nodes with 4x 18TB each
(72TB), the maximum usable capacity will not be the sum of all disks.
Remember that Ceph tries to distribute the data evenly.
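
As a rough sketch with the numbers from this thread (HDD only, treating the CRUSH weights as TiB):

  15 x 12 TB + 8 x 18 TB = 324 TB ~= 294.7 TiB raw (the "295" figure)
  294.7 TiB / 2 replicas      ~= 147 TiB theoretical ceiling

The pool will typically report less than that (122.41 TiB here), mainly because CRUSH places data proportionally to weight, so the fullest OSD limits the pool, and the full ratio is factored into MAX AVAIL. "ceph df detail" and "ceph osd df tree" show the MAX AVAIL and per-OSD utilisation behind that number.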

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



