Re: losing one node from a 3-node cluster

Hello Robert,
thank you for your reply. So what am I missing?
I thought that if I have 3 nodes, each with 16 TB on 4 OSDs, i.e. 12 OSDs with roughly 44 TiB in total, a pool with size=3/min_size=2 would lead me to either:
nearly 14 TB of total pool size, knowing that in case of a lost node there would be no re-distribution because no OSD space is left, so the cluster state would be degraded, which is acceptable for short-term hardware/software maintenance, or
66% of 16 TB (approx. 11 TB), so that a re-distribution across the remaining 2 nodes would be possible.
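
The rough arithmetic behind those two figures (just a sketch, assuming 12 OSDs of about 4 TB each):

    # raw capacity:  12 OSDs x ~4 TB  ~= 48 TB (~44 TiB)
    # with size=3:   44 TiB / 3       ~= 14 TiB usable   -> my first figure
    # 2/3 of one node's 16 TB         ~= 11 TB           -> my second figure
    ceph df        # RAW STORAGE vs. per-pool MAX AVAIL
    ceph osd df    # per-OSD sizes and utilisation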
 
The point is, the pool sizes of roughly 9 and 6 TB advertised to the operating system, which seemed valid to me in the sense of "near to 16 TB", were never configured by me.
In Proxmox, all I did was create a new pool and leave everything at the defaults.
I also don't understand why two pools with the same number of PGs and the same size/min_size parameters end up as 6 TB for one and 9 TB for the other.
Any clue to that?
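In case it helps, these are the settings I could compare for the two pools (using the pool names from the ceph df output):

    ceph osd pool ls detail              # replicated size, min_size, pg_num per pool
    ceph osd pool get ceph-vm size
    ceph osd pool get ceph-iso_data size
    ceph df detail                       # STORED / USED / MAX AVAIL per pool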
regards,
felix
 
Sent: Tuesday, 5 April 2022 at 10:44
From: "Robert Sander" <r.sander@xxxxxxxxxxxxxxxxxxx>
To: ceph-users@xxxxxxx
Subject: Re: losing one node from a 3-node cluster
Hi,

On 05.04.22 at 02:53, Felix Joussein wrote:

> As the command outputs below show, ceph-iso_metadata consumes 19TB
> according to ceph df; however, the mounted ceph-iso filesystem is
> only 9.2TB big.

The values nearly add up.

ceph-vm has 2.7 TiB stored and 8.3 TiB used (3x replication).
ceph-iso_data has 6.1 TiB stored and 19 TiB used (3x replication).

Your total capacity is 44 TiB, used is 27 TiB (19 + 8.3), leaving 17 TiB
of capacity. Divided by 3 (for the replication factor) that is ca. 6 TiB.
Yet your max avail space for the pools is only 3 TiB.

AFAIK the available space computation takes data distribution and
nearfull ratio (85%) into account.

https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd/#no-free-drive-space
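
You can double-check the ratios configured on your cluster with, for
example:

    ceph osd dump | grep -i ratio   # full_ratio, backfillfull_ratio, nearfull_ratio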

How are the individual OSDs used?
Can you post the output of "ceph osd df tree"?

"df -h" will show an artificial filesystem size which is the sum of the
used space and the available space of the CephFS data pool. I.e. 6.1 TiB
+ 3.0 TiB makes a df size of 9.2 TiB.
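
Illustratively, on a client (assuming the CephFS is mounted at
/mnt/cephfs, which is just an example path):

    df -h /mnt/cephfs   # Size ~9.2T = 6.1T used + ~3.0T max avail in the data pool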

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory disclosures per § 35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
