Re: losing one node from a 3-node cluster

Hi,

On 05.04.22 at 02:53, Felix Joussein wrote:

As the command outputs below show, ceph-iso_metadata consumes 19 TB according to ceph df; however, the mounted ceph-iso filesystem is only 9.2 TB in size.

The values nearly add up.

ceph-vm has 2.7 TiB stored and 8.3 TiB used (3x replication).
ceph-iso_data has 6.1 TiB stored and 19 TiB used (3x replication).

Your total capacity is 44 TiB and 27 TiB of it is used (19 + 8.3), leaving 17 TiB of raw capacity. Divided by 3 that is about 6 TiB, yet the MAX AVAIL shown for the pools is only 3 TiB.
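
Spelled out as a quick back-of-the-envelope check (only a sketch, using the rounded numbers from your "ceph df" output and assuming size=3 on both pools):

    # Back-of-the-envelope check of the "ceph df" numbers,
    # assuming size=3 replication on both pools (my assumption).
    replication = 3

    stored_vm  = 2.7    # TiB STORED in ceph-vm
    stored_iso = 6.1    # TiB STORED in ceph-iso_data

    raw_used  = (stored_vm + stored_iso) * replication   # ~26.4 TiB raw (8.3 + 19)
    raw_total = 44.0                                      # TiB total capacity
    raw_free  = raw_total - raw_used                      # ~17.6 TiB raw

    # naive per-pool headroom, ignoring distribution and the nearfull ratio:
    print(raw_free / replication)                         # ~5.9 TiB, not the 3 TiB MAX AVAIL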

AFAIK the available-space computation takes data distribution and the nearfull ratio (85%) into account.

https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd/#no-free-drive-space
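
Just to illustrate the effect (invented numbers, not Ceph's exact computation): with uneven data distribution the fullest OSD caps the usable space, not the average one:

    # Illustration only, with hypothetical per-OSD fill levels -- the fullest
    # OSD limits how much a pool can still accept, even if the average
    # utilisation looks much lower.
    nearfull_ratio = 0.85
    osd_fill = [0.55, 0.62, 0.79]   # hypothetical utilisation per OSD

    headroom_fullest = nearfull_ratio - max(osd_fill)                  # ~0.06, little left
    headroom_average = nearfull_ratio - sum(osd_fill) / len(osd_fill)  # ~0.20
    print(headroom_fullest, headroom_average)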

How are the individual OSDs used?
Can you post the output of "ceph osd df tree"?

"df -h" will show an artificial filesystem size which is the sum of the used space and the available space of the CephFS data pool. I.e. 6.1 TiB + 3.0 TiB makes a df size of 9.2 TiB.

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory information according to §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



