Re: CephFS space usage

root@pmx101:/mnt/pve/iso# getfattr -n ceph.dir.rentries .
# file: .
ceph.dir.rentries="67"

On 15/03/2024 4:56 am, Bailey Allison wrote:
Hey All,

It might be easier to check using the CephFS directory stats via getfattr, e.g.:

getfattr -n ceph.dir.rentries /path/to/dir
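ceph.dir.rentries is one of several recursive stats the MDS publishes as virtual extended attributes; ceph.dir.rfiles and ceph.dir.rbytes are often the more useful pair when chasing space. A minimal sketch (the /mnt/cephfs path is a placeholder for your own mount; these attributes only exist on a CephFS mount, so on any other filesystem the helper just prints "n/a"):

```shell
# Sketch only: /mnt/cephfs is a placeholder for your actual mount point.
# ceph.dir.* are virtual xattrs served by the CephFS MDS, so on any other
# filesystem getfattr finds nothing and this falls back to "n/a".
rstat() {
    local attr=$1 dir=$2
    getfattr --only-values -n "$attr" "$dir" 2>/dev/null || echo "n/a"
}

rstat ceph.dir.rentries /mnt/cephfs   # recursive count of all entries
rstat ceph.dir.rfiles   /mnt/cephfs   # recursive count of regular files
rstat ceph.dir.rbytes   /mnt/cephfs   # recursive sum of file sizes
```

Comparing ceph.dir.rbytes at the filesystem root with the data pool's STORED figure is a quick way to see whether a discrepancy lives in the filesystem or in the pool.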

Regards,

Bailey

-----Original Message-----
From: Igor Fedotov <igor.fedotov@xxxxxxxx>
Sent: March 14, 2024 1:37 PM
To: Thorne Lawler <thorne@xxxxxxxxxxx>; ceph-users@xxxxxxx; etienne.menguy@xxxxxxxxxxx; vbogdan@xxxxxxxxx
Subject: Re: CephFS space usage

Thorn,

you might want to assess the number of files on the mounted fs by running "du
-h | wc". Does it differ drastically from the number of objects in the pool
(~3.8M)?

And just in case - please run "rados lssnap -p cephfs.shared.data".


Thanks,

Igor

On 3/14/2024 1:42 AM, Thorne Lawler wrote:
Igor, Etienne, Bogdan,

The system is a four node cluster. Each node has 12 3.8TB SSDs, and
each SSD is an OSD.

I have not defined any separate DB / WAL devices - this cluster is
mostly at cephadm defaults.

Everything is currently configured to have x3 replicas.

The system also does various RBD workloads from other pools.

There are no subvolumes and no snapshots on the CephFS volume in
question.
The CephFS volume I am concerned about is called 'shared'. For the
purposes of this question I am omitting information about the other pools.

[root@san1 ~]# rados df
POOL_NAME           USED     OBJECTS  CLONES  COPIES    MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS      RD       WR_OPS      WR       USED COMPR  UNDER COMPR
cephfs.shared.data  41 TiB   3834689  0       11504067  0                   0        0         3219785418  175 TiB  9330001764  229 TiB  7.0 MiB     12 MiB
cephfs.shared.meta  757 MiB  85       0       255       0                   0        0         5306018840  26 TiB   9170232158  24 TiB   0 B         0 B

total_objects    13169948
total_used       132 TiB
total_avail      33 TiB
total_space      166 TiB

[root@san1 ~]# ceph df detail
--- RAW STORAGE ---
CLASS   SIZE     AVAIL   USED     RAW USED  %RAW USED
ssd     166 TiB  33 TiB  132 TiB  132 TiB       79.82
TOTAL   166 TiB  33 TiB  132 TiB  132 TiB       79.82

--- POOLS ---
POOL                ID  PGS  STORED   (DATA)   (OMAP)  OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
cephfs.shared.meta   3   32  251 MiB  208 MiB  42 MiB       84  752 MiB  625 MiB  127 MiB      0    3.4 TiB  N/A            N/A          N/A    0 B         0 B
cephfs.shared.data   4  512  14 TiB   14 TiB   0 B      3.83M   41 TiB   41 TiB   0 B      79.90    3.4 TiB  N/A            N/A          N/A    7.0 MiB     12 MiB

[root@san1 ~]# ceph osd pool get cephfs.shared.data size
size: 3

...however running 'du' in the root directory of the 'shared' volume says:

# du -sh .
5.5T    .

So yeah, 14 TiB replicated x3 is ~41 TiB, that's fine, but 14 TiB is a lot
more than 5.5 TiB, so... where is that space going?
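The arithmetic can be sanity-checked with the figures quoted above. Assuming the default 4 MiB CephFS object size (an assumption; a custom file layout would change it), 14 TiB of stored data implies an object count close to what rados df reports:

```shell
# Back-of-envelope check using the figures from the ceph df output above.
stored_tib=14        # STORED for cephfs.shared.data
replicas=3           # pool size
raw_tib=$((stored_tib * replicas))
echo "expected raw usage: ${raw_tib} TiB"        # ~matches the 41 TiB USED

# At the default 4 MiB CephFS object size, 14 TiB of stored data
# corresponds to roughly this many RADOS objects:
implied_objects=$((stored_tib * 1024 * 1024 / 4))
echo "implied object count: ${implied_objects}"  # ~3.67M vs 3.83M observed
```

Since the implied count (~3.67M) is close to the observed 3.83M, the pool plausibly does hold ~14 TiB of object data, and the gap against the 5.5 TiB that du sees would then be objects the filesystem no longer references (for example, deleted files not yet purged) rather than an accounting artifact.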

On 14/03/2024 2:09 am, Igor Fedotov wrote:
Hi Thorn,

could you please share the output of "ceph df detail" command
representing the problem?


And please give an overview of your OSD layout: number of OSDs,
shared or dedicated DB/WAL, main and DB volume sizes.


Thanks,

Igor


On 3/13/2024 5:58 AM, Thorne Lawler wrote:
Hi everyone!

My Ceph cluster (17.2.6) has a CephFS volume which is showing 41TB
usage for the data pool, but there are only 5.5TB of files in it.
There are fewer than 100 files on the filesystem in total, so where
is all that space going?

How can I analyze my cephfs to understand what is using that space,
and if possible, how can I reclaim that space?

Thank you.

--

Regards,

Thorne Lawler - Senior System Administrator
*DDNS* | ABN 76 088 607 265
First registrar certified ISO 27001-2013 Data Security Standard ITGOV40172
P +61 499 449 170


*Please note:* The information contained in this email message and
any attached files may be confidential information, and may also be
the subject of legal professional privilege. If you are not the
intended recipient any use, disclosure or copying of this email is
unauthorised. If you received this email in error, please notify
Discount Domain Name Services Pty Ltd on 03 9815 6868 to report this
matter and delete all copies of this transmission together with any
attachments.

--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
--

Regards,

Thorne Lawler - Senior System Administrator
*DDNS* | ABN 76 088 607 265
First registrar certified ISO 27001-2013 Data Security Standard ITGOV40172
P +61 499 449 170



