Re: cephfs inode backtrace information

On 29/03/2024 04:18, Niklas Hambüchen wrote:
> Hi Loïc, I'm surprised by that high storage amount: my "default" pool uses only ~512 bytes per file, not ~32 KiB like in your pool. That's a 64x difference!
>
> (See also my other response to the original post, which I just sent.)
>
> I'm using Ceph 16.2.1.
Hello,
We actually traced the source of this issue to a configuration mistake: the data pool was not set properly on a client directory.

The directories for this client had "a few" large (tens of GiB) files, which were stored in the "default" pool and used up a lot of space.
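
For anyone running into the same mistake: the per-directory data pool is controlled through the CephFS file layout xattrs, so the fix is essentially something like the command below, run on a client with the filesystem mounted (the mount point and directory name are placeholders, not our actual paths; also note that a new layout only applies to files created afterwards, existing files have to be copied or rewritten for their data to move to the new pool):

setfattr -n ceph.dir.layout.pool -v cephfs_EC_data /mnt/cephfs/CLIENT_DIR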

With this client's data moved to where it belongs:
[ceph: root@NODE /]# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    6.1 PiB  3.8 PiB  2.3 PiB   2.3 PiB      37.15
ssd     52 TiB   49 TiB  3.2 TiB   3.2 TiB       6.04
TOTAL  6.1 PiB  3.9 PiB  2.3 PiB   2.3 PiB      36.89

--- POOLS ---
POOL                  ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics  2     1  710 MiB      664  2.1 GiB      0     15 TiB
cephfs_EC_data         3  8192  1.7 PiB  606.79M  2.1 PiB  38.13    2.8 PiB
cephfs_metadata        4   128  101 GiB   14.55M  304 GiB   0.64     15 TiB
cephfs_default         5   128      0 B  162.90M      0 B      0     15 TiB
[...]

So the "correct" stored value for the default pool should be 0 bytes.
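
For reference, the effective layout (and thus the data pool) of a directory can be checked from a client with the layout virtual xattr; the path below is again just a placeholder:

getfattr -n ceph.dir.layout /mnt/cephfs/CLIENT_DIR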


Loïc.
--
|       Loïc Tortay <tortay@xxxxxxxxxxx> - IN2P3 Computing Centre      |