Re: cephfs inode backtrace information

On 31/01/2024 20:13, Patrick Donnelly wrote:
> On Tue, Jan 30, 2024 at 5:03 AM Dietmar Rieder
> <dietmar.rieder@xxxxxxxxxxx> wrote:
>>
>> Hello,
>>
>> I have a question regarding the default pool of a cephfs.
>>
>> According to the docs it is recommended to use a fast ssd replicated
>> pool as default pool for cephfs. I'm asking what are the space
>> requirements for storing the inode backtrace information?
>
> The actual recommendation is to use a replicated pool for the default
> data pool. Regular hard drives are fine for the storage device.

Hello,
Is there a rule of thumb for the space requirements of the default pool (depending on the number of POSIX objects)?

One of our CephFS clusters is configured with a replicated default pool, but we find the space usage on that pool to be quite high given the (somewhat) moderate number of files:
[ceph: root@$NODE /]# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    5.9 PiB  3.9 PiB  2.0 PiB   2.0 PiB      33.53
ssd     51 TiB   35 TiB   15 TiB    15 TiB      30.08
TOTAL  5.9 PiB  3.9 PiB  2.0 PiB   2.0 PiB      33.50

--- POOLS ---
POOL                  ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics  2     1  733 MiB      664  2.1 GiB      0    9.9 TiB
cephfs_EC_data         3  8192  1.5 PiB  515.56M  1.9 PiB  34.22    2.9 PiB
cephfs_metadata        4   128   86 GiB   10.28M  259 GiB   0.84    9.9 TiB
cephfs_default         5   512  4.2 TiB  131.17M   13 TiB  29.87    9.9 TiB
[...]
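For context, the layout matches the documented recommendation (replicated default data pool, bulk file data on an EC data pool). A rough sketch of how such a setup is typically created, not our exact commands (pool names and PG counts simply mirror the output above, and /mnt/cephfs is a placeholder mount point):

  # replicated metadata and default data pools, then the filesystem itself
  ceph osd pool create cephfs_metadata 128 128 replicated
  ceph osd pool create cephfs_default 512 512 replicated
  ceph fs new cephfs cephfs_metadata cephfs_default

  # bulk file data goes to an erasure-coded pool added as a secondary data pool
  ceph osd pool create cephfs_EC_data 8192 8192 erasure
  ceph osd pool set cephfs_EC_data allow_ec_overwrites true
  ceph fs add_data_pool cephfs cephfs_EC_data

  # direct file data to the EC pool via a directory layout; the backtrace
  # objects still land in the default (replicated) data pool
  setfattr -n ceph.dir.layout.pool -v cephfs_EC_data /mnt/cephfs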

According to our statistics, there are about 132 million files and symlinks in the filesystem, which is consistent with the number of objects in the "cephfs_default" pool
(and likewise for the metadata pool and the ~10 million directories).
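For reference, the recursive statistics vxattrs report such counts directly from a mount point (the /mnt/cephfs path below is again just a placeholder):

  getfattr -n ceph.dir.rfiles   /mnt/cephfs   # recursive file count
  getfattr -n ceph.dir.rsubdirs /mnt/cephfs   # recursive directory count
  getfattr -n ceph.dir.rentries /mnt/cephfs   # files + directories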

But 4.2 TiB stored (~32 KiB per object) seems high; is this overhead expected?

This is a Pacific cluster (16.2.14) if that matters.
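As I understand it, each file gets a "head" object named <inode-number-in-hex>.00000000 in the default data pool, and the backtrace is stored in that object's "parent" xattr. A sketch of how to inspect one such object (the object name below is a made-up example):

  rados -p cephfs_default stat 10000000001.00000000
  rados -p cephfs_default listxattr 10000000001.00000000
  rados -p cephfs_default getxattr 10000000001.00000000 parent > /tmp/bt
  ceph-dencoder type inode_backtrace_t import /tmp/bt decode dump_json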


Have a nice day,
Loïc.
--
|       Loïc Tortay <tortay@xxxxxxxxxxx> - IN2P3 Computing Centre      |
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



