cephfs file layouts, empty objects in first data pool

Hi,

Running Ceph 14.2.6 (Nautilus) on Debian buster (backports).

Have set up a cephfs with 3 data pools and one metadata pool:
myfs_data, myfs_data_hdd, myfs_data_ssd, and myfs_metadata.

The data of all files is directed, via ceph.dir.layout.pool, to either myfs_data_hdd or myfs_data_ssd. I have also verified this by dumping the ceph.file.layout.pool attribute of every file.
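
For reference, a minimal sketch of how the layouts were set and checked with the usual virtual xattrs (the directory names and the /mnt/cephfs mount point below are just placeholders):

    # pin all new files below a directory to one of the data pools
    setfattr -n ceph.dir.layout.pool -v myfs_data_ssd /mnt/cephfs/fast
    setfattr -n ceph.dir.layout.pool -v myfs_data_hdd /mnt/cephfs/bulk

    # check which pool an individual file actually ended up in
    getfattr -n ceph.file.layout.pool /mnt/cephfs/fast/somefile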

The filesystem has 1617949 files and 36042 directories.

There are, however, approximately as many objects in the first data pool created for the cephfs, myfs_data, as there are files. Their number also grows and shrinks as files are created or deleted, so they cannot be leftovers from earlier exercises. Note how the USED size is reported as 0 bytes, correctly reflecting that no file data is stored in them ('rados df' output):

POOL_NAME        USED OBJECTS CLONES  COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED  RD_OPS      RD   WR_OPS      WR USED COMPR UNDER COMPR
myfs_data         0 B 1618229      0 4854687                  0       0        0 2263590 129 GiB 23312479 124 GiB        0 B         0 B
myfs_data_hdd 831 GiB  136309      0  408927                  0       0        0  106046 200 GiB   269084 277 GiB        0 B         0 B
myfs_data_ssd  43 GiB 1552412      0 4657236                  0       0        0  181468 2.3 GiB  4661935  12 GiB        0 B         0 B
myfs_metadata 1.2 GiB   36096      0  108288                  0       0        0 4828623  82 GiB  1355102 143 GiB        0 B         0 B
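
In case it matters, the counts can be compared along these lines (assuming the filesystem is mounted at /mnt/cephfs; listing the pool takes a while with this many objects):

    # files and directories in the filesystem
    find /mnt/cephfs -type f | wc -l
    find /mnt/cephfs -type d | wc -l

    # objects in the first data pool (matches the OBJECTS column above)
    rados -p myfs_data ls | wc -l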

Is this expected?

I was assuming that in this scenario all objects, both their data and any keys, would live either in the metadata pool or in the two pools where the file data is actually stored.

Is it some additional metadata keys that are being stored in the first data pool created for the cephfs? That would not be so nice if the OSD selection rules for that pool use worse disks than the pools holding the data itself...
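
To see what, if anything, hangs off one of these zero-byte objects, something like the following should work (<object> is a placeholder for whatever 'rados ls' returns):

    # pick one of the objects in the first data pool
    rados -p myfs_data ls | head -1

    # object size and mtime
    rados -p myfs_data stat <object>

    # xattrs and omap keys, if any
    rados -p myfs_data listxattr <object>
    rados -p myfs_data listomapkeys <object>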

Btw: is there any tool to see the amount of key-value (omap) data associated with a pool? 'ceph osd df' shows omap and meta per OSD, but not broken down per pool.
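
The closest workaround I can think of is a brute-force loop over the objects, something like the sketch below (slow, and only a rough upper bound, since 'listomapvals' prints a hex dump of each value rather than the raw bytes):

    pool=myfs_data
    total=0
    for obj in $(rados -p "$pool" ls); do
        # sum the (hex-dumped) omap output per object
        sz=$(rados -p "$pool" listomapvals "$obj" | wc -c)
        total=$((total + sz))
    done
    echo "approximate omap bytes in $pool: $total"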

Best regards,
Håkan