I'm afraid I was celebrating a bit too early. There is no improved accounting for data, specifically alloc-size on disks. The same file system mounted:

10.41.24.13,10.41.24.14,10.41.24.15:/      2.5T  173G  2.3T   7% /mnt/adm/cephfs
10.41.24.13,10.41.24.14,10.41.24.15:/data  2.0T   37G  2.0T   2% /mnt/cephfs

The folder "data" contains all data. The difference is that on "data" a quota is set, while on "/" there isn't. On "/" df simply reports the same as "ceph df", while on "/data" df reports "ceph.dir.rbytes".

It would be really great to have alloc-size counters. I opened a feature request: https://tracker.ceph.com/issues/56949

Of course, a better solution would be tail-merging.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Frank Schilder <frans@xxxxxx>
Sent: 27 July 2022 17:43:36
To: ceph-users@xxxxxxx
Subject: ceph fs virtual attribute reporting bluestore allocation

Hi all,

I am testing octopus 15.2.16 and found an interesting difference to Mimic. The file system usage reported by df now seems to show the proper bluestore allocation as opposed to cumulative file sizes. That is great. I assume there is a ceph virtual extended attribute collecting this information. Unfortunately, I cannot figure out what it is. Can anyone help me out?

What I'm looking for is this:

# df -h
Filesystem                             Size  Used Avail Use% Mounted on
10.41.24.13,10.41.24.14,10.41.24.15:/  2.2T  170G  2.1T   8% /mnt/cephfs

OK, 170G storage allocated.

# getfattr -n ceph.dir.rbytes /mnt/cephfs
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs
ceph.dir.rbytes="38871650661"

Only 38G of file data. So, how do I get at the 170G of allocated storage that df reports? A pointer to the relevant code section would help in case code is the only documentation.

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
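
As a reference for reproducing the quota-dependent df behaviour described above, here is a minimal sketch using the standard CephFS virtual xattrs. The 2 TiB value and the paths are only illustrative (taken from the listing above), not the exact commands used on that cluster:

# setting a quota on the sub-directory changes what df reports for a mount
# of that path: the quota shows up as "Size" and ceph.dir.rbytes as "Used"
setfattr -n ceph.quota.max_bytes -v 2199023255552 /mnt/adm/cephfs/data

# read back the quota and the recursive file-size counter
getfattr -n ceph.quota.max_bytes /mnt/adm/cephfs/data
getfattr -n ceph.dir.rbytes /mnt/adm/cephfs/data

# without a quota, df on the mount simply reports the "ceph df" totals,
# i.e. raw allocation rather than cumulative file sizes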