Large number of empty objects in unused CephFS data pool

I created a CephFS filesystem using the mgr dashboard, which created two pools: cephfs.fs.meta and cephfs.fs.data.

We use custom provisioning for user-defined volumes (users submit YAML manifests describing what they want), which creates a dedicated data pool for each volume. So cephfs.fs.data is never used for anything; it is essentially empty, even though the dashboard reports about 120 KB in it (probably metadata from the new subvolume API we are using).
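
For context, the per-volume provisioning boils down to commands along these lines (the pool, filesystem, and subvolume names here are made up for illustration):

ceph osd pool create cephfs.fs.user-vol1
ceph fs add_data_pool fs cephfs.fs.user-vol1
# create the subvolume with its file layout pointed at the dedicated pool
ceph fs subvolume create fs user-vol1 --pool_layout cephfs.fs.user-vol1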

I wanted to decrease the number of PGs in that unused cephfs.fs.data pool, and to my surprise I received a warning about uneven object distribution (too many objects per PG).
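
For the record, the shrink attempt and the resulting warning looked roughly like this (the target pg_num below is just an example); if I read the docs right, the warning is the MANY_OBJECTS_PER_PG health check, driven by mon_pg_warn_max_object_skew:

ceph osd pool set cephfs.fs.data pg_num 32   # try to reduce the PG count
ceph health detail                           # flags the objects-per-PG skew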

So I dug deeper and found that there are in fact many objects in the seemingly empty cephfs.fs.data:

PROD [root@ceph-drc-mgmt ~]# rados -p cephfs.fs.data ls | head
100002a3b4a.00000000
100001a1876.00000000
100001265b5.00000000
100002af216.00000000
100004e07ec.00000000
1000053a31f.00000000
10000455214.00000000
100003e4c36.00000000
1000049e91a.00000000
100005d0bc7.00000000

When I tried dumping any of those objects, they turned out to be empty, 0 bytes each. But there are over 7 million of them:

PROD [root@ceph-drc-mgmt ~]# rados -p cephfs.fs.data ls | wc -l
7260394
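
For anyone who wants to reproduce, individual objects can be inspected like this (taking the first name from the listing above; they all look alike):

rados -p cephfs.fs.data stat 100002a3b4a.00000000       # reports size 0
rados -p cephfs.fs.data listxattr 100002a3b4a.00000000  # shows any xattrs carried by the object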

Why does that unused pool contain 7 million empty objects? Is this some kind of MDS bug? The cluster is running 18.2.2.
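
For completeness, cephfs.fs.data is the filesystem's first (and therefore default) data pool, which can be confirmed with:

ceph fs ls   # lists the metadata pool and the data pools; the first data pool is the default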

Thanks