Hi Frank,

On Tue, Jan 18, 2022 at 4:54 AM Frank Schilder <frans@xxxxxx> wrote:
>
> Hi Dan and Patrick,
>
> this problem seems to be developing into a nightmare. I executed a find on
> the file system and had some initial success: the number of stray files
> dropped by about 8%. Unfortunately, that is about it. I'm now also running
> a find on the snap dirs, but I don't have much hope. There must be a way to
> find out what is accumulating in the stray buckets. As I wrote in another
> reply to this thread, I can't dump the trees:
>
> > I seem to have a problem. I cannot dump the mds tree:
> >
> > [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mdsdir/stray0'
> > root inode is not in cache
> > [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mds0/stray0'
> > root inode is not in cache
> > [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mds0' 0
> > root inode is not in cache
> > [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mdsdir' 0
> > root inode is not in cache
> >
> > [root@ceph-08 ~]# ceph daemon mds.ceph-08 get subtrees | grep path
> >     "path": "",
> >     "path": "~mds0",
> >
>
> However, this information is stored somewhere in rados objects, so it
> should be possible to figure something out along the lines of
>
> # rados getxattr --pool=con-fs2-meta1 <OBJ_ID> parent | ceph-dencoder type inode_backtrace_t import - decode dump_json
> # rados listomapkeys --pool=con-fs2-meta1 <OBJ_ID>
>
> What OBJ_IDs am I looking for? How and where can I start to traverse the
> structure? The version is the latest stable Mimic.

You mentioned you have snapshots? If you've deleted directories that have
been snapshotted, they stick around in the stray directory until the
snapshot is deleted. There's no way to force purging until the snapshot is
also deleted. For this reason, the stray directory size can grow without
bound. You need to either upgrade to Pacific, where the stray directories
are fragmented, or remove the snapshots.

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
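
For reference, a minimal sketch of how the stray dirfrag objects Frank is
asking about could be inspected directly with rados. It assumes a single
active MDS (rank 0), the con-fs2-meta1 metadata pool from the commands
above, and the usual layout in which rank 0's stray directories
stray0..stray9 use inodes 0x600..0x609 and (unfragmented, as in Mimic) are
stored as one object each, named <inode in hex>.<fragment id>:

# count the entries in each stray bucket (stray0 .. stray9)
# rados listomapkeys --pool=con-fs2-meta1 600.00000000 | wc -l
# rados listomapkeys --pool=con-fs2-meta1 601.00000000 | wc -l
# ... and so on up to 609.00000000
#
# list the dentry names of the stray entries in a given bucket
# rados listomapkeys --pool=con-fs2-meta1 600.00000000 | head

If the counts concentrate in a few buckets, listing the dentry names there
should give a hint as to which deleted (but still snapshotted) files and
directories are pinning the strays.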