On Tue, Jul 27, 2021 at 10:02 AM Manuel Holtgrewe <zyklenfrei@xxxxxxxxx> wrote:
>
> Dear all,
>
> I'm running Ceph 14.2.11. I have one cephfs file system, and it used to
> store all data on a pool called cephfs_data. I moved all data to a pool
> called hdd_ec (which uses erasure coding, which I prefer) with a
> copy-file-delete-rename approach. I ended up with ~10TB of data on the
> cephfs_data pool.
>
> I now try to locate the files that are still in the old pool. Here is
> what *should* work according to what I found online:

There will always be at least one object per file in the default data
pool for storing backtraces.

> First I list the objects in the cephfs_data pool.
>
> # rados -p cephfs_data ls | head
> 100096fa552.00000000
> 1000b15ae43.00000000
> 1000b595619.00000000
> 1000b59f060.00000000
> 1000b1966b7.00000000
> 1000b06a749.00000000
> 1000b3e1ccd.00000000
> 1000b56d512.00000000
> 1000b67b76a.00000000
> 1000b32629a.00000000

You can try grepping for the .00000001 suffix to find files that have
at least two objects.

> Then I try to get an omap key ... but this fails.

In CephFS, data pools do not use omap. If you want the backtrace of a
file, you can do something like this:

# rados --pool cephfs.teuthology.data getxattr 10014bb9b3a.00000000 parent | ceph-dencoder type inode_backtrace_t import - decode dump_json
{
    "ino": 1099859467066,
    "ancestors": [
        {
            "dirino": 1099859454351,
            "dname": "ceph-client.admin.27659.log.gz",
            "version": 36178
        },
        {
            "dirino": 1099859453575,
            "dname": "log",
            "version": 145720
        },
        {
            "dirino": 1099859453570,
            "dname": "smithi157",
            "version": 252
        },
        {
            "dirino": 1099859431752,
            "dname": "remote",
            "version": 803
        },
        {
            "dirino": 1099859111936,
            "dname": "6275606",
            "version": 132338
        },
        {
            "dirino": 1099511627776,
            "dname": "teuthology-2021-07-16_05:17:03-krbd-pacific-testing-basic-smithi",
            "version": 21141934
        },
        {
            "dirino": 1,
            "dname": "teuthology-archive",
            "version": 56333879
        }
    ],
    "pool": 119,
    "old_pools": [
        114
    ]
}

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
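
P.S. If you want to script the search, here is a rough sketch that
combines the two steps above (untested; it assumes jq is installed,
the pool name is taken from your message, and backtraces are only
updated lazily, so treat the decoded paths as hints rather than
ground truth):

for obj in $(rados -p cephfs_data ls | grep '\.00000000$'); do
    # Decode the backtrace xattr of each head object and rebuild the
    # path by reversing the ancestors list (leaf-to-root in the JSON).
    path=$(rados -p cephfs_data getxattr "$obj" parent \
             | ceph-dencoder type inode_backtrace_t import - decode dump_json \
             | jq -r '"/" + ([.ancestors[].dname] | reverse | join("/"))')
    echo "$obj  $path"
done

Alternatively, if the file system is mounted somewhere, you can
convert the hex inode number in the object name to decimal (e.g.
$((16#${obj%%.*})) in bash) and look the file up with find -inum.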