On Thu, Oct 1, 2015 at 4:42 AM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Thu, Oct 1, 2015 at 12:35 PM, Florent B <florent@xxxxxxxxxxx> wrote:
>> Thank you John, I'll think about it. Here is what I did:
>>
>> # get inodes in pool (object names are <hex inode>.<index>, so keep
>> # the prefix and de-duplicate; sort -u is needed because rados ls
>> # output is not sorted)
>> rados -p my_pool ls | cut -d '.' -f 1 | sort -u
>> # returns 132886 unique inodes
>>
>> # get inodes in CephFS (convert decimal inode numbers to lowercase hex)
>> find . -printf 'ibase=10;obase=16;%i\n' | bc | tr '[:upper:]' '[:lower:]'
>> # returns 7169 inodes
>>
>> But I would like to be sure that the inodes in the RADOS pool are not
>> still linked somewhere in the CephFS metadata... I don't know, could
>> that be possible?
>>
>> Is there a way to print CephFS metadata in a readable form?
>
> There is the extra case of "stray" inodes (not visible to mounts),
> but those are just hard links (which would be visible in your client
> mount) or part-way-deleted files (where it doesn't matter if you
> delete their data objects).

Depending on how old your clients are (I *think* we've fixed this
problem in all the latest ones!) it's also possible that unmounting and
re-mounting CephFS from them will start the deleted files getting
trimmed.

-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
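(A small aside on the pipeline above: `find -printf` has no hex format
specifier, hence the `bc` detour. A plain `printf '%x'` loop gives the same
lowercase hex prefixes that CephFS uses for RADOS data-object names
(`<hex inode>.<object index>`). This is only a sketch; the inode numbers
below are made-up examples, and the file names in the `comm` comment are
placeholders.)

```shell
# Convert decimal inode numbers to the lowercase hex prefixes used in
# CephFS data-pool object names. The two inode numbers are hypothetical
# examples (CephFS regular inodes start at 0x10000000000).
for ino in 1099511627776 1099511627777; do
  printf '%x\n' "$ino"
done
# prints:
# 10000000000
# 10000000001

# To diff the two inode lists (pool vs. filesystem), both must be sorted,
# e.g. objects present in the pool but not in the filesystem:
#   comm -23 <(sort -u pool_inodes.txt) <(sort -u fs_inodes.txt)
```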