On 14/03/2015 09:22, Florent B wrote:
Hi,
What do you call an "old MDS"? I'm on the Giant release; it is not very old...
With CephFS we have a special definition of "old" that is anything that
doesn't have the very latest bug fixes ;-)
There have definitely been fixes to stray file handling[1] between Giant
and Hammer. Since Giant is neither the latest release nor an LTS, I'd
suggest you upgrade to Hammer. Hammer also includes some new perf counters
related to strays[2] that will let you see how the purging is (or isn't)
progressing.
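For reference, once you're on Hammer you can read those counters straight
from the MDS admin socket; a rough sketch (the stray counter names below
are from memory, so don't trust my spelling of them):

    # dump all perf counters from the running MDS and look for the
    # stray-related entries (e.g. num_strays / strays_purged, if I
    # remember the names right)
    ceph daemon mds.<daemon id> perf dump

Watching the stray count fall over time is a quick way to tell whether
purging is actually making progress.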
If you can reproduce this on Hammer, then please capture the output of
"ceph daemon mds.<daemon id> session ls" and "ceph mds tell mds.<daemon id>
dumpcache /tmp/cache.txt", along with the steps to reproduce. Ideally
capture logs with "debug mds = 10" as well.
Cheers,
John
1. http://tracker.ceph.com/issues/10387
   http://tracker.ceph.com/issues/10164
2. http://tracker.ceph.com/issues/10388
And I tried restarting both, but it didn't solve my problem.
Will it be OK in Hammer?
On 03/13/2015 04:27 AM, Yan, Zheng wrote:
On Fri, Mar 13, 2015 at 1:17 AM, Florent B <florent@xxxxxxxxxxx> wrote:
Hi all,
I am testing CephFS again on the Giant release, using ceph-fuse.
After deleting a large directory (a few hours ago), I can see that my pool
still contains 217 GB of objects, even though my root directory on CephFS
is empty.
The metadata pool is 46 MB.
Is this expected? If not, how can I debug it?
Old MDS versions do not work well in this area. Try unmounting the clients
and restarting the MDS.
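Roughly (assuming a ceph-fuse mount at /mnt/cephfs and sysvinit-style init
scripts; adjust for your setup):

    # on each client: unmount the ceph-fuse mountpoint
    fusermount -u /mnt/cephfs

    # on the MDS host: restart the MDS daemon (or use the equivalent
    # command for your init system)
    service ceph restart mds.<daemon id>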
Regards
Yan, Zheng
Thank you.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com