Re: CephFS: delayed objects deletion?

On Tue, Jun 16, 2015 at 11:55 AM, Florent B <florent@xxxxxxxxxxx> wrote:
>
> On 06/16/2015 12:47 PM, Gregory Farnum wrote:
>> On Tue, Jun 16, 2015 at 11:38 AM, Florent B <florent@xxxxxxxxxxx> wrote:
>>> I still have this "problem" on Hammer.
>>>
>>> My CephFS directory contains 46 MB of data, but the pool (configured
>>> with a file layout, not the default data pool) holds 6.59 GB...
>>>
>>> How can I debug this?
>> On Mon, Mar 16, 2015 at 4:14 PM, John Spray <john.spray@xxxxxxxxxx> wrote:
>>> If you can reproduce this on hammer, then please capture "ceph daemon
>>> mds.<daemon id> session ls" and "ceph mds tell mds.<daemon id> dumpcache
>>> /tmp/cache.txt", in addition to the procedure to reproduce.  Ideally logs
>>> with "debug mds = 10" as well.
>> :)
>
> Yeah sorry I missed it.
>
> Here is session ls: http://pastebin.com/zDcx9V2Y
>
> Here is dumpcache: http://m.uploadedit.com/ba3d/1434452055183.txt
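
(For comparing the numbers themselves, by the way, this is roughly what
I'd look at; the mount point and pool name below are just placeholders
for your actual ones:

  # recursive byte count of the tree, as tracked by the MDS
  getfattr -n ceph.dir.rbytes /mnt/cephfs/<your dir>

  # space and object count actually consumed in the layout pool
  rados df

If rados still reports far more data in the pool than the tree should
hold, the objects backing deleted files are sticking around.)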

Well, from a quick skim it looks like all of your stray files are still
being held alive by one of your clients, which is maintaining
capabilities on them.
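
If you want to double-check that, a couple of rough things to look at
(the cache dump path and daemon id are placeholders):

  # count stray dentries in the cache dump you posted
  grep -c stray /tmp/cache.txt

  # the MDS also exposes stray counters via its admin socket
  ceph daemon mds.<daemon id> perf dump

If I'm remembering the counter names right, look for num_strays under
mds_cache in the perf dump output; if it stays high while the directory
is nearly empty, the strays aren't being purged.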

You're using ceph-fuse, right? Looking at the unlink path, the client
doing the unlink does drop its caps, but I think we still aren't
preemptively asking the other clients to drop their caps on deleted
files when they're shared. It looks like you've got a bunch of web
servers mounting the same directories for shared read access?
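
A rough way to test that theory (mount point and monitor address below
are placeholders): unmount and remount ceph-fuse on the web servers and
watch whether the pool starts shrinking as the MDS purges the strays:

  # on a web server holding the mount
  fusermount -u /mnt/cephfs
  ceph-fuse -m <mon host>:6789 /mnt/cephfs

  # then watch the layout pool usage drop
  watch rados df

Remounting forces the client to give up all of its capabilities, so the
MDS can finally purge the stray inodes and delete the backing objects.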
-Greg