CephFS: rm file does not remove object in rados

On 09/18/2014 08:45 PM, Gregory Farnum wrote:
> On Thu, Sep 18, 2014 at 10:39 AM, Florent B <florent at coppint.com> wrote:
>> On 09/12/2014 07:38 PM, Gregory Farnum wrote:
>>> On Fri, Sep 12, 2014 at 6:49 AM, Florent Bautista <florent at coppint.com> wrote:
>>>> Hi all,
>>>>
>>>> Today I have a problem using CephFS. I'm on the latest firefly
>>>> release, with a kernel 3.16 client (Debian experimental).
>>>>
>>>> I have a directory in CephFS, associated with a pool "pool2" (set via
>>>> set_layout).
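For reference, this kind of per-directory pool assignment is usually done
via the layout xattrs (or the older cephfs set_layout tool); a minimal
sketch, assuming "pool2" has already been added as a data pool and that
the path is a placeholder:

  # point the directory's layout at the pool (new files land in pool2)
  setfattr -n ceph.dir.layout.pool -v pool2 /mnt/cephfs/mydir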
>>>>
>>>> Everything works fine: I can add and remove files, and objects are
>>>> stored in the right pool.
>>>>
>>>> But when the Ceph cluster is overloaded (or for some other reason, I
>>>> don't know), sometimes when I remove a file, its objects are not
>>>> deleted in rados!
>>> CephFS file removal is asynchronous with you removing it from the
>>> filesystem. The files get moved into a "stray" directory and will get
>>> deleted once nobody holds references to them any more.
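A rough way to see whether unlinked files are still parked in those stray
directories is the MDS perf counters; a sketch, assuming access to the
admin socket of an MDS named "a" (counter names vary by release):

  # dump MDS perf counters and look for the stray bookkeeping
  ceph daemon mds.a perf dump | grep -i stray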
>> My client is the only one mounted and does not use files.
> "does not use files"...what?

Files are not opened by any process...

>
>> This problem occurs when I delete files with "rm", but not when I use
>> the rsync command quoted below.
>>
>>>> Let me explain: I want to remove a large directory containing millions
>>>> of files. For a while, objects really are deleted in rados (I can see
>>>> it in "rados df"), but when I start some heavy operations (like moving
>>>> volumes in rbd), objects are not deleted anymore and "rados df" returns
>>>> a fixed number of objects. I can see that files are still being deleted
>>>> because I use rsync (rsync -avP --stats --delete /empty/dir/ /dir/to/delete/).
>>> What do you mean you're rsyncing and can see files deleting? I don't understand.
>> When you run the command I gave, syncing an empty dir over the dir you
>> want deleted, rsync reports "deleting (file)" for each file it unlinks.
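In other words, the trick is to sync an empty directory over the target so
rsync unlinks everything inside it; a sketch with placeholder paths:

  mkdir /tmp/empty
  rsync -avP --stats --delete /tmp/empty/ /dir/to/delete/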
>>
>>> Anyway, it's *possible* that the client is holding capabilities on the
>>> deleted files and isn't handing them back, in which case unmounting it
>>> would drop them (and then you could remount). I don't think we have
>>> any commands designed to hasten that, though.
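With the kernel client that would just be an unmount/remount cycle; a
sketch with placeholder monitor address, mountpoint and secret file:

  umount /mnt/cephfs
  mount -t ceph mon1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret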
>> Unmounting does not help.
>>
>> When I unlink() via rsync, objects are deleted in rados (it slows the
>> whole cluster down and causes slow requests).
>>
>> When I use the rm command, it is much faster, but objects are not
>> deleted in rados!
> I think you're not doing what you think you're doing, then...those two
> actions should look the same to CephFS.

Maybe... but I'm doing what I've always done on any file system... and it
seems there's a problem with CephFS :)

>
>> When I re-mount the CephFS root, there are no files; it's all empty.
>>
>> But I still have 125 MB of objects in the "metadata" pool and 21.57 GB
>> in my data pool (and it does not decrease)...
> Well, the metadata pool is never going to be emptied; that holds your
> MDS journals. The data pool might not get entirely empty either; how
> many objects does it say it has?

data pool = 380862 objects (22622515 bytes)... while the root of CephFS
is empty...
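That count can be cross-checked against rados directly (pool name assumed):

  rados df                       # per-pool object and byte totals
  rados -p pool2 ls | wc -l      # objects left in the data pool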

To be more precise... I used the rsync method (because it is supposed to
be the fastest way on other filesystems)... which caused the cluster to
have slow requests (ceph -s).
So, just to test, I used the "rm" command on the only directory in my
CephFS... and there were no more slow requests, files were deleted in the
FS... but not the objects in rados! (which explains why there were no
more slow requests, I think!)

If I am the only one having this problem, maybe the problem is me, I
agree :) but it seriously seems very strange :)

In the end, the goal of this operation was to move files away from CephFS
(not stable enough for us), so the operation is done; I deleted the whole
data pool myself.
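For completeness, dropping a pool outright looks like this (pool name
assumed; this irreversibly destroys every object in it):

  ceph osd pool delete pool2 pool2 --yes-i-really-really-mean-it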


