Re: "stray" objects in empty cephfs data pool

Hi John,

On 10/08/2015 12:05 PM, John Spray wrote:
On Thu, Oct 8, 2015 at 10:21 AM, Burkhard Linke
<Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Hi,
*snipsnap*

I've moved all files from a CephFS data pool (EC pool with frontend cache
tier) in order to remove the pool completely.

Some objects are left in the pools ('ceph df' output of the affected pools):

     cephfs_ec_data           19      7565k         0 66288G           13

Listing the objects and the readable part of their 'parent' attribute:

# for obj in $(rados -p cephfs_ec_data ls); do echo $obj; rados -p
cephfs_ec_data getxattr $obj parent | strings; done
10000f6119f.00000000
10000f6119f
stray9
10000f63fe5.00000000
10000f6119f
stray9
10000f61196.00000000
10000f6119f
stray9
.......
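(For context: the 'parent' xattr is a binary-encoded backtrace, so piping it through strings just pulls out the printable runs, such as the parent inode and the "stray9" dentry name. A rough Python equivalent of that strings filter, using made-up xattr bytes rather than real backtrace encoding:)

```python
# Hypothetical sketch: extract runs of printable ASCII from the binary
# 'parent' xattr, the same way strings(1) does. The sample bytes below are
# invented for illustration, not a real inode_backtrace_t encoding.
import re

def printable_runs(data: bytes, min_len: int = 4) -> list:
    """Return runs of >= min_len printable ASCII characters, like strings(1)."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Made-up xattr bytes containing an inode name and a stray dentry name:
sample = b"\x05\x03\x00\x0110000f6119f\x00\x02stray9\x00\x07"
print(printable_runs(sample))  # -> ['10000f6119f', 'stray9']
```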

*snipsnap*

Well, they're strays :-)

You get stray dentries when you unlink files.  They hang around either
until the inode is ready to be purged, or if there are hard links then
they hang around until something prompts ceph to "reintegrate" the
stray into a new path.
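If I understand the naming correctly, the objects in the listing are the data stripes of those stray inodes: each RADOS object is named "<inode in hex>.<8-hex-digit stripe index>", so 10000f6119f.00000000 is the first stripe of inode 0x10000f6119f. A quick sketch of that mapping:

```python
# Sketch of CephFS data object naming (my reading, so treat as an
# assumption): object name = hex inode number, a dot, and a zero-padded
# 8-digit hex stripe index.
def data_object_name(inode: int, stripe: int = 0) -> str:
    """Build the RADOS object name for a CephFS inode and stripe index."""
    return "%x.%08x" % (inode, stripe)

print(data_object_name(0x10000f6119f))     # -> 10000f6119f.00000000
print(data_object_name(0x10000f6119f, 1))  # -> 10000f6119f.00000001
```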
Thanks for the fast reply. During the transfer of all files from the EC pool to a standard replicated pool I copied each file to a new name, removed the original, and renamed the copy. There may have been some processes with open files at that time, which might explain the stray objects.

I've also been able to locate some processes that might be responsible for these leftover files. I've terminated them, but the objects are still present in the pool. How long does purging an inode usually take?

You don't say what version you're running, so it's possible you're
running an older version (pre hammer, I think) where you're
experiencing either a bug holding up deletion (we've had a few) or a
bug preventing reintegration (we had one of those too).  The bugs
holding up deletion can usually be worked around with some client
and/or mds restarts.
The cluster is running on hammer. I'm going to restart the mds to try to get rid of these objects.

It isn't safe to remove the pool in this state.  The MDS is likely to
crash if it eventually gets around to trying to purge these files.
That's bad. Does the mds provide a way to get more information about these files, e.g. which client is blocking purging? We have about 3 hosts working on CephFS, and checking every process might be difficult.

Regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


