Re: cephfs automatic data pool cleanup

Hi Yan,

Quoting "Yan, Zheng" <ukernel@xxxxxxxxx>:
[...]

It's likely some clients held caps on unlinked inodes, which prevents
the MDS from purging the objects. When a file gets deleted, the MDS
notifies all clients, and the clients are supposed to drop the
corresponding caps if possible. You may have hit a bug in this area,
where some clients failed to drop caps for unlinked inodes.
[...]
There is a reconnect stage while the MDS recovers. To reduce the
reconnect message size, clients aggressively trim unused inodes from
their cache. In your case, most unlinked inodes also got trimmed, so
the MDS could purge the corresponding objects after it recovered.

Thank you for that detailed explanation. While I've already included the recent code fix for this issue on a test node, all other mount points (including the NFS server machine) still run the non-fixed kernel Ceph client, so I believe we've hit exactly what you describe.
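
To confirm that on the MDS side, I'd probably try something along these
lines on the MDS host, pulling the per-client cap counts and the stray
counters from the admin socket (untested sketch; "mds.a" is only a
placeholder for the real daemon name, and the exact counter names under
mds_cache seem to vary between releases):

#!/usr/bin/env python3
# Rough sketch, untested: ask the MDS admin socket how many caps each
# client session holds and whether deleted files are piling up as strays.
# "mds.a" is a placeholder for the real MDS daemon name, and the perf
# counter names under "mds_cache" can differ between releases.
import json
import subprocess

MDS = "mds.a"  # placeholder: adjust to your MDS daemon name

def ceph_daemon(*args):
    out = subprocess.check_output(["ceph", "daemon", MDS] + list(args))
    return json.loads(out)

# per-client cap counts
for session in ceph_daemon("session", "ls"):
    meta = session.get("client_metadata", {})
    print(session.get("id"), meta.get("hostname", "?"),
          "num_caps:", session.get("num_caps"))

# stray counters: unlinked inodes the MDS has not been able to purge yet
cache = ceph_daemon("perf", "dump").get("mds_cache", {})
for key in ("num_strays", "num_strays_delayed",
            "strays_created", "strays_enqueued"):
    if key in cache:
        print(key, "=", cache[key])

If a client keeps a large num_caps while the stray count stays high
after deletions, that would match the picture you describe.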

Seems we'll have to fix the clients :)

Is there a command I can use to see what caps a client holds, to verify the proposed patch actually works?
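
The only thing I've found so far is the caps file the kernel client
exposes in debugfs; something like this is what I had in mind (untested;
needs root and debugfs mounted at /sys/kernel/debug, and the file format
differs between kernel versions):

#!/usr/bin/env python3
# Rough sketch, untested: print the caps accounting the kernel CephFS
# client exposes via debugfs for every mount on this machine. Needs root
# and debugfs mounted; the format (summary counts vs. a per-inode
# listing) depends on the kernel version.
import glob

for path in glob.glob("/sys/kernel/debug/ceph/*/caps"):
    print("==>", path)
    with open(path) as f:
        print(f.read())

If the cap counts on an otherwise idle mount stay high after deleting a
lot of files, I'd take that as a sign the client is still holding caps
on the unlinked inodes.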

Regards,
Jens



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


