Btw:
root@deployer:~# cephfs-data-scan -v
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
On 19/09/2019 13:38, Guilherme Geronimo wrote:
Here it is: https://pastebin.com/SAsqnWDi
The command:
timeout 10 rm /mnt/ceph/lost+found/100002430c8 ; umount -f /mnt/ceph
On 17/09/2019 00:51, Yan, Zheng wrote:
please send me crash log
On Tue, Sep 17, 2019 at 12:56 AM Guilherme Geronimo
<guilherme.geronimo@xxxxxxxxx> wrote:
Thank you, Yan.
It took about 10 minutes to execute scan_links.
I believe the number of files in lost+found decreased by about 60%, but
the remaining ones still crash the MDS when we try to remove them.
Any other suggestion?
=D
[]'s
Arthur (aKa Guilherme Geronimo)
On 10/09/2019 23:51, Yan, Zheng wrote:
On Wed, Sep 4, 2019 at 6:39 AM Guilherme
<guilherme.geronimo@xxxxxxxxx> wrote:
Dear CEPHers,
Adding some comments to my colleague's post: we are running Mimic
13.2.6 and struggling with 2 issues (that might be related):
1) After a "lack of space" event we've tried to remove a 40TB
file. The file is not there anymore, but no space was released. No
process is using the file either.
2) There are many files in /lost+found (~25TB|~5%). Every time we
try to remove a file, MDS crashes ([1,2]).
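As a rough sketch of how I'd check whether the deleted file is simply
stuck as a stray entry (assuming admin-socket access to the active MDS;
<name> is a placeholder, not our real daemon name):

   ceph daemon mds.<name> perf dump mds_cache | grep num_strays
   ceph daemon mds.<name> perf dump purge_queue

If num_strays stays high and the purge queue isn't draining, that would
explain why the space never comes back.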
Dennis Kramer's case [3] led me to believe that I need to recreate the
FS, but I refuse to believe that Ceph doesn't have a repair tool for
this.
I thought "cephfs-table-tool take_inos" could be the answer for
my problem, but the message [4] were not clear enough.
Can I run the command without resetting the inodes?
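Just so the syntax is clear, this is roughly what I had in mind (a
sketch only, not something we have run; <max_ino> would have to be
chosen above the highest inode number currently in use):

   cephfs-table-tool all show inode           # inspect the current InoTable first
   cephfs-table-tool all take_inos <max_ino>  # mark inos up to <max_ino> as used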
[1] Error at ceph -w - https://pastebin.com/imNqBdmH
[2] Error at mds.log - https://pastebin.com/rznkzLHG
For the MDS crash issue: 'cephfs-data-scan scan_links' from the
Nautilus release (14.2.2 or later) should fix it; the Nautilus version
of scan_links also repairs the snaptable. You don't need to upgrade the
whole cluster, just install Nautilus on a temporary machine or compile
Ceph from source.
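Roughly, the offline run would look like this (a sketch only; it
assumes a single filesystem named "cephfs"):

   # take the fs down so no MDS is active while the tool runs (Mimic syntax):
   ceph fs set cephfs cluster_down true
   ceph mds fail 0
   # from the temporary machine with Nautilus >= 14.2.2 installed:
   cephfs-data-scan scan_links
   # bring the fs back up and let an MDS become active again:
   ceph fs set cephfs cluster_down false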
[3] Discussion -
http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2018-July/027845.html
[4] Discussion -
http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2018-July/027935.html
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx