Re: mds0: Metadata damage detected

On Tue, Jan 15, 2019 at 3:51 PM Sergei Shvarts <storm@xxxxxxxxxxxx> wrote:
>
> Hello ceph users!
>
> A couple of days ago I got a Ceph health error - mds0: Metadata damage detected.
> Overall the Ceph cluster is fine: all PGs are clean, all OSDs are up and in, no big problems.
> There doesn't seem to be much information about this class of issue, so I'm writing this message in the hope somebody can help me.
>
> here is the damage itself
> ceph tell mds.0 damage ls
> 2019-01-15 07:47:04.651317 7f48c9813700  0 client.312845186 ms_handle_reset on 192.168.0.5:6801/1186631878
> 2019-01-15 07:47:04.656991 7f48ca014700  0 client.312845189 ms_handle_reset on 192.168.0.5:6801/1186631878
> [{"damage_type":"dir_frag","id":3472877204,"ino":1100954978087,"frag":"*","path":"\/public\/video\/3h\/3hG6X7\/screen-msmall"}]
>

Looks like object 1005607c727.00000000 in the CephFS metadata pool is
corrupted. Please run the following commands and send the mds.0 log to us:

ceph tell mds.0 injectargs '--debug_mds 10'
ceph tell mds.0 damage rm 3472877204
ls <cephfs mount point>/public/video/3h/3hG6X7/screen-msmall
ceph tell mds.0 injectargs '--debug_mds 0'
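
For anyone hitting this later: the object name above comes from the inode number in the "damage ls" entry. 1100954978087 in hex is 1005607c727, and directory-fragment objects in the metadata pool are named <inode-in-hex>.<frag-offset>. A quick shell sanity check (the inode value here is the one from this thread):

```shell
# Convert the decimal inode number reported by "ceph tell mds.0 damage ls"
# into the name of the corresponding dirfrag object in the metadata pool.
ino=1100954978087
printf '%x.00000000\n' "$ino"
# -> 1005607c727.00000000
```

With that name you can inspect the raw object directly (e.g. `rados -p <metadata-pool> stat 1005607c727.00000000`) before removing the damage entry.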

Regards
Yan, Zheng

> Best regards,
> Sergei Shvarts
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com