Re: MDSs report damaged metadata

Hi,

A damage_type of backtrace is rather harmless and can indeed be repaired
with the repair command, but the command is called scrub_path.
Also, you need to pass the name of the MDS, not its rank, as the id; it should be

    # (on the server where the MDS is actually running)
    ceph daemon mds.mds3 scrub_path ...
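
If I remember the admin socket syntax correctly, the scrub options
(recursive, repair, force) are passed as extra arguments after the
path, so a full invocation would look roughly like this, where 'PATH'
is the damaged path from damage ls (for a single file, recursive
shouldn't be needed):

    # (still on the server where the MDS is actually running)
    ceph daemon mds.mds3 scrub_path 'PATH' recursive repair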

Since Nautilus you should also be able to use ceph tell, which is a
little easier because it can be run from any node:

    ceph tell mds.mds3 scrub start 'PATH' repair
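
Once the scrub has run you can check whether the damage is gone; from
memory (so treat the exact subcommands as a sketch), something like:

    # scrub progress (Nautilus should support scrub status via tell)
    ceph tell mds.mds3 scrub status

    # re-check the damage table; if the entry is still listed after a
    # successful repair, it can be removed with: damage rm <id>
    ceph tell mds.mds3 damage ls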


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Aug 16, 2019 at 8:40 AM Lars Täuber <taeuber@xxxxxxx> wrote:
>
> Hi all!
>
> The MDS of our Ceph cluster reports a HEALTH_ERR state.
> It is Nautilus 14.2.2 on Debian Buster, installed from the croit.io repository, with OSDs on BlueStore.
>
> The symptom:
> # ceph -s
>   cluster:
>     health: HEALTH_ERR
>             1 MDSs report damaged metadata
>
>   services:
>     mon: 3 daemons, quorum mon1,mon2,mon3 (age 2d)
>     mgr: mon3(active, since 2d), standbys: mon2, mon1
>     mds: cephfs_1:1 {0=mds3=up:active} 2 up:standby
>     osd: 30 osds: 30 up (since 17h), 29 in (since 19h)
>
>   data:
>     pools:   3 pools, 1153 pgs
>     objects: 435.21k objects, 806 GiB
>     usage:   4.7 TiB used, 162 TiB / 167 TiB avail
>     pgs:     1153 active+clean
>
>
> # ceph health detail
> HEALTH_ERR 1 MDSs report damaged metadata
> MDS_DAMAGE 1 MDSs report damaged metadata
>     mdsmds3(mds.0): Metadata damage detected
>
> # ceph tell mds.0 damage ls
> 2019-08-16 07:20:09.415 7f1254ff9700  0 client.840758 ms_handle_reset on v2:192.168.16.23:6800/176704036
> 2019-08-16 07:20:09.431 7f1255ffb700  0 client.840764 ms_handle_reset on v2:192.168.16.23:6800/176704036
> [
>     {
>         "damage_type": "backtrace",
>         "id": 3760765989,
>         "ino": 1099518115802,
>         "path": "~mds0/stray7/100005161f7/dovecot.index.backup"
>     }
> ]
>
>
>
> I tried this without much luck:
> # ceph daemon mds.0 "~mds0/stray7/100005161f7/dovecot.index.backup" recursive repair
> admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
>
>
> Is there a way out of this error?
>
> Thanks and best regards,
> Lars
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



