MDS_DAMAGE: 1 MDSs report damaged metadata


 



Dear Ceph Experts,

 

I recently deleted a very large directory on my CephFS, and a few minutes later my dashboard started reporting:

Overall status: HEALTH_ERR

MDS_DAMAGE: 1 MDSs report damaged metadata

 

So I immediately logged in to my Ceph admin node and ran ceph -s:

cluster:

    id:     472dfc88-84dc-4284-a1cf-0810ea45ae19

    health: HEALTH_ERR

            1 MDSs report damaged metadata

 

  services:

    mon: 3 daemons, quorum ceph-n1,ceph-n2,ceph-n3

    mgr: ceph-admin(active), standbys: ceph-n1

    mds: cephfs-2/2/2 up  {0=ceph-admin=up:active,1=ceph-n1=up:active}, 1 up:standby

    osd: 17 osds: 17 up, 17 in

    rgw: 1 daemon active

 

  data:

    pools:   9 pools, 1584 pgs

    objects: 1093 objects, 418 MB

    usage:   2765 MB used, 6797 GB / 6799 GB avail

    pgs:     1584 active+clean

 

  io:

    client:   35757 B/s rd, 0 B/s wr, 34 op/s rd, 23 op/s wr

 

After some research, I tried ceph tell mds.0 damage ls, which returned:

[
    {
        "damage_type": "backtrace",
        "id": 2744661796,
        "ino": 1099512314364,
        "path": "/M3/sogetel.net/t/te/testmda3/Maildir/dovecot.index.log.2"
    }
]
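
In case it helps anyone reproduce what I'm looking at: damage ls emits JSON, so the entries can be filtered programmatically. A minimal sketch (not from the original output, just an illustration assuming a single entry like the one above) of pulling out the backtrace-type damage with Python:

```python
import json

# Sample output of `ceph tell mds.0 damage ls` (the single entry shown above).
damage_json = '''
[
    {
        "damage_type": "backtrace",
        "id": 2744661796,
        "ino": 1099512314364,
        "path": "/M3/sogetel.net/t/te/testmda3/Maildir/dovecot.index.log.2"
    }
]
'''

# Collect (damage id, path) pairs for all backtrace-type damage entries.
backtrace_damage = [
    (entry["id"], entry["path"])
    for entry in json.loads(damage_json)
    if entry["damage_type"] == "backtrace"
]

for damage_id, path in backtrace_damage:
    print(f"damage id {damage_id}: {path}")
```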

 

So I tried what I saw at https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg35682.html, but it did not work, and now I don't know how to fix this.

 

Can you help me?

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
