How to repair MDS damage?

Dear Ceph Experts,

after upgrading our Ceph cluster from Hammer to Jewel,
the MDS (after a few days) found some metadata damage:

   # ceph status
   [...]
   health HEALTH_ERR
         mds0: Metadata damage detected
   [...]

The output of

   # ceph tell mds.0 damage ls

is:

   [
      {
         "ino" : [...],
         "id" : [...],
         "damage_type" : "backtrace"
      },
      [...]
   ]

There are 5 such "damage_type" : "backtrace" entries in total.

I'm not really surprised: there were a few instances in the
past where one or two entries (mostly empty directories and
symlinks) acted strangely and couldn't be deleted
(rm failed with "Invalid argument"). Back then, I moved them
all into a "quarantine" directory, but wasn't able to do
anything else about them.

Now that CephFS performs more rigorous checks and has spotted
the trouble, how do I go about repairing this?


Cheers and thanks for any help,

Oliver
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
