MDSs report damaged metadata

Hi all,

My CephFS MDS is reporting damaged metadata following the addition (and
remapping) of 12 new OSDs. `ceph tell mds.database-0 damage ls` reports
~85 damaged files, all of type "backtrace", which is very concerning.
`ceph tell mds.database-0 scrub start / recursive repair` seems to have
no effect on the damage. What does this sort of damage mean, and is
there anything I can do to recover these files?
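For completeness, here is the exact sequence I ran, as a sketch (the
comma-delimited scrubopts form on the scrub line is what I understand
recent releases expect; my first attempt used space-separated flags):

    # list the damage table as JSON; each entry has an id, damage_type and ino
    ceph tell mds.database-0 damage ls

    # walk the whole tree recursively and repair whatever the MDS can fix
    ceph tell mds.database-0 scrub start / recursive,repair

    # poll for progress afterwards
    ceph tell mds.database-0 scrub status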


ceph status reports:

  cluster:
    id:     692905c0-f271-4cd8-9e43-1c32ef8abd13
    health: HEALTH_ERR
            1 MDSs report damaged metadata
            630 pgs not deep-scrubbed in time
            630 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum database-0,file-server,webhost (age 37m)
    mgr: webhost(active, since 3d), standbys: file-server, database-0
    mds: cephfs:1 {0=database-0=up:active} 2 up:standby
    osd: 48 osds: 48 up (since 56m), 48 in (since 13d); 10 remapped pgs

  task status:
    scrub status:
        mds.database-0: idle

  data:
    pools:   7 pools, 633 pgs
    objects: 60.82M objects, 231 TiB
    usage:   336 TiB used, 246 TiB / 582 TiB avail
    pgs:     623 active+clean
             6   active+remapped+backfilling
             4   active+remapped+backfill_wait
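
In case it helps to dig into a single entry by hand, this is how I assume
the raw backtrace can be inspected: `damage ls` reports a decimal ino, the
backtrace itself lives in the "parent" xattr of the file's first data
object, and that object is named after the hex inode number. The pool name
and inode below are placeholders, not values from my cluster:

    # hypothetical example: ino 1099511627776 from damage ls == 0x10000000000
    rados -p cephfs_data getxattr 10000000000.00000000 parent > parent.bin

    # decode the raw backtrace blob into readable JSON
    ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json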

 

Thanks for the help.

Best,

Ricardo



