Re: CEPH MDS Damaged Metadata - recovery steps

How did this get damaged? You had 3x replication on the pool?



-----Original Message-----
From: Yan, Zheng [mailto:ukernel@xxxxxxxxx] 
Sent: Tuesday, 4 June 2019 01:14
To: James Wilkins
Cc: ceph-users
Subject: Re:  CEPH MDS Damaged Metadata - recovery steps

On Mon, Jun 3, 2019 at 3:06 PM James Wilkins 
<james.wilkins@xxxxxxxxxxxxx> wrote:
>
> Hi all,
>
> We’re after a bit of advice to ensure we’re approaching this the right way.
>
> (version: 12.2.12, multi-mds, dirfrag is enabled)
>
> We have corrupt metadata, as identified by Ceph:
>
>     health: HEALTH_ERR
>             2 MDSs report damaged metadata
>
> Asking the MDS via damage ls shows:
>
>     {
>         "damage_type": "dir_frag",
>         "id": 2265410500,
>         "ino": 2199349051809,
>         "frag": "*",
>         "path": "/projects/17343-5bcdaf07f4055-managed-server-0/apache-echfq-data/html/shop/app/cache/prod/smarty/cache/iqitreviews/simple/21832/1"
>     }
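
(For anyone following the thread: a listing like the one above can be
pulled per daemon with something along these lines, where mds.<name>
is a placeholder for one of the active MDS daemons:

  ceph tell mds.<name> damage ls
)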
>
>
> We’ve done the steps outlined here ->
> http://docs.ceph.com/docs/luminous/cephfs/disaster-recovery/ namely:
>
> cephfs-journal-tool --fs:all journal reset (on both ranks)
> cephfs-data-scan scan extents / inodes / links (all completed)
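
(For anyone following the thread, those scans correspond to roughly the
following commands; cephfs_data here is a placeholder for the data pool
name:

  cephfs-data-scan scan_extents cephfs_data
  cephfs-data-scan scan_inodes cephfs_data
  cephfs-data-scan scan_links
)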
>
> However, when attempting to access the named folder we get:
>
> 2019-05-31 03:16:04.792274 7f56f6fb5700 -1 log_channel(cluster) log [ERR] : dir 0x200136b41a1 object missing on disk; some files may be lost (/projects/17343-5bcdaf07f4055-managed-server-0/apache-echfq-data/html/shop/app/cache/prod/smarty/cache/iqitreviews/simple/21832/1)
>
> We get this error, followed shortly by an MDS failover.
>
> Two questions, really.
>
> First, what’s not immediately clear from the documentation: do we
> also need to run the commands below?
>
> # Session table
> cephfs-table-tool 0 reset session
> # SnapServer
> cephfs-table-tool 0 reset snap
> # InoTable
> cephfs-table-tool 0 reset inode
> # Root inodes ("/" and MDS directory)
> cephfs-data-scan init
>

No, don't do this.

> And secondly: our current train of thought is that we need to grab the
> inode number of the parent folder and delete it from the metadata pool
> via rados rmomapkey. Is this correct?
>

Yes. Find the inode number of directory 21832, then check whether the
omap key '1_head' exists in object <inode of the directory in hex>.00000000.
If it exists, remove it.
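
A rough sketch of those steps, assuming the metadata pool is named
cephfs_metadata and the filesystem is mounted at /mnt/cephfs (both are
placeholders, and the path below is abbreviated):

  # inode number of the parent directory "21832", as seen through the mount
  ino=$(stat -c %i /mnt/cephfs/projects/.../21832)
  hexino=$(printf '%x' "$ino")

  # check whether the dentry key '1_head' is present in the parent's dirfrag object
  rados -p cephfs_metadata listomapkeys ${hexino}.00000000 | grep '^1_head$'

  # if present, remove it
  rados -p cephfs_metadata rmomapkey ${hexino}.00000000 1_head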

> Any input appreciated
>
> Cheers,
>
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



