Re: xfs_repair fails with err 117. Can I fix the fs or recover individual files somehow?

Hello Eric,

I sent you the dd output you requested in a separate email. Below is
the xfs_info output for the image of the damaged fs, and, for
comparison, xfs_info of a new fs I have since created with mkfs.xfs on
the same md RAID device, so it should be a comparable FS on top of the
same LV where the damaged FS used to live.

New fs on same LV on the same RAID5 MD array:
meta-data=/dev/mapper/raid5--8tb--1-usbraid5--2 isize=512    agcount=33, agsize=10649472 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=340787200, imaxpct=5
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=166400, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


Image of the damaged fs:
meta-data=disk-dump-usbraid5-2   isize=512    agcount=42, agsize=8192000 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=340787200, imaxpct=25
         =                       sunit=128    swidth=4294897408 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=128000, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
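
One thing I noticed while comparing the two: the damaged image shows
swidth=4294897408 blks, and 4294897408 * 4096 (the block size) is
exactly 17591899783168, the value in xfs_repair's "stripe width ... is
too large" message, so it does look like only the stripe width field
is bogus while sunit=128 matches the healthy fs. In case it helps,
this is how I was planning to double-check the raw superblock fields
on the image, read-only (just a sketch, assuming xfs_db from xfsprogs;
-f because the target is the dump file, -r to keep it read-only):

xfs_db -f -r -c "sb 0" -c "print unit" -c "print width" disk-dump-usbraid5-2

That should only read the primary superblock, not change anything.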

On Sat, Jun 24, 2023 at 12:26 AM Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
>
> On 6/23/23 6:26 PM, Fernando CMK wrote:
> > On Fri, Jun 23, 2023 at 6:14 PM Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
> >>
> >> On 6/23/23 3:25 PM, Fernando CMK wrote:
> >>> Scenario
> >>>
> >>> opensuse 15.5, the fs was originally created on an earlier opensuse
> >>> release. The failed file system is on top of a mdadm raid 5, where
> >>> other xfs file systems were also created, but only this one is having
> >>> issues. The others are doing fine.
> >>>
> >>> xfs_repair and xfs_repair -L both fail:
> >>
> >> Full logs please, not the truncated version.
> >
> > Phase 1 - find and verify superblock...
> >         - reporting progress in intervals of 15 minutes
> > Phase 2 - using internal log
> >         - zero log...
> >         - 16:14:46: zeroing log - 128000 of 128000 blocks done
> >         - scan filesystem freespace and inode maps...
> > stripe width (17591899783168) is too large
> > Metadata corruption detected at 0x55f819658658, xfs_sb block 0xfa00000/0x1000
> > stripe width (17591899783168) is too large
>
> <repeated many times>
>
> It seems that the only problem w/ the filesystem detected by repair is a
> ridiculously large stripe width, and that's found on every superblock.
>
> dmesg (expectedly) finds the same error when mounting.
>
> Pretty weird, I've never seen this before. And, xfs_repair seems unable
> to fix this type of corruption.
>
> can you do:
>
> dd if=<filesystem device or image> bs=512 count=1 | hexdump -C
>
> and paste that here?
>
> I'd also like to see what xfs_info says about other filesystems on the md
> raid.
>
> -Eric
>
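
If the corruption really is limited to the stripe width field, would
it be reasonable to rewrite just that field with xfs_db on a copy of
the image and then re-run repair? Something along these lines (only a
sketch, assuming width=256 taken from the healthy fs is the right
value, that xfs_db in expert mode will let me write the sb width field
and take care of the superblock CRC on this crc=1 fs, and that repair
can then sort out the secondary superblocks from the primary; -f
because the target is a regular dump file):

xfs_db -f -x -c "sb 0" -c "write width 256" disk-dump-usbraid5-2
xfs_repair -f disk-dump-usbraid5-2

I would only try that on the dump, not on the original LV, and only if
you think it's safe.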



