On Sat, Jun 24, 2023 at 12:26 AM Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
>
> On 6/23/23 6:26 PM, Fernando CMK wrote:
> > On Fri, Jun 23, 2023 at 6:14 PM Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
> >>
> >> On 6/23/23 3:25 PM, Fernando CMK wrote:
> >>> Scenario
> >>>
> >>> opensuse 15.5, the fs was originally created on an earlier opensuse
> >>> release. The failed file system is on top of a mdadm raid 5, where
> >>> other xfs file systems were also created, but only this one is having
> >>> issues. The others are doing fine.
> >>>
> >>> xfs_repair and xfs_repair -L both fail:
> >>
> >> Full logs please, not the truncated version.
> >
> > Phase 1 - find and verify superblock...
> >         - reporting progress in intervals of 15 minutes
> > Phase 2 - using internal log
> >         - zero log...
> >         - 16:14:46: zeroing log - 128000 of 128000 blocks done
> >         - scan filesystem freespace and inode maps...
> > stripe width (17591899783168) is too large
> > Metadata corruption detected at 0x55f819658658, xfs_sb block 0xfa00000/0x1000
> > stripe width (17591899783168) is too large
> > <repeated many times>
>
> It seems that the only problem w/ the filesystem detected by repair is a
> ridiculously large stripe width, and that's found on every superblock.

If that's the issue, is there a way to set the correct stripe width?
(A rough hand-editing sketch follows after the quoted thread.)
Also... the md array involved has 3 disks, if that's of any help.

>
> dmesg (expectedly) finds the same error when mounting.
>
> Pretty weird, I've never seen this before. And, xfs_repair seems unable
> to fix this type of corruption.
>
> can you do:
>
> dd if=<filesystem device or image> bs=512 count=1 | hexdump -C
>
> and paste that here?
>
> I'd also like to see what xfs_info says about other filesystems on the md
> raid.
>
> -Eric
>
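
A minimal sketch of how the stripe geometry might be inspected and corrected
by hand, in case it helps frame the question above. The device name /dev/md0,
a 512 KiB md chunk size, and a 4 KiB filesystem block size are assumptions
for illustration only; the real values have to come from mdadm --detail and
from the superblock itself, and the filesystem must be unmounted first.

# Confirm the array geometry: chunk size, and data disks = 3 disks - 1 parity = 2
mdadm --detail /dev/md0

# Show the stripe unit/width currently stored in superblock 0
# (sb_unit and sb_width are expressed in filesystem blocks)
xfs_db -c 'sb 0' -c 'print unit width' /dev/md0

# Rewrite the fields in expert mode. With a 512 KiB chunk and 4 KiB blocks:
#   unit  = 512 KiB / 4 KiB = 128 blocks
#   width = 2 data disks * 128 = 256 blocks
xfs_db -x -c 'sb 0' -c 'write unit 128' -c 'write width 256' /dev/md0

# Re-check the filesystem without modifying anything further
xfs_repair -n /dev/md0

Running xfs_info against one of the healthy filesystems on the same array is
another way to see what sane sunit/swidth values look like there. This only
touches superblock 0, and whether xfs_db accepts the write cleanly on a
CRC-enabled (V5) filesystem is untested here, so treat it as a starting point
rather than a known fix.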