Re: mounting XFS RAID-1 disk partition that needs repair.




I haven't tried the suggestions yet, but here are some diagnostics on what happens when I attempt to mount it.
Upon running *mdadm --assemble /dev/md40 /mnt/dvd --run*, this is the info from /var/log/messages
(note that /mnt/dvd is just an empty mount point that already exists, used here for convenience):

Nov 24 12:21:42 fcshome kernel: md: md40 stopped.
Nov 24 12:21:42 fcshome kernel: md/raid1:md40: active with 1 out of 2 mirrors
Nov 24 12:21:42 fcshome kernel: md40: detected capacity change from 0 to 996887429120
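
The array does come up, just degraded; its state can be double-checked before mounting with the usual commands (shown generically here, not pasted from this session):

cat /proc/mdstat
sudo mdadm --detail /dev/md40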

Output from doing:
sudo mount /dev/md40 /mnt/dvd
mount: mount /dev/md40 on /mnt/dvd failed: Structure needs cleaning

Corresponding items from /var/log/messages:
Nov 24 12:22:55 fcshome kernel: XFS (md40): Superblock earlier than Version 5 has XFS_[PQ]UOTA_{ENFD|CHKD} bits.
Nov 24 12:22:55 fcshome kernel: XFS (md40): Metadata corruption detected at xfs_sb_read_verify+0x122/0x160 [xfs], xfs_sb block 0xffffffffffffffff
Nov 24 12:22:55 fcshome kernel: XFS (md40): Unmount and run xfs_repair
Nov 24 12:22:55 fcshome kernel: XFS (md40): First 128 bytes of corrupted metadata buffer:
Nov 24 12:22:55 fcshome kernel: ffff8e0c8f4e0000: 58 46 53 42 00 00 10 00 00 00 00 00 0e 81 b1 e0  XFSB............
Nov 24 12:22:55 fcshome kernel: ffff8e0c8f4e0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Nov 24 12:22:55 fcshome kernel: ffff8e0c8f4e0020: d2 22 a7 30 dd 88 48 8b bd bb 9c 8b 2a 22 72 cc  .".0..H.....*"r.
Nov 24 12:22:55 fcshome kernel: ffff8e0c8f4e0030: 00 00 00 00 08 00 00 04 00 00 00 00 00 00 00 80  ................
Nov 24 12:22:55 fcshome kernel: ffff8e0c8f4e0040: 00 00 00 00 00 00 00 81 00 00 00 00 00 00 00 82  ................
Nov 24 12:22:55 fcshome kernel: ffff8e0c8f4e0050: 00 00 00 01 00 74 0d 8f 00 00 00 20 00 00 00 00  .....t..... ....
Nov 24 12:22:55 fcshome kernel: ffff8e0c8f4e0060: 00 00 80 00 30 c4 02 00 01 00 00 10 00 00 00 00  ....0...........
Nov 24 12:22:55 fcshome kernel: ffff8e0c8f4e0070: 00 00 00 00 00 00 00 00 0c 09 08 04 17 00 00 19  ................
Nov 24 12:22:55 fcshome kernel: XFS (md40): SB validate failed with error -117.
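
As an aside, I understand a read-only, no-log-recovery mount is sometimes tried before any repair, to see whether data can be copied off as-is; I haven't attempted it yet, but it would look something like this (options are my guess at the usual ones):

sudo mount -o ro,norecovery /dev/md40 /mnt/dvd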

Running xfs_repair gives:
sudo xfs_repair /dev/md40
Phase 1 - find and verify superblock...
xfs_repair: V1 inodes unsupported. Please try an older xfsprogs.
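
Per Simon's suggestion quoted below, one way to try an older xfs_repair without installing it system-wide would be to unpack the old RPM into a directory and run the binary from there; roughly like this (the package name is from his mail, the paths are just what I'd expect):

mkdir ~/old-xfsprogs && cd ~/old-xfsprogs
# after downloading the old RPM into this directory:
rpm2cpio xfsprogs-2.9.4-1.el4.centos.x86_64.rpm | cpio -idmv
# the binary should land under ./usr/sbin/ (or ./sbin/, depending on the package)
sudo ./usr/sbin/xfs_repair -n /dev/md40    # -n = check only, no modifications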

Before proceeding with other experiments, I decided to use dd to create an image file of that partition
on my local disk, so I could mess with it with less chance of trashing the on-disk partition. When
attempting to use it, I get:

sudo mdadm --assemble /dev/md41 ./part4.img --run
mdadm: ./part4.img is not a block device.
mdadm: ./part4.img has no superblock - assembly aborted
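
For context, the image was made along these lines (the exact dd options are from memory, so treat them as approximate):

sudo dd if=/dev/sdd4 of=./part4.img bs=1M status=progress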

So I thought maybe the image had somehow become corrupted, and I checked:

sudo md5sum /dev/sdd4
bd7cac3c886e7b3110e28100e119bb82  /dev/sdd4

and

md5sum part4.img
bd7cac3c886e7b3110e28100e119bb82  part4.img

which shows the partition and its disk image to be identical.

Why shouldn't a dd image of a partition work just as well (for my purposes) as the actual disk
partition? I've certainly done this before with EXTn and NTFS filesystems; is XFS somehow different
in this regard?
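
Rereading the mdadm error above, though, I wonder if the problem is simply that mdadm wants a block device rather than a plain file, in which case attaching the image to a loop device first might be all that's needed. A sketch of what I mean (the loop device name is just whatever losetup hands back):

sudo losetup --find --show ./part4.img    # prints the allocated loop device, e.g. /dev/loop0
sudo mdadm --assemble /dev/md41 /dev/loop0 --run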

Do any of you know what I'm doing wrong here?

Thanks in advance!

Fred

On Tue, Nov 23, 2021 at 3:57 PM Fred <fred.fredex@xxxxxxxxx> wrote:

> I'll take a look at these two ideas soon. Thanks!
>
> On Tue, Nov 23, 2021 at 10:03 AM Simon Matter <simon.matter@xxxxxxxxx>
> wrote:
>
>> Hi,
>>
>> > I'm attempting to extract data from a HD that has a bunch of linux-raid
>> > partitions, including one large one with data I need to save off the
>> disk.
>> >
>> > I actually have two drives like that (both not from the same RAID pair),
>> > and one of them I was successful in creating a MD device so I could
>> mount
>> > it RO and copy off a ton of data.
>> >
>> > the second one fails to mount, saying the XFS filesystem is corrupted.
>> > Attempting to run XFS_repair I get a message that the filesystem is
>> XFS-1
>> > and I need an older version of XFS tools to do it.
>>
>> Are you sure the filesystem is really corrupt? Maybe it's only your kernel
>> which doesn't understand the old XFS version?
>>
>> To use older xfs_repair, you can just download an older version like
>> xfsprogs-2.9.4-1.el4.centos.x86_64.rpm, extract it to a directory and call
>> the xfs_repair binary from the package.
>>
>> I'm not sure whether you need a matching xfs kernel module to run
>> xfs_repair successfully.
>>
>> Regards,
>> Simon
>>
>
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos


