Re: Problem with RAID1 - unable to read superblock

On Wed, 5 Feb 2025 at 21:42, Pascal Hambourg
<pascal@xxxxxxxxxxxxxxx> wrote:
>
> On 05/02/2025 at 13:03, Raffaele Morelli wrote:
> >
> > Last week we found it was in read-only mode; I stopped the array and tried to
> > reassemble it, with no success.
> > dmesg recorded this error:
> >
> > [7013959.352607] buffer_io_error: 7 callbacks suppressed
> > [7013959.352612] Buffer I/O error on dev md126, logical block
> > 927915504, async page read
> > [7013959.352945] EXT4-fs (md126): unable to read superblock
>
> No error messages from the underlying drives?

I have the logs; I still need to scan them for details.

> > We found that one of the drives had various damaged sectors, so we removed
> > both and created two images first (using ddrescue -d -M -r 10).
>
> Is either image complete or do both have missing blocks ?

There are no errors and ddrescue reports "pct rescued: 100%", so everything seems fine.
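(For reference, a minimal sketch of the imaging and of one way to verify completeness; the device and file names below are placeholders, and the mapfile check via ddrescuelog is an assumption about how the 100% figure could be confirmed, not something stated in the thread.)

  # Image each member disk; -d = direct disc access, -M = retrim bad areas, -r 10 = retry passes
  ddrescue -d -M -r 10 /dev/sdX disk-X.img disk-X.map

  # Inspect the mapfile: it records rescued vs. bad areas of the source disk
  ddrescuelog -t disk-X.map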

> > We've set up two loopback devices (using losetup --partscan --find
> > --show) and would like to recover as much as possible.
> >
> > Should I try to reassemble the raid with something like
> > mdadm --assemble --verbose /dev/md0 --level=1 --raid-devices=2
> > /dev/loop18 /dev/loop19
>
> If the RAID members were partitions you must use the partitions
> /dev/loopXpY, not the whole loop devices.
>
> If either ddrescue image is complete, you can assemble the array in
> degraded mode from a single complete image.
> If both images are incomplete and the array has a valid bad block list,
> you can try to assemble the array from both images.
>
> In either case, assemble the array read-only.
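(A minimal sketch of the read-only assembly described above, with hypothetical image and device names; whether to pass the whole loop devices or the /dev/loopXpY partitions depends on whether the RAID members were whole disks or partitions.)

  # Attach each rescued image; --partscan exposes any contained partitions as /dev/loopXpY
  losetup --partscan --find --show disk-a.img    # prints e.g. /dev/loop22
  losetup --partscan --find --show disk-b.img    # prints e.g. /dev/loop23

  # Assemble the existing array read-only from the members
  # (use /dev/loop22p1 /dev/loop23p1 instead if the members were partitions)
  mdadm --assemble --readonly --verbose /dev/md0 /dev/loop22 /dev/loop23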

Actually, here is where we are now:

/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb  5 11:12:32 2025
        Raid Level : raid1
        Array Size : 3906885440 (3.64 TiB 4.00 TB)
     Used Dev Size : 3906885440 (3.64 TiB 4.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb  5 22:27:49 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : aria-pcpl:0  (local to host aria-pcpl)
              UUID : 3b27a574:b12fa078:28872721:15bf710c
            Events : 7984

    Number   Major   Minor   RaidDevice State
       0       7       22        0      active sync   /dev/loop22
       1       7       23        1      active sync   /dev/loop23
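(Assuming the ext4 filesystem sits directly on /dev/md0, a possible next step is a read-only look at the superblock that the original dmesg output reported as unreadable; neither command below writes to the array.)

  # Print the ext4 superblock without writing
  dumpe2fs -h /dev/md0

  # Dry-run filesystem check: reports problems, changes nothing
  e2fsck -n /dev/md0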



