Re: raid5 revert-reshape issue

Looks like when I first issued create I had passed --data-offset=2048,
but it got recorded as 4096 sectors (mdadm takes the value in KiB by
default, and 2048 KiB = 4096 sectors). So I stopped the array and gave
it another try with --data-offset=1024 (i.e. 2048 sectors), making sure
all disks were in the right order:

# mdadm -C /dev/md/RAID --metadata=1.2 --raid-devices=6 --readonly
--level=5 --chunk=512 --data-offset=1024 --layout=ls /dev/sdf1
/dev/sde1 /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sdg1 --assume-clean

and it looked promising after checking /proc/mdstat:

Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 read-only sdg1[0] sdb1[5] sde1[4] sdc1[3] sda1[2] sdf1[1]
      9767562240 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>
-----------------------------------------------------------------------------------------
It also looked good after checking the array details with mdadm --detail:

/dev/md127:
        Version : 1.2
  Creation Time : Sat Jan 19 21:10:20 2019
     Raid Level : raid5
     Array Size : 9767562240 (9315.07 GiB 10001.98 GB)
  Used Dev Size : 1953512448 (1863.01 GiB 2000.40 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Jan 20 01:02:46 2019
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : andromedahive:RAID  (local to host andromedahive)
           UUID : d13bcee8:4ad3bcd1:96794689:c00cd066
         Events : 4

    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync   /dev/sdg1
       1       8       81        1      active sync   /dev/sdf1
       2       8        1        2      active sync   /dev/sda1
       3       8       33        3      active sync   /dev/sdc1
       4       8       65        4      active sync   /dev/sde1
       5       8       17        5      active sync   /dev/sdb1
-----------------------------------------------------------------------------------------
Then I checked with mdadm --examine whether the first device's data
offset was set properly this time:

/dev/sdg1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : d13bcee8:4ad3bcd1:96794689:c00cd066
           Name : andromedahive:RAID  (local to host andromedahive)
  Creation Time : Sat Jan 19 21:10:20 2019
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 3907025039 (1863.01 GiB 2000.40 GB)
     Array Size : 9767562240 (9315.07 GiB 10001.98 GB)
  Used Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=143 sectors
          State : clean
    Device UUID : 054f6070:b6074cdd:5b887a73:5471e3ba

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Jan 20 01:35:15 2019
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : e879e431 - correct
         Events : 4

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
-----------------------------------------------------------------------------------------
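(The remaining members can be checked the same way in one go. A minimal
sketch, assuming the member names shown in /proc/mdstat above -- adjust
the glob if your device names differ:)

# for d in /dev/sd[abcefg]1; do
>   echo "== $d"
>   mdadm --examine "$d" | grep -E 'Data Offset|Device Role'
> done
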
As the first disk's data offset and all the other parameters looked
good, I tried to mount it with confidence:

# mount -t ext4 /dev/md/RAID /mnt/RAIDVol

and it worked! :)
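
(In hindsight, an even more cautious first mount would also have
skipped ext4 journal replay -- something like this, with the same mount
point as above:)

# mount -o ro,noload -t ext4 /dev/md/RAID /mnt/RAIDVol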

Then I checked the array's file system read-only (-n, so no changes were made):
# fsck.ext4 -fn /dev/md127
and it passed all checks...
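
(Before trusting it further, one cheap extra check is to force-read
every file and watch for I/O errors -- a minimal sketch, assuming the
mount point above:)

# find /mnt/RAIDVol -type f -print0 | xargs -0 cat > /dev/null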

So, I did lots of data checking before setting it to read-write with:

# mdadm --readwrite /dev/md127

and checked /proc/mdstat:

Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 sdg1[0] sdb1[5] sde1[4] sdc1[3] sda1[2] sdf1[1]
      9767562240 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>

Then I fixed mdadm.conf with the new array UUID, rebooted, and happily
verified that the array is now working without issues.
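
(For anyone in the same spot: the new ARRAY line can be taken straight
from mdadm rather than edited by hand -- a minimal sketch; the config
path and the initramfs step vary by distro:)

# mdadm --detail --scan
  ... then replace the old ARRAY line in /etc/mdadm/mdadm.conf (or
  /etc/mdadm.conf) with that output ...
# update-initramfs -u      (Debian/Ubuntu; dracut -f on dracut-based distros)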

Thanks a lot guys for your help!

Romulo

On Sat, Jan 19, 2019 at 3:00 PM Andreas Klauer
<Andreas.Klauer@xxxxxxxxxxxxxx> wrote:
>
> On Sat, Jan 19, 2019 at 11:39:57AM -0500, Romulo Albuquerque wrote:
> > In short, I was able to get it clean, but I'm not able to mount it,
> > this is how it looks now:
>
> - old
> + new
>
>   /dev/sdb1:
> - Data Offset : 2048 sectors
> + Data Offset : 4096 sectors
> - Device Role : Active device 4
> + Device Role : Active device 3
>
> So. You have the wrong data offset. You have the wrong drive order
> or maybe, your device names changed.
>
> There could be more things, I didn't check.
>
> > so, checked it with fsck.ext4 -f:
>
> do not run the file system crunch killer.
>
> > Do I have any chance to mount it?
>
> Full stop.
>
> Overlays. Use them:
>
> https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file
>
> Things to look out for when using mdadm --create:
>
> https://unix.stackexchange.com/a/131927/30851
>
> Good luck
> Andreas Klauer


