Re: 2 Disks Jumped Out While Reshaping RAID5

I installed mdadm 3.0 and ran -Af, and now it's continuing the reshape!!!

root@Adam:~# mdadm --version
mdadm - v3.0 - 2nd June 2009
root@Adam:~# mdadm -Af --verbose /dev/md0
mdadm: looking for devices for /dev/md0
mdadm: cannot open device /dev/sdi5: Device or resource busy
mdadm: /dev/sdi5 has wrong uuid.
mdadm: no recogniseable superblock on /dev/sdi2
mdadm: /dev/sdi2 has wrong uuid.
mdadm: cannot open device /dev/sdi1: Device or resource busy
mdadm: /dev/sdi1 has wrong uuid.
mdadm: cannot open device /dev/sdi: Device or resource busy
mdadm: /dev/sdi has wrong uuid.
mdadm: no RAID superblock on /dev/sdh
mdadm: /dev/sdh has wrong uuid.
mdadm: no RAID superblock on /dev/sdg
mdadm: /dev/sdg has wrong uuid.
mdadm: no RAID superblock on /dev/sdf
mdadm: /dev/sdf has wrong uuid.
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sde has wrong uuid.
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdd has wrong uuid.
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdc has wrong uuid.
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sdb has wrong uuid.
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda has wrong uuid.
mdadm: /dev/sdh1 is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdg1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 7.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 6.
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 4.
mdadm: forcing event count in /dev/sdf1(0) from 949204 upto 949214
mdadm: added /dev/sde1 to /dev/md0 as 1
mdadm: added /dev/sdc1 to /dev/md0 as 2
mdadm: added /dev/sdg1 to /dev/md0 as 3
mdadm: added /dev/sda1 to /dev/md0 as 4
mdadm: added /dev/sdh1 to /dev/md0 as 5
mdadm: added /dev/sdb1 to /dev/md0 as 6
mdadm: added /dev/sdd1 to /dev/md0 as 7
mdadm: added /dev/sdf1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 7 drives (out of 8).
root@Adam:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdf1[0] sdd1[7] sdb1[6] sdh1[5] sda1[4] sdc1[2] sde1[1]
      5860558848 blocks super 0.91 level 5, 256k chunk, algorithm 2 [8/7] [UUU_UUUU]
      [=========>...........]  reshape = 49.1% (479633152/976759808) finish=950.5min speed=8704K/sec

unused devices: <none>
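
Meanwhile I'm keeping an eye on the progress, and if it crawls I may raise
the md speed limits. Roughly this (just a sketch; 50000 is a number I
picked, not anything recommended in this thread):

watch -n 60 cat /proc/mdstat                        # re-check progress every minute
echo 50000 > /proc/sys/dev/raid/speed_limit_min     # let md spend more bandwidth on the reshape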

sdg1 is not in the mdstat list above. Is that correct?! sdg1 was one of
the array's disks before the expansion, so I guess the array is now
degraded yet is reshaping as if it had 8 disks, correct?

So after the reshape is over, I can add sdg1 again and it will
resync properly, right?
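
I assume the re-add itself is the usual routine, something like this
sketch (only after the reshape has finished, and only if sdg itself is
healthy; --zero-superblock wipes sdg1's stale metadata, so it must not be
pointed at any other disk):

mdadm --zero-superblock /dev/sdg1   # discard the stale superblock (old event count)
mdadm /dev/md0 --add /dev/sdg1      # add it back; md rebuilds onto it as the 8th member
cat /proc/mdstat                    # recovery progress should show up here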

On Mon, Sep 7, 2009 at 2:55 AM, Majed B. <majedb@xxxxxxxxx> wrote:
> Thanks a lot Neil!
>
> I didn't know that the site requires you to register to download attachments. My bad.
> Here's the output of examine:
>
> /dev/sda1:
>          Magic : a92b4efc
>        Version : 00.91.00
>           UUID : ed1a2670:03308d80:95a69c9f:ccf9605f
>  Creation Time : Sat May 23 00:22:49 2009
>     Raid Level : raid5
>  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
>     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
>   Raid Devices : 8
>  Total Devices : 8
> Preferred Minor : 0
>
>  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>  Delta Devices : 1 (7->8)
>
>    Update Time : Wed Sep  2 13:28:39 2009
>          State : clean
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 1
>  Spare Devices : 0
>       Checksum : 5b5a1163 - correct
>         Events : 949214
>
>         Layout : left-symmetric
>     Chunk Size : 256K
>
>      Number   Major   Minor   RaidDevice State
> this     4       8        1        4      active sync   /dev/sda1
>
>   0     0       0        0        0      removed
>   1     1       8       65        1      active sync   /dev/sde1
>   2     2       8       33        2      active sync   /dev/sdc1
>   3     3       0        0        3      faulty removed
>   4     4       8        1        4      active sync   /dev/sda1
>   5     5       8      113        5      active sync   /dev/sdh1
>   6     6       8       17        6      active sync   /dev/sdb1
>   7     7       8       49        7      active sync   /dev/sdd1
> /dev/sdb1:
>          Magic : a92b4efc
>        Version : 00.91.00
>           UUID : ed1a2670:03308d80:95a69c9f:ccf9605f
>  Creation Time : Sat May 23 00:22:49 2009
>     Raid Level : raid5
>  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
>     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
>   Raid Devices : 8
>  Total Devices : 8
> Preferred Minor : 0
>
>  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>  Delta Devices : 1 (7->8)
>
>    Update Time : Wed Sep  2 13:28:39 2009
>          State : clean
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 1
>  Spare Devices : 0
>       Checksum : 5b5a1177 - correct
>         Events : 949214
>
>         Layout : left-symmetric
>     Chunk Size : 256K
>
>      Number   Major   Minor   RaidDevice State
> this     6       8       17        6      active sync   /dev/sdb1
>
>   0     0       0        0        0      removed
>   1     1       8       65        1      active sync   /dev/sde1
>   2     2       8       33        2      active sync   /dev/sdc1
>   3     3       0        0        3      faulty removed
>   4     4       8        1        4      active sync   /dev/sda1
>   5     5       8      113        5      active sync   /dev/sdh1
>   6     6       8       17        6      active sync   /dev/sdb1
>   7     7       8       49        7      active sync   /dev/sdd1
> /dev/sdc1:
>          Magic : a92b4efc
>        Version : 00.91.00
>           UUID : ed1a2670:03308d80:95a69c9f:ccf9605f
>  Creation Time : Sat May 23 00:22:49 2009
>     Raid Level : raid5
>  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
>     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
>   Raid Devices : 8
>  Total Devices : 8
> Preferred Minor : 0
>
>  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>  Delta Devices : 1 (7->8)
>
>    Update Time : Wed Sep  2 13:28:39 2009
>          State : clean
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 1
>  Spare Devices : 0
>       Checksum : 5b5a117f - correct
>         Events : 949214
>
>         Layout : left-symmetric
>     Chunk Size : 256K
>
>      Number   Major   Minor   RaidDevice State
> this     2       8       33        2      active sync   /dev/sdc1
>
>   0     0       0        0        0      removed
>   1     1       8       65        1      active sync   /dev/sde1
>   2     2       8       33        2      active sync   /dev/sdc1
>   3     3       0        0        3      faulty removed
>   4     4       8        1        4      active sync   /dev/sda1
>   5     5       8      113        5      active sync   /dev/sdh1
>   6     6       8       17        6      active sync   /dev/sdb1
>   7     7       8       49        7      active sync   /dev/sdd1
> /dev/sdd1:
>          Magic : a92b4efc
>        Version : 00.91.00
>           UUID : ed1a2670:03308d80:95a69c9f:ccf9605f
>  Creation Time : Sat May 23 00:22:49 2009
>     Raid Level : raid5
>  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
>     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
>   Raid Devices : 8
>  Total Devices : 8
> Preferred Minor : 0
>
>  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>  Delta Devices : 1 (7->8)
>
>    Update Time : Wed Sep  2 13:28:39 2009
>          State : clean
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 1
>  Spare Devices : 0
>       Checksum : 5b5a1199 - correct
>         Events : 949214
>
>         Layout : left-symmetric
>     Chunk Size : 256K
>
>      Number   Major   Minor   RaidDevice State
> this     7       8       49        7      active sync   /dev/sdd1
>
>   0     0       0        0        0      removed
>   1     1       8       65        1      active sync   /dev/sde1
>   2     2       8       33        2      active sync   /dev/sdc1
>   3     3       0        0        3      faulty removed
>   4     4       8        1        4      active sync   /dev/sda1
>   5     5       8      113        5      active sync   /dev/sdh1
>   6     6       8       17        6      active sync   /dev/sdb1
>   7     7       8       49        7      active sync   /dev/sdd1
> /dev/sde1:
>          Magic : a92b4efc
>        Version : 00.91.00
>           UUID : ed1a2670:03308d80:95a69c9f:ccf9605f
>  Creation Time : Sat May 23 00:22:49 2009
>     Raid Level : raid5
>  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
>     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
>   Raid Devices : 8
>  Total Devices : 8
> Preferred Minor : 0
>
>  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>  Delta Devices : 1 (7->8)
>
>    Update Time : Wed Sep  2 13:28:39 2009
>          State : clean
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 1
>  Spare Devices : 0
>       Checksum : 5b5a119d - correct
>         Events : 949214
>
>         Layout : left-symmetric
>     Chunk Size : 256K
>
>      Number   Major   Minor   RaidDevice State
> this     1       8       65        1      active sync   /dev/sde1
>
>   0     0       0        0        0      removed
>   1     1       8       65        1      active sync   /dev/sde1
>   2     2       8       33        2      active sync   /dev/sdc1
>   3     3       0        0        3      faulty removed
>   4     4       8        1        4      active sync   /dev/sda1
>   5     5       8      113        5      active sync   /dev/sdh1
>   6     6       8       17        6      active sync   /dev/sdb1
>   7     7       8       49        7      active sync   /dev/sdd1
> /dev/sdf1:
>          Magic : a92b4efc
>        Version : 00.91.00
>           UUID : ed1a2670:03308d80:95a69c9f:ccf9605f
>  Creation Time : Sat May 23 00:22:49 2009
>     Raid Level : raid5
>  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
>     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
>   Raid Devices : 8
>  Total Devices : 8
> Preferred Minor : 0
>
>  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>  Delta Devices : 1 (7->8)
>
>    Update Time : Wed Sep  2 06:40:04 2009
>          State : clean
>  Active Devices : 7
> Working Devices : 7
>  Failed Devices : 1
>  Spare Devices : 0
>       Checksum : 5b59b1c1 - correct
>         Events : 949204
>
>         Layout : left-symmetric
>     Chunk Size : 256K
>
>      Number   Major   Minor   RaidDevice State
> this     0       8       81        0      active sync   /dev/sdf1
>
>   0     0       8       81        0      active sync   /dev/sdf1
>   1     1       8       65        1      active sync   /dev/sde1
>   2     2       8       33        2      active sync   /dev/sdc1
>   3     3       0        0        3      faulty removed
>   4     4       8        1        4      active sync   /dev/sda1
>   5     5       8      113        5      active sync   /dev/sdh1
>   6     6       8       17        6      active sync   /dev/sdb1
>   7     7       8       49        7      active sync   /dev/sdd1
> /dev/sdg1:
>          Magic : a92b4efc
>        Version : 00.91.00
>           UUID : ed1a2670:03308d80:95a69c9f:ccf9605f
>  Creation Time : Sat May 23 00:22:49 2009
>     Raid Level : raid5
>  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
>     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
>   Raid Devices : 8
>  Total Devices : 8
> Preferred Minor : 0
>
>  Reshape pos'n : 2554257664 (2435.93 GiB 2615.56 GB)
>  Delta Devices : 1 (7->8)
>
>    Update Time : Wed Sep  2 00:10:39 2009
>          State : clean
>  Active Devices : 8
> Working Devices : 8
>  Failed Devices : 0
>  Spare Devices : 0
>       Checksum : fba3471a - correct
>         Events : 874530
>
>         Layout : left-symmetric
>     Chunk Size : 256K
>
>      Number   Major   Minor   RaidDevice State
> this     3       8       97        3      active sync   /dev/sdg1
>
>   0     0       8       81        0      active sync   /dev/sdf1
>   1     1       8       65        1      active sync   /dev/sde1
>   2     2       8       33        2      active sync   /dev/sdc1
>   3     3       8       97        3      active sync   /dev/sdg1
>   4     4       8        1        4      active sync   /dev/sda1
>   5     5       8      113        5      active sync   /dev/sdh1
>   6     6       8       17        6      active sync   /dev/sdb1
>   7     7       8       49        7      active sync   /dev/sdd1
> /dev/sdh1:
>          Magic : a92b4efc
>        Version : 00.91.00
>           UUID : ed1a2670:03308d80:95a69c9f:ccf9605f
>  Creation Time : Sat May 23 00:22:49 2009
>     Raid Level : raid5
>  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
>     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
>   Raid Devices : 8
>  Total Devices : 8
> Preferred Minor : 0
>
>  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>  Delta Devices : 1 (7->8)
>
>    Update Time : Wed Sep  2 13:28:39 2009
>          State : clean
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 1
>  Spare Devices : 0
>       Checksum : 5b5a11d5 - correct
>         Events : 949214
>
>         Layout : left-symmetric
>     Chunk Size : 256K
>
>      Number   Major   Minor   RaidDevice State
> this     5       8      113        5      active sync   /dev/sdh1
>
>   0     0       0        0        0      removed
>   1     1       8       65        1      active sync   /dev/sde1
>   2     2       8       33        2      active sync   /dev/sdc1
>   3     3       0        0        3      faulty removed
>   4     4       8        1        4      active sync   /dev/sda1
>   5     5       8      113        5      active sync   /dev/sdh1
>   6     6       8       17        6      active sync   /dev/sdb1
>   7     7       8       49        7      active sync   /dev/sdd1
>
>
> I have already downloaded and compiled mdadm 3.0, but haven't installed
> it yet, awaiting further instructions from you. I'll install it now,
> run -Af, and report back with what happens.
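>
> For reference, by "install" I mean the usual routine, roughly this (a
> sketch, assuming the tarball from kernel.org):
>
> wget http://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-3.0.tar.gz
> tar xzf mdadm-3.0.tar.gz && cd mdadm-3.0
> make && make install
> mdadm --version    # should now report v3.0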
>
> Thank you again!
>
> On Mon, Sep 7, 2009 at 2:52 AM, Neil Brown <neilb@xxxxxxx> wrote:
>> On Sunday September 6, majedb@xxxxxxxxx wrote:
>>> I forgot to mention that I'm running mdadm 2.6.7.1 (latest in Ubuntu
>>> Server repositories).
>>
>> You will need at least 2.6.8 to be able to assemble arrays which are
>> in the middle of a reshape.  I would suggest 2.6.9 or 3.0.
>>
>>>
>>> I tried forcing the assembly, but as mentioned, I just got an error:
>>> root@Adam:/var/www# mdadm -Af /dev/md0
>>> mdadm: superblock on /dev/sdh1 doesn't match others - assembly aborted
>>
>> This incorrect message is fixed in 2.6.8 and later.
>>
>>>
>>> I know I could've pasted whatever I wrote here, but it seemed
>>> redundant. I'll keep your hint in mind for the next time, if any
>>> (hopefully not).
>>
>> Redundant?  How is that relevant?
>> If you want help, your goal should be to make it as easy as possible
>> for people to help you.  Having all information in one email message
>> is easy.  Having to use a browser to get some of it makes it hard.
>> Having to register on the website to download an attachment makes it
>> nearly impossible.
>>
>>>
>>> This may be of an interest to you:
>>> root@Adam:/var/www# mdadm -E /dev/sd[a-h]1 | grep Reshap
>>> sda1  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>>> sdb1  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>>> sdc1  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>>> sdd1  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>>> sde1  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>>> sdf1   Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>>> sdg1  Reshape pos'n : 2554257664 (2435.93 GiB 2615.56 GB)
>>> sdh1  Reshape pos'n : 3357066496 (3201.55 GiB 3437.64 GB)
>>
>> If you can post that again but without the "grep" I might be able to
>> be more helpful.  (i.e. the complete output of "mdadm -E /dev/sd[a-h]1").
>>
>> NeilBrown
>>
>>
>>
>>>
>>> Note that sdd1 was the spare.
>>>
>>> The UUIDs are all the same and the superblocks are all similar except
>>> for the reshape position of sdg1.
>>>
>>> I didn't try to recreate the array: I've never faced this issue
>>> before, so I don't know what kind of repercussions that may have.
>>>
>>> What I do know is that, in the worst-case scenario, I can recreate the
>>> array out of 7 disks (all but sdg1), but I'd lose about 2.3TB worth of
>>> data :(
>>>
>>> On Sun, Sep 6, 2009 at 12:32 AM, NeilBrown <neilb@xxxxxxx> wrote:
>>> > On Sun, September 6, 2009 6:22 am, Majed B. wrote:
>>> >> Hello all,
>>> >>
>>> >> I have posted my problem already here:
>>> >> http://ubuntuforums.org/showthread.php?p=7900571#post7900571
>>> >> It also has file attachments of the output of mdadm -E /dev/sd[a-h]1
>>> >
>>> > It seems that you need to log in to read the attachments... so I haven't.
>>> >
>>> >>
>>> >> I appreciate any help on this.
>>> >
>>> > Hopefully you just need to add "--force" to the assemble command
>>> > and it will all just work.  However I haven't tested that on an array
>>> > that is in the process of a reshape so I cannot promise.
>>> > I might try to reproduce your situation with some scratch drives
>>> > and check that mdadm -Af does the right thing, but that won't be for a day
>>> > or so, and as I cannot see the --examine output I might get the
>>> > situation a bit wrong ... (hint hint: it is always best to post
>>> > full information rather than pointers to it, unless said information is
>>> > really really big).
>>> >
>>> > NeilBrown
>>>
>>> --
>>>        Majed B.
>>
>
>
>
> --
>       Majed B.
>



-- 
       Majed B.
