Re: RAID 5 array recovery - two drive errors in external enclosure


 



Well, thank god I copied everything off the array this weekend, because something strange is going on:

I had finally gotten the array up with the correct drive order and the failed drive marked missing:

mdadm -C /dev/md0 -l 5 -n 5 -c 256 /dev/sdf1 /dev/sdc1 /dev/sdb1 /dev/sdd1 missing
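(For anyone trying this at home: before trusting a recreated array, it's worth a sanity check first; something along these lines, where /mnt/recovery is just a placeholder mount point:)

    # confirm the array came up degraded with the expected chunk size and order
    mdadm --detail /dev/md0

    # mount read-only first, so a wrong order/chunk guess can't damage anything
    mount -o ro -t ext3 /dev/md0 /mnt/recovery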

------------
After copying everything off, I power-cycled my server and tried to
bring the array back up again using:

mdadm -A /dev/md0 /dev/sdf1 /dev/sdc1 /dev/sdb1 /dev/sdd1 missing

I received the error: mdadm: superblock on /dev/sdc1 doesn't match
others - assembly aborted.
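(I realize now that "missing" only means something to -C; with -A you just list the members that actually exist. Either way, my next step is to compare the superblocks directly; something like this should show whether sdc1's UUID or event count has diverged from the others:)

    # dump each member's md superblock; the UUID, Update Time and Events
    # count should agree across members of the same array
    mdadm --examine /dev/sdf1 /dev/sdc1 /dev/sdb1 /dev/sdd1

    # or just the fields that matter:
    mdadm --examine /dev/sd[bcdf]1 | egrep 'dev|UUID|Events|Update'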

This is strange, since I had this seemingly working and was able to
copy all the data off this weekend.
The drives shouldn't have changed order: I haven't unplugged anything
or swapped any cables.

Another issue: re-running the create command that worked on the array before,

mdadm -C /dev/md0 -l 5 -n 5 -c 256 /dev/sdf1 /dev/sdc1 /dev/sdb1 /dev/sdd1 missing

now yields my old problem of not being able to mount:
EXT3-fs: md0: couldn't mount because of unsupported optional features
(3fd18e00).
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
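(Next I'll poke at the ext3 superblock directly; assuming e2fsprogs is available, roughly:)

    # read the primary ext3 superblock; garbage here usually means the
    # array geometry (order/chunk size) is wrong, not the filesystem
    dumpe2fs -h /dev/md0

    # read-only fsck against a backup superblock (32768 assumes 4k blocks)
    e2fsck -n -b 32768 /dev/md0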

What gives?  Surely the drive order couldn't have changed just from a
reboot?  The same number of drives is present, and the drive letters
all look the same.
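(To rule out silently shuffled letters, I'll map each sdX against its serial number; the by-id names are stable across reboots:)

    # show which physical disk owns each sdX letter right now
    ls -l /dev/disk/by-id/ | grep -v part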


-Tim



On Fri, Sep 18, 2009 at 6:35 PM, Guy Watkins <guy@xxxxxxxxxxxxxxxx> wrote:
> If you are starting over, maybe you should use RAID6?
>
> } -----Original Message-----
> } From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-
> } owner@xxxxxxxxxxxxxxx] On Behalf Of Tom Carlson
> } Sent: Friday, September 18, 2009 9:18 PM
> } To: Majed B.
> } Cc: Tim Bostrom; linux-raid
> } Subject: Re: RAID 5 array recovery - two drive errors in external
> } enclosure
> }
> } 2009/9/19 Majed B. <majedb@xxxxxxxxx>:
> } > If you're going to take all your data off, I would suggest you do a
> } > clean start: zero out all the disks to force the remapping of bad
> } > sectors, then run smartctl -t offline on all disks. After it's done
> } > (it will take A LONG time), create your filesystem on the array and
> } > put your data back.
> }
> } I'd recommend running the badblocks program on each device to be put
> } in the array too, just to be certain that none of your disks are going
> } to go horribly flaky on assembly and rebuild... I'm not sure whether
> } the offline SMART test does a full surface scan like badblocks would.
> }
> } T
>
>
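For the archives, here's the rough plan I'm considering based on the suggestions above. (Device names are just my current letters, and badblocks -w is destructive, so only on disks that are already empty.)

    # destructive read-write surface test of each member disk
    badblocks -wsv /dev/sdb

    # kick off the long SMART offline test; check the results later with -a
    smartctl -t offline /dev/sdb
    smartctl -a /dev/sdb

    # if I'm starting over anyway, RAID6 survives two failed drives
    mdadm -C /dev/md0 -l 6 -n 5 -c 256 /dev/sd[bcdef]1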



-- 
-tim
