RE: RAID 5 - One drive dropped while replacing another

> -----Original Message-----
> From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-
> owner@xxxxxxxxxxxxxxx] On Behalf Of Roman Mamedov
> Sent: Tuesday, February 01, 2011 5:36 PM
> To: Bryan Wintermute
> Cc: linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: RAID 5 - One drive dropped while replacing another
> 
> On Tue, 1 Feb 2011 15:27:50 -0800
> Bryan Wintermute <bryanwintermute@xxxxxxxxx> wrote:
> 
> > I have a RAID5 setup with 15 drives.
> 
> Looks like you got the problem you were so desperately asking for, with
> this
> crazy setup. :(
> 
> > Is there anything I can do to get around these bad sectors or force
> mdadm
> > to ignore them to at least complete the recovery?
> 
> I suppose the second failed drive is still mostly alive, just has some
> unreadable areas? If so, I suggest that you get another new clean drive,
> and
> while your mdadm array is stopped, copy whatever you can with e.g.
> dd_rescue
> from the semi-dead drive to this new one. Then remove the bad drive from
> the
> system, and start the array with the new drive instead of the bad one.

Before going that route, I would first ask, "How dead is the first dead drive?"
Running dd_rescue against the "dead" drives might recover more data.  Or not.
It might be time to drag out the backups.
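For anyone following along, here is a rough sketch of the clone-and-reassemble workflow described above, using GNU ddrescue (a sibling of the dd_rescue mentioned earlier). The device names (/dev/sdb for the failing member, /dev/sdc for the fresh disk, /dev/md0 for the array) are placeholders only; the DRYRUN guard is there so the commands can be reviewed before anything destructive runs.

```shell
# Hypothetical devices: sdb = semi-dead member, sdc = new clean disk.
# Leave DRYRUN=echo to print the commands; unset it to run them for real.
DRYRUN=echo

# 1. Stop the array so nothing touches the failing member during the copy.
$DRYRUN mdadm --stop /dev/md0

# 2. First pass: grab everything easily readable, skipping the slow
#    scraping of bad areas (-n). The mapfile records what was rescued
#    so later passes can resume and retry only the missing regions.
$DRYRUN ddrescue -f -n /dev/sdb /dev/sdc rescue.map

# 3. Second pass: go back and retry the bad areas a few times (-r3).
$DRYRUN ddrescue -f -r3 /dev/sdb /dev/sdc rescue.map

# 4. Physically remove the bad disk, then assemble with the clone in
#    its place. --force lets mdadm accept a member whose event count
#    is slightly stale.
$DRYRUN mdadm --assemble --force /dev/md0 /dev/sd[c-q]
```

Sectors that ddrescue could not recover will read back as zeros on the clone, so expect some filesystem-level damage in those spots even if the array assembles cleanly.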


