RE: How to un-degrade an array after a totally spurious failure?

> -----Original Message-----
> From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-
> owner@xxxxxxxxxxxxxxx] On Behalf Of NeilBrown
> Sent: Wednesday, May 20, 2009 9:49 PM
> To: Nix
> Cc: linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: How to un-degrade an array after a totally spurious failure?
> 
> On Thu, May 21, 2009 9:10 am, Nix wrote:
> 
> > So, anyone got a command that would help? I'm not even sure if this is
> > assembly or growth: it doesn't quite fit into either of those
> > categories. There must be a way to do this, surely?
> 
> It is neither.  It is management.
> 
>  mdadm --manage /dev/mdX --remove /dev/sdb6
>  mdadm --manage /dev/mdX --add /dev/sdb6
> 
> (The --manage is not actually needed, but it doesn't hurt).
> 
> NeilBrown

	I have exactly the same situation, except that there are two "failed"
disks in a RAID5 array.  As in the OP's case, the "failures" are spurious.
Running the remove and then the add commands puts the disks back in as
spares, not live devices, and the array then just sits there, doing
nothing.  I tried the trick of running

echo repair > /sys/block/md0/md/sync_action

but the array still just sits there saying it is "clean, degraded", with 2
spare and 5 working devices.
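
In case it helps, this is roughly how I have been checking the state
(the array and partition names below are only placeholders for my actual
devices, and the comments describe what I would expect to see rather
than exact output):

cat /proc/mdstat              # array shows as active but degraded, with the re-added disks marked (S) as spares
mdadm --detail /dev/md0       # reports "State : clean, degraded"; the re-added disks are listed as spares, not active
mdadm --examine /dev/sdb6     # superblock of one re-added partition, to compare its event count against the rest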

