Re: Degraded Array


 



You now have a degraded array with one disk down. If you proceed, more
disks might drop out due to errors.

It's best to back up your data, run a check on the array, fix any
problems, and then try to resume the reshape.
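For reference, a recovery sequence along those lines might look like the
sketch below. The device names (/dev/md0, /dev/sdm1, /dev/sdn1) are
placeholders for your actual array and members, and the right order of
operations depends on what --examine and the kernel logs show:

```shell
# Placeholder names -- substitute your real array and member devices.
MD=/dev/md0

# 1. Inspect the array state and the reshape progress.
mdadm --detail $MD
cat /proc/mdstat

# 2. Examine the superblocks (event counts) of the dropped members
#    before re-adding anything.
mdadm --examine /dev/sdm1 /dev/sdn1

# 3. Once the array is otherwise stable, trigger a redundancy check.
echo check > /sys/block/md0/md/sync_action

# 4. Re-add the dropped members; with recent superblocks and a
#    write-intent bitmap this may avoid a full resync.
mdadm $MD --re-add /dev/sdm1
mdadm $MD --re-add /dev/sdn1
```

Take the backup before step 3 and 4, not after.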

On Sat, Dec 4, 2010 at 5:42 AM, Leslie Rhorer <lrhorer@xxxxxxxxxxx> wrote:
>
> Hello everyone.
>
> I was just growing one of my RAID6 arrays from 13 to 14
> members. The array growth had passed its critical stage and had been
> growing for several minutes when the system came to a screeching halt. I
> hit the big red switch, and when the system rebooted, the array assembled,
> but two members are missing. One of the members is the new drive and the
> other is the 13th drive in the RAID set. Of course, the array can run well
> enough with only 12 members, but it's definitely not the best situation,
> especially since the re-shape will take another day and a half. Is it best
> I go ahead and leave the array in its current state until the re-shape is
> done, or should I go ahead and add back the two failed drives?
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html



--
Majed B.

