RE: recovering from a controller failure

> > Once the array is assembled, the repair function will re-establish the
> > redundancy within the array.  Any stripes whose data does not match the
> > calculated value required to produce the upper layer information are
> > re-written.
> That's it - as you can see, there are 15 drives in the raid6 array.
> Examining the disks shows sda through sdh active with an event count
> of 0.159, while sdi through sdp have an event count of 0.168 and mark
> sd[a-i] as faulty.  So I'm guessing there is no way to know which part
> of the array is "right", and I assume the two halves are desynced.

	I deleted the original e-mails while cleaning out my inbox a few
hours ago, so I can't look at your original response, but I've never seen
fractional event counts.  Some of mine are in the millions.
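
	For what it's worth, a quick way to compare the superblock event
counts across all the members is something like the following (device
names taken from your description, so adjust as needed):

	mdadm --examine /dev/sd[a-p] | grep -E '^/dev/|Events'

That prints each member's device name followed by its "Events :" line,
so any split in the counts is easy to spot.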

	In any case, if the corruption is bad enough, you may indeed lose
some data.  Remember, however, that unless this was a brand new array, or
the data on the array was undergoing a truly phenomenal amount of thrashing,
most of the data on the drives is probably consistent, or at least
consistent enough to allow recovery.  Some of it, however, possibly even a
large amount, may be toast.  That's one reason why you have backups.
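
	For reference, the repair function mentioned above is driven
through sysfs on reasonably recent kernels.  A rough sketch only, with
the md device name purely illustrative, so double-check against your
own setup:

	# force assembly despite the stale event counts on half the
	# members; data on out-of-sync stripes may be inconsistent
	mdadm --assemble --force /dev/md0 /dev/sd[a-p]

	# kick off a repair pass; stripes whose parity does not match
	# the data are rewritten
	echo repair > /sys/block/md0/md/sync_action

	# watch progress and the running count of mismatches found
	cat /proc/mdstat
	cat /sys/block/md0/md/mismatch_cnt

Run a filesystem check afterward before trusting anything important.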
