Sorry @all, I had a few typos:

Stefan /*St0fF*/ Hübner schrieb:
> [...]
> BUT: if the drive takes, let's say, 2 min for internal error recovery to
> succeed of fail (whichever, doesn't matter), then the SG EH layer of the

-> succeed OR fail

> kernel will drop the disk, not md. This forces md to drop the disk,
> too. The conclusion is: a technology is needed to prevent another
> kernel layer from dropping the disk. This technology exists; it's
> called SCT-ERC (SMART Command Transport - Error Recovery Control). It's
> the same as WD's TLER or Samsung's CCTL. But it is non-volatile. After

-> But it is volatile.

> a power-on reset the timeout values are reset to factory defaults. So
> it needs to be set right before adding a disk to an array.
> (for more info: check www.t13.org, find the ATA8-ACS documents)
>
>> I do think we urgently need the hot reconstruction/recovery feature, so
>> failing drives can be recovered to fresh drives with two sources of
>> data, i.e. both the failing drive and the remaining drives in the array,
>> giving us two chances of recovering every sector.
>
> I do not think this is easily possible. One would have to keep a map
> of the "in sync" sectors of an array member and the "failed" sectors.
> My guess is: this would need a partial redesign (again, a new superblock
> type containing information about "failed segments", probably).
> Please correct me if I'm wrong and that is already included in 1.x (I'm
> mostly working with 0.90 superblocks).
>
>> Cheers,
>>
>> John.
>
> Cheers,
> Stefan.
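For anyone who wants to try the SCT-ERC setting mentioned above: on drives that support it, the timers can be queried and set from userspace with smartctl (needs a smartctl version with scterc support). A minimal sketch only; the device name and the 7-second value below are just examples, values are in tenths of a second, and since the setting is volatile it has to be re-applied after every power cycle, before the array is assembled:

  # show the current read/write error recovery timers (example device)
  smartctl -l scterc /dev/sdX

  # set both timers to 7.0 seconds (units are deciseconds)
  smartctl -l scterc,70,70 /dev/sdX

One way to make this stick is to run the set command from a boot or udev script for every array member before mdadm assembles the array.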