Ralf Müller <ralf@xxxxxxxx> writes:

> On 18.06.2009, at 10:27, Goswin von Brederlow wrote:
>> Ralf Müller <ralf@xxxxxxxx> writes:
>>
>>> After I added a fourth 1.5TB disk to the array yesterday and
>>> reshaped the former 3-disk raid5 to a 4-disk one, I wiped out the
>>> 300GB raid5, set one of the 1.2TB partitions faulty, removed it
>>> from its raid, removed the 300GB partition and resized the former
>>> 1.2TB partition (in place) to 1.5TB. Now I added this partition
>>> back to its raid.
>>>
>>> It was recognized as a former member of this raid - so far so
>>> good - but for whatever reason the raid subsystem decided to start
>>> a complete recovery.
>>>
>>> So here is my question: what went wrong, and what do I have to do
>>> to avoid a full recovery for the next disk?
>>
>> I guess something wrote to your raid5 while the one disk was
>> removed.
>
> That's possible - after the re-add, the bitmap showed 2/275 pages as
> unclean.
>
>> When you added it back the event counter would differ and the disk
>> needs to be resynced completely. A bitmap would help limit this to
>> the parts that have changed. Add an internal bitmap before you
>> remove the next disk.
>
> There is an internal write-intent bitmap on the array:
>
> DatenGrab:/media # cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4] [raid1]
> md3 : active raid5 sdf1[3] sdi1[0] sdj1[4] sdh1[1]
>       3457099008 blocks super 1.2 level 5, 256k chunk, algorithm 2
>       [4/4] [UUUU]
>       bitmap: 0/275 pages [0KB], 2048KB chunk
>
> Do you have an idea why this bitmap has been ignored?

No idea.

Regards,
Goswin
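
For reference, a minimal sketch of the per-disk swap being discussed,
using standard mdadm commands. /dev/md3 matches the mdstat output
above, but /dev/sdX1 is a placeholder device name, not one taken from
the thread, and the exact steps may need adjusting for a given setup:

    # make sure an internal write-intent bitmap exists before pulling the disk
    mdadm --grow /dev/md3 --bitmap=internal

    # fail and remove the old partition from the array
    mdadm /dev/md3 --fail /dev/sdX1 --remove /dev/sdX1

    # ... repartition / resize /dev/sdX1 here ...

    # re-add the partition; with a valid bitmap and otherwise matching
    # metadata, md should only resync the chunks marked dirty in the bitmap
    mdadm /dev/md3 --re-add /dev/sdX1

If the bitmap cannot account for the difference in event counters, or
is not honoured for some reason (as in the case above), md falls back
to a full recovery instead of a partial resync.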