Hi,

Rob Hagopian wrote:
> Personalities : [linear] [raid0] [raid1] [raid5]
> read_ahead 1024 sectors
> md7 : active raid1 sdb6[2] sda6[0]
>       1052160 blocks [2/1] [U_]
>
> md0 : active raid1 sdb1[2] sda1[0]
>       128384 blocks [2/1] [U_]
>       [>....................] recovery = 0.0% (0/128384) finish=3658.9min speed=0K/sec

Have you been able to reproduce this problem with any regularity? Or have you tried? I have an idea what *might* be causing this, if you're willing to try out a patch...

BTW, the md "BUG" you originally reported is not really a bug -- that always happens when you try to raidhotremove an active disk.

--
Paul
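P.S. For reference, the usual way to avoid that message is to mark the disk faulty before hot-removing it. A minimal sketch, assuming the raidtools 0.90 userspace (the array and partition names below are illustrative, not taken from your setup):

    # Mark the partition as failed first; the md driver will only
    # release a disk it already considers faulty.
    raidsetfaulty /dev/md0 /dev/sdb1
    # Now the hot-remove completes without the "BUG" message that
    # raidhotremove prints for a disk that is still active.
    raidhotremove /dev/md0 /dev/sdb1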