Re: raid bug in 2.4.20

Hi,

Rob Hagopian wrote:
 
> Personalities : [linear] [raid0] [raid1] [raid5]
> read_ahead 1024 sectors
> md7 : active raid1 sdb6[2] sda6[0]
>       1052160 blocks [2/1] [U_]
> 
> md0 : active raid1 sdb1[2] sda1[0]
>       128384 blocks [2/1] [U_]
>       [>....................]  recovery =  0.0% (0/128384)
>       finish=3658.9min speed=0K/sec

Have you been able to reproduce this problem with any regularity, or
have you tried to? I have an idea about what *might* be causing it, if
you're willing to try out a patch...
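
In the meantime, it might be worth ruling out simple resync throttling
before assuming the resync thread is wedged. Something like this (sysctl
paths as in the stock 2.4 md driver; the 1000 KB/sec figure is just an
arbitrary floor for testing):

    # current resync throttling, in KB/sec per device
    cat /proc/sys/dev/raid/speed_limit_min
    cat /proc/sys/dev/raid/speed_limit_max

    # raise the minimum so normal I/O can't starve the resync
    echo 1000 > /proc/sys/dev/raid/speed_limit_min

    # watch whether the block counter moves at all
    watch -n 5 cat /proc/mdstat

If the count stays at 0/128384 even with the floor raised, throttling
isn't the issue and the thread really is stuck.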

BTW, the md "BUG" you originally reported is not really a bug -- that
message always appears when you raidhotremove a disk the array still
considers active; you have to mark it failed before md will let go of it.
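
For reference, the sequence that avoids it is to fail the disk first and
only then pull it out. With raidtools that looks roughly like this
(assuming your raidtools build includes raidsetfaulty; the device names
are just taken from your mdstat output, so adjust as needed):

    raidsetfaulty /dev/md0 /dev/sdb1   # mark the mirror half as failed
    raidhotremove /dev/md0 /dev/sdb1   # now md will release the disk
    raidhotadd    /dev/md0 /dev/sdb1   # re-add it to kick off a rebuild

With mdadm the equivalent would be
"mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1".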

--
Paul
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
