Re: Failed drive in raid6 while doing data-check

On 4 June 2012 01:31, Krzysztof Adamski <k@xxxxxxxxxxx> wrote:
[…]
> The cat /proc/mdstat is:
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md7 : active raid6 sdd2[0] sdab2[11] sdaa2[10] sdz2[9] sdy2[8] sde2[7] sdh2[6] sdf2[5] sdg2[4] sdb2[3](F) sdc2[2] sda2[1]
>      29283121600 blocks super 1.2 level 6, 32k chunk, algorithm 2 [12/11] [UUU_UUUUUUUU]
>      [=============>.......]  check = 65.3% (1913765076/2928312160) finish=44345.9min speed=381K/sec
>      bitmap: 1/22 pages [4KB], 65536KB chunk
>
> I don't really want to wait 30 days for this to finish. What is the correct
> thing to do before I replace the failed drive?

   Is stripe_cache_size reasonably adjusted?
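
   For reference, a minimal sketch of checking and raising it. The device
name md7 and disk count of 12 come from the mdstat above; 8192 is only an
example value (the default is 256), not a recommendation, and writing the
sysfs file requires root:

```shell
MD=md7      # array name from the mdstat output above
SIZE=8192   # example value; default is 256, tune for your workload
SYSFS=/sys/block/$MD/md/stripe_cache_size

# Show the current value and raise it, if the sysfs file is writable here.
if [ -w "$SYSFS" ]; then
    cat "$SYSFS"
    echo "$SIZE" > "$SYSFS"
fi

# Rough memory cost: entries * page size * member disks.
# For 8192 entries, 4 KiB pages, and the 12-disk md7 above:
MIB=$(( SIZE * 4096 * 12 / 1024 / 1024 ))
echo "stripe cache would use about ${MIB} MiB"
```

Note the memory trade-off: a larger stripe cache can speed up RAID-5/6
resync and check, but the cache is pinned kernel memory.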

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

