Re: Need to remove failed disk from RAID5 array

Roman Mamedov wrote:
> On Wed, 18 Jul 2012 22:44:06 -0400
> Alex <mysqlstudent@xxxxxxxxx> wrote:
>
>> I'm not sure what stats I could provide to troubleshoot this further.
>> At this rate, the 2.7T array will take a full day to resync. Is that
>> to be expected?
>
> 1) did you try increasing stripe_cache_size?
>
> 2) maybe it's an "Advanced Format" drive, the RAID partition is not properly
> aligned?
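
A quick way to check point 2 (using /dev/sdb purely as an example member disk) is to compare the drive's physical sector size with where the partition starts:

    # 4096 here indicates an Advanced Format drive
    cat /sys/block/sdb/queue/physical_block_size

    # print partition boundaries in 512-byte sectors; a start sector
    # divisible by 8 is aligned to a 4 KiB boundary
    parted /dev/sdb unit s print

    # or let parted check partition 1 directly
    parted /dev/sdb align-check optimal 1

If the physical block size is 4096 and a partition's start sector is not divisible by 8, every write straddles two physical sectors and performance suffers badly.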

That's a good argument for not using "whole disk" array members: a partition can be started at a properly aligned offset and may perform better.

As for the speed, the resync is reconstructing the array data (hope the other drives are okay), so every block written to the replacement requires reading three blocks from the surviving drives plus a parity reconstruction in CPU and memory. You can use "blockdev" to increase readahead and set the devices to use the deadline scheduler; that _may_ improve things somewhat, but you still have to read three blocks to write one, so it's not going to be fast.
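
If it helps, here is a rough sketch of the tuning knobs mentioned above (md0 and sdb are just example names, and the values are starting points, not recommendations):

    # stripe cache entries for the RAID5 array; memory used is roughly
    # this value x 4 KiB x number of member devices (default is 256)
    echo 8192 > /sys/block/md0/md/stripe_cache_size

    # readahead on the array device, in 512-byte sectors
    blockdev --setra 8192 /dev/md0

    # deadline scheduler on each member disk
    echo deadline > /sys/block/sdb/queue/scheduler

    # minimum/maximum resync bandwidth, in KiB/s per device
    echo 50000  > /proc/sys/dev/raid/speed_limit_min
    echo 200000 > /proc/sys/dev/raid/speed_limit_max

The speed_limit_min bump is the one most likely to matter if anything else is touching the array during the rebuild, since md throttles the resync toward that minimum when it sees other I/O.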

--
Bill Davidsen <davidsen@xxxxxxx>
  We are not out of the woods yet, but we know the direction and have
taken the first step. The steps are many, but finite in number, and if
we persevere we will reach our destination.  -me, 2010



