Re: Feature request: Add flag for assuming a new clean drive completely dirty when adding to a degraded raid5 array in order to increase the speed of the array rebuild

>> Nope, I haven't read the code. I only see a low sync speed (fluctuating from 20
>> to 80MB/s) whilst the drives can perform much better doing sequential reading
>> and writing (250MB/s per drive and up to 600MB/s all 4 drives in total).
>> During the sync I hear a high noise caused by heads flying there and back and
>> that smells.
>
>Okay, so read performance from the array is worse than you would expect 
>from a single drive. And the heads should not be "flying there and back" 
>- they should just be streaming data. That's actually worrying - a VERY 
>plausible explanation is that your drives are on the verge of failure!!

Nope, the drives are new and OK ... of course I ran a ton of tests and SMART looks good ...
no reallocated sectors, no pending sectors, and the array now (after the rebuild) works at
the expected speed and without noise ... only the resync itself was a total disaster.

>> The chosen drives have poor seeking performance and small caches and are
>> probably unable to reorder the operations to be more sequential. The whole
>> solution is 'economic' since the organisation owning the solution is poor and
>> cannot afford better hardware.
>
>The drives shouldn't need to reorder the operations - a rebuild is an 
>exercise in pure streaming ... unless there are so many badblocks the 
>whole drive is a mess ...

Yeah, I would expect that as well, but the reality was different.
As stated above, the drives are perfectly healthy.

>> That also means RAID6 is not an option. But we shouldn't search excuses what's
>> wrong on the chosen scenario when the code is potentially suboptimal :] We're
>> trying to make Linux better, right? :]
>> 
>> I'm searching for someone, who knows the code well and can confirm my findings
>> or who could point me at anything I could try in order to increase the rebuild
>> speed. So far I've tried changing the readahead, minimum resync speed, stripe
>> cache size, but it increased the resync speed by few percent only.
>
>Actually, you might find (counter-intuitive though it sounds) REDUCING 
>the max sync speed might be better ... I'd guess from what you say, 
>about 60MB/s.
>The other thing is, could you be confusing MB and Mb? Three 250Mb drives 
>would peak at about 80MB.

Nope, all units were Bytes.

>The thing that worries me is your reference to repeated seeks. That 
>should NOT be happening. Unless of course the system is in heavy use at 
>the same time as the rebuild.

Nope, the MD device was NOT mounted and no process was touching it.
With this cheap HW I suspect a firmware bug in the SATA bridge is somehow triggering the issue,
so I'd rather focus on the second, better HW I mentioned in my previous email to Roger.
There I hear no strange sounds, but the resync speed is still far below my expectations,
and as far as I can remember I have never been really satisfied with the RAID5 sync speed.

The assembled array can do over 700MB/s when I temporarily freeze the sync,
but the sync speed is only 100MB/s ... why is that?
Again, the MD device is completely idle ... not mounted and no process is touching it.

---
/dev/md3:
 Timing cached reads:   22440 MB in  1.99 seconds = 11285.30 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 2144 MB in  3.00 seconds = 713.91 MB/sec
---
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10] 
md3 : active raid5 sdi1[5] sdl1[6] sdk1[4] sdj1[2]
      46877237760 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [==================>..]  resync = 93.6% (14637814004/15625745920) finish=161.8min speed=101758K/sec
      bitmap: 5/59 pages [20KB], 131072KB chunk
---
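
For completeness, these are roughly the knobs I've been poking at (paths assume md3;
the values below are only examples, not necessarily the ones I used):

---
# pause the resync so the assembled array can be benchmarked at full speed
echo frozen > /sys/block/md3/md/sync_action
# ... run hdparm/dd tests here ...
# unfreeze; md restarts the pending resync on its own
echo idle > /sys/block/md3/md/sync_action

# the tuning I tried: readahead, minimum resync speed, stripe cache size
blockdev --setra 65536 /dev/md3
echo 200000 > /proc/sys/dev/raid/speed_limit_min
echo 8192 > /sys/block/md3/md/stripe_cache_size
---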

So, what's wrong with this picture?

Thx,
Jaromir.



