Re: slow 'check'

Raz Ben-Jehuda(caro) wrote:
I suggest you test all drives concurrently with dd.
Load dd on sda, then sdb, slowly one after the other, and
see whether the throughput degrades. Use iostat.
Furthermore, dd is not the measure for random access.
AFAIK 'check' does not do random access, which was the original question. My figures relate only to that.

For random access, a read should touch only one drive unless there's an error, and a write should touch two: the data chunk and the updated parity. I don't have the tool I want to measure this properly; perhaps later this week I'll write one.
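Something along these lines is the sort of thing I mean, a rough sketch rather than the finished tool: the 4K I/O size and read count are placeholders, and it opens with O_DIRECT so the reads actually hit the disks rather than the page cache.

/* randread.c - rough random-read sketch, not a polished benchmark.
 * Issues O_DIRECT reads at random aligned offsets and reports IOPS.
 * Build: gcc -O2 -o randread randread.c -lrt
 */
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define IO_SIZE   4096    /* one aligned page per read (placeholder) */
#define NUM_READS 10000   /* sample count (placeholder) */

int main(int argc, char **argv)
{
    int i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <device>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* device size in IO_SIZE units, used to pick random aligned offsets */
    off_t blocks = lseek(fd, 0, SEEK_END) / IO_SIZE;
    if (blocks <= 0) {
        fprintf(stderr, "cannot size %s\n", argv[1]);
        return 1;
    }

    void *buf;
    if (posix_memalign(&buf, 4096, IO_SIZE)) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }

    srand(time(NULL));
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (i = 0; i < NUM_READS; i++) {
        /* random IO_SIZE-aligned offset somewhere on the device */
        off_t off = ((off_t)rand() % blocks) * IO_SIZE;
        if (pread(fd, buf, IO_SIZE, off) != IO_SIZE) {
            perror("pread");
            return 1;
        }
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d random %d-byte reads in %.2f s (%.0f IOPS)\n",
           NUM_READS, IO_SIZE, secs, NUM_READS / secs);

    free(buf);
    close(fd);
    return 0;
}

Run it against the array and then against the individual members and compare; watching iostat -x alongside shows which member disks the reads actually land on.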

On 2/10/07, Bill Davidsen <davidsen@xxxxxxx> wrote:

Wait, let's say that we have three drives and a 1M chunk size. So we read
1M here, 1M there, and 1M somewhere else, and get 2M of data and 1M of
parity which we check. With five drives we would read 4M of data and 1M
of parity, but have 4M checked. The upshot is that for each stripe we
read N*chunk bytes and verify (N-1)*chunk. In fact the data is (N-1)/N
of the stripe, and the percentage gets higher (not lower) as you add
drives. I see no reason why more drives would be slower; a higher
percentage of the bytes read is data.

That doesn't mean that you can't run out of bus bandwidth, but the number
of drives is not obviously the issue.
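
To put numbers on the arithmetic above, a trivial sketch (the 1M chunk is just the figure from the example):

/* check_fraction.c - what fraction of the bytes read by 'check' is data.
 * Per stripe, check reads N chunks and verifies the N-1 data chunks
 * against the single parity chunk, so the data fraction is (N-1)/N.
 */
#include <stdio.h>

int main(void)
{
    const int chunk_kb = 1024;  /* 1M chunk, as in the example above */
    int n;

    for (n = 3; n <= 8; n++) {
        int read_kb = n * chunk_kb;        /* KiB read per stripe */
        int data_kb = (n - 1) * chunk_kb;  /* KiB of data verified */
        printf("%d drives: read %5d KiB, data %5d KiB (%.1f%% data)\n",
               n, read_kb, data_kb, 100.0 * data_kb / read_kb);
    }
    return 0;
}

The percentage climbs from 66.7% with three drives to 87.5% with eight, which is the point: the more drives, the larger the share of what's read that is data.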


--
bill davidsen <davidsen@xxxxxxx>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
