Re: The chunk size paradox


 



On 30/12/2013 19:48, Phillip Susi wrote:

> I believe that using a single "chunk size" causes a lose-lose tradeoff
> when creating raid 5/6/10 arrays.

I don't think your analysis is correct.

Firstly, you are forgetting that the kernel issues multiple requests to one disk simultaneously, and the disk can serve them out of order via NCQ/TCQ. The kernel does not wait for sector N to be read before issuing the read for sector N+1; it issues many of them together, since it knows how much data it has to read (via readahead, most of the time). The disk reorders read/write requests according to its angular position, so you almost never pay for the angular offset during a sequential read/write, not even when skipping redundant data on one component disk of the RAID.
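
To illustrate from userspace what "many requests in flight at once" looks like (the kernel's block layer does the equivalent internally when it performs readahead), here is a minimal sketch using the Linux AIO interface (libaio). The device path and queue depth are arbitrary examples, not anything the kernel actually uses:

/* Sketch: issue several 4k reads in one submission instead of one
 * at a time.  Build with: gcc aio_reads.c -laio
 * /dev/sda and NREQ are illustrative only. */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdlib.h>

#define NREQ 8          /* requests in flight at once */
#define BLK  4096       /* 4k per read */

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cbs[NREQ], *cbp[NREQ];
    struct io_event events[NREQ];
    char *buf;
    int fd, i;

    fd = open("/dev/sda", O_RDONLY | O_DIRECT);
    if (fd < 0 || io_setup(NREQ, &ctx) < 0)
        return 1;
    if (posix_memalign((void **)&buf, BLK, NREQ * BLK))
        return 1;

    /* Queue NREQ sequential 4k reads in a single io_submit();
     * the disk may complete them in any order (NCQ/TCQ). */
    for (i = 0; i < NREQ; i++) {
        io_prep_pread(&cbs[i], fd, buf + i * BLK, BLK, (long long)i * BLK);
        cbp[i] = &cbs[i];
    }
    io_submit(ctx, NREQ, cbp);
    io_getevents(ctx, NREQ, NREQ, events, NULL);   /* wait for all */

    io_destroy(ctx);
    return 0;
}
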

Secondly, for writes, I suspect you are assuming that a whole stripe has to be read and rewritten in order for one small write to be performed, but it is not so. For a 4k write in raid5, two 4k sectors are read (the old data block and the old parity block), the new parity is computed, then two 4k sectors are written (the new data and the new parity), and this is completely independent of chunk size. It already behaves mostly like your "groups", which are in fact the stripes.
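
To make the read-modify-write concrete: raid5 parity is a plain XOR across the data blocks of a stripe, so updating one block only requires cancelling its old contribution and adding the new one. A minimal sketch of that parity update (function and buffer names are illustrative, not the md driver's actual code):

/* raid5 read-modify-write for one 4k block:
 *   new_parity = old_parity XOR old_data XOR new_data
 * Two reads (old data, old parity), two writes (new data,
 * new parity), regardless of the chunk size. */
#include <stddef.h>

#define BLK 4096

static void rmw_parity(const unsigned char old_data[BLK],
                       const unsigned char new_data[BLK],
                       unsigned char parity[BLK])
{
    size_t i;

    /* Cancel the old data's contribution, add the new data's. */
    for (i = 0; i < BLK; i++)
        parity[i] ^= old_data[i] ^ new_data[i];
}
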







