Re: RAID-6

On Wed, Nov 13, 2002 at 02:33:46PM +1100, Neil Brown wrote:
...
> > The benchmark goes:
> > 
> > | some tests on raid5 with 4k and 128k chunk size. The results are as follows:
> > | Access Spec     4K (MB/s)    4K degraded (MB/s)   128K (MB/s)   128K degraded (MB/s)
> > | 2K Seq Read     23.015089    33.293993            25.415035     32.669278
> > | 2K Seq Write    27.363041    30.555328            14.185889     16.087862
> > | 64K Seq Read    22.952559    44.414774            26.02711      44.036993
> > | 64K Seq Write   25.171833    32.67759             13.97861      15.618126
> > 
> > So sequential 2k-block writes drop from 27 MB/sec on a 4k-chunk array to
> > 14 MB/sec on a 128k-chunk array (non-degraded).
> 
> When doing sequential writes, a small chunk size means you are more
> likely to fill up a whole stripe before data is flushed to disk, so it
> is very possible that you won't need to pre-read parity at all.  With a
> larger chunksize, it is more likely that you will have to write, and
> possibly read, the parity block several times.

Except if one worked on 4k sub-chunks, right?  :)
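
To put rough numbers on the pre-read effect (a toy model only - the 64k
flush batch and the way I count I/Os are mine, not what the md code
actually does), counting parity pre-reads and parity writes for a
sequential write, and letting whole-stripe batches skip the pre-read:

    # Toy model: parity I/O for a sequential write arriving in fixed flush batches.
    def parity_ios(total, chunk, ndisks, flush=64 * 1024):
        stripe = chunk * (ndisks - 1)            # data bytes per stripe
        pre_reads = parity_writes = written = 0
        while written < total:
            batch = min(flush, total - written)
            if written % stripe == 0 and batch >= stripe:
                full = batch // stripe           # whole stripes: new parity, no pre-read
                parity_writes += full
                written += full * stripe
            else:                                # partial stripe: read-modify-write parity
                pre_reads += 1
                parity_writes += 1
                written += min(batch, stripe - written % stripe)
        return pre_reads, parity_writes

    for chunk in (4 * 1024, 128 * 1024):
        pre, wr = parity_ios(15 * 1024 * 1024, chunk, ndisks=4)
        print(f"{chunk // 1024:>3}k chunks: {pre} parity pre-reads, {wr} parity writes")

With 4k chunks the 64k batches always cover whole stripes, so parity is
written once per stripe and never pre-read; with 128k chunks every batch
is a partial stripe, so the parity block is pre-read and rewritten about
six times per stripe.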

> 
> So if you are doing single threaded sequential accesses, a smaller
> chunk size is definitely better.

Definitely not so for reads - seeking past the parity blocks ruins
sequential read performance when we do many such seeks (e.g. when we have
small chunks) - as witnessed by the benchmark data above.
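
The arithmetic behind that (illustrative only - it ignores the exact
parity rotation of the layout): each disk holds one parity chunk for
every N chunks, so on a long sequential read a disk can stream roughly
chunk * (N-1) bytes before it has to seek past (or pointlessly read) a
parity chunk:

    # Rough average of useful data a single member disk streams between
    # parity-chunk skips on a long sequential read of a 4-disk RAID-5.
    def bytes_between_skips(chunk, ndisks=4):
        return chunk * (ndisks - 1)      # one parity chunk per ndisks chunks per disk

    for chunk in (4 * 1024, 128 * 1024):
        print(f"{chunk // 1024:>3}k chunks: ~{bytes_between_skips(chunk) // 1024}k per disk between skips")

So 4k chunks interrupt each disk roughly every 12k, while 128k chunks let
it run for ~384k at a time - consistent with the read numbers above.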

> If you are doing lots of parallel accesses (typical multi-user work
> load), small chunk sizes tends to mean that every access goes to all
> drives so there is lots of contention.  In theory a larger chunk size
> means that more accesses will be entirely satisfied from just one disk,
> so there is more opportunity for concurrency between the different
> users.
> 
> As always, the best way to choose a chunk size is develop a realistic
> work load and test it against several different chunk sizes.   There
> is no rule like "bigger is better" or "smaller is better".
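
To put the concurrency point in numbers (again just illustrative
arithmetic, ignoring the parity rotation and exact alignment):

    import math

    # Worst-case number of member disks a single read request touches,
    # capped at the array size.  Illustrative only.
    def disks_touched(req_bytes, chunk, ndisks=4):
        chunks_spanned = math.ceil(req_bytes / chunk) + 1   # +1 for misalignment
        return min(ndisks, chunks_spanned)

    for chunk in (4 * 1024, 64 * 1024, 256 * 1024):
        print(f"{chunk // 1024:>3}k chunks: a 64k read touches up to "
              f"{disks_touched(64 * 1024, chunk)} of 4 disks")

With 4k chunks a 64k request drags in every member, so two concurrent
readers always collide; with 256k chunks it usually stays on one or two
disks, leaving the others free for someone else.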

For a single reader/writer, it was pretty obvious from the above that
"big is good" for reads (because of fewer seeks to skip parity blocks),
and "small is good" for writes.

So making a big-chunk array and having it work on 4k sub-chunks for
writes was an idea I had which I felt would give the best scenario in
both cases.
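
Roughly what I have in mind, as hypothetical indexing arithmetic (made-up
names, not anything in the md code): keep a large on-disk chunk, but
compute and update parity in independent 4k-wide groups within each
stripe, so parity for a group can be written as soon as the matching 4k
slice of every data chunk is dirty in cache:

    SUB = 4 * 1024   # hypothetical sub-chunk (parity group) size

    def locate(offset, chunk=128 * 1024, ndisks=4):
        """Map an array byte offset to (stripe, data_slot, parity_group).
        Purely illustrative arithmetic for the sub-chunk idea."""
        data_disks = ndisks - 1
        stripe_bytes = chunk * data_disks
        stripe, within = divmod(offset, stripe_bytes)
        slot, in_chunk = divmod(within, chunk)
        return stripe, slot, in_chunk // SUB

Whether that actually avoids the pre-reads for a streaming write depends
on how much of the stripe is dirty in cache at once, of course.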

Am I smoking crack, or?  ;)

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:
