Re: best case / worst case RAID 5,6 write speeds

On Mon, Dec 14, 2015 at 3:02 PM, Mark Knecht <markknecht@xxxxxxxxx> wrote:
>
>
> On Mon, Dec 14, 2015 at 12:55 PM, Dallas Clement
> <dallas.a.clement@xxxxxxxxx> wrote:
>>
>> On Mon, Dec 14, 2015 at 2:40 PM, Mark Knecht <markknecht@xxxxxxxxx> wrote:
>> >
>> >
>> > On Mon, Dec 14, 2015 at 12:14 PM, Dallas Clement
>> > <dallas.a.clement@xxxxxxxxx> wrote:
>> >>
>> >> <SNIP>
>> >>
>> >> Hi Phil,  I ran blktrace while writing with dd to a RAID 5 device with
>> >> 12 disks.  My chunk size is 128K.  So I set my block size to 128K *
>> >> (12-2) = 1280k.   Here is the dd command I ran.
>> >
>> > Just curious but for my own knowledge if it's RAID5 why is it 12-2?
>> >
>> > - Mark
>>
>> > Just curious but for my own knowledge if it's RAID5 why is it 12-2?
>>
>> Shouldn't be.  It should have been 12-1 or writing 1408k.  Boy do I
>> feel dumb.  Anyhow, when writing this value, no more RMWs.   Yay!
>
> I wasn't going to be so bold as to suggest the RMWs would go away, but I'm
> glad they did.
>
> So, now you can presumably gather new data looking at speed and post that,
> correct?
>
> Cheers,
> Mark
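
(For anyone following along: with a 128 KiB chunk across 12 disks, a
full RAID 5 stripe holds 128K * (12-1) = 1408 KiB of data, since one
chunk per stripe is parity.  The 128K * (12-2) = 1280 KiB figure I used
at first would be right for RAID 6, which carries two parity chunks.)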

Hmm, I think I may have spoken too soon.  I did a speed test using fio
this time, same bs=1408k, and I see lots of RMWs in the trace.  I did
another, larger dd transfer too; that one showed some RMWs, but not
very many - maybe 4 or 5 for a 20 GB transfer.
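
(Roughly what I ran, for reference - the array shows up as 9,10 in the
trace below, i.e. /dev/md10; the exact flags and fio options here are
my reconstruction, not a paste of the real commands:)

  dd if=/dev/zero of=/dev/md10 bs=1408k count=14894 oflag=direct  # ~20 GB

  fio --name=seqwrite --filename=/dev/md10 --rw=write --bs=1408k \
      --direct=1 --size=20g --ioengine=libaio --iodepth=1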

It looks like the LBAs are increasing steadily for the writes to the disks:

  9,10   2     2816     0.737523948 27410  Q  WS 965888 + 256 [dd]
  9,10   2     2817     0.737620583 27410  Q  WS 966144 + 256 [dd]
  9,10   2     2818     0.737630651 27410  Q  WS 966400 + 256 [dd]
  9,10   2     2819     0.737641625 27410  Q  WS 966656 + 256 [dd]
  9,10   2     2820     0.737651603 27410  Q  WS 966912 + 256 [dd]
  9,10   2     2821     0.737662735 27410  Q  WS 967168 + 256 [dd]
  9,10   2     2822     0.737672709 27410  Q  WS 967424 + 256 [dd]
  9,10   2     2823     0.737683881 27410  Q  WS 967680 + 256 [dd]
  9,10   2     2824     0.737693896 27410  Q  WS 967936 + 256 [dd]
  9,10   2     2825     0.737704484 27410  Q  WS 968192 + 256 [dd]
  9,10   2     2826     0.737714348 27410  Q  WS 968448 + 256 [dd]
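
(Each request above is 256 sectors * 512 bytes = 128 KiB, i.e. exactly
one chunk, and the start LBAs step by 256 each time - 965888, 966144,
966400, and so on - so these really are back-to-back sequential chunk
writes.)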

The dd transfers do seem faster when using bs=1408k, but I need to
collect some more data before drawing any conclusions.
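
(One way I'm thinking of counting the RMWs on the next run - just a
sketch, and the member device names are examples: reads queued against
the member disks during a pure write run are what indicate
read-modify-write.)

  blktrace -d /dev/sdb -d /dev/sdc -o - | blkparse -i - | grep -c ' Q R'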


