Re: best base / worst case RAID 5,6 write speeds

On Mon, Dec 14, 2015 at 4:17 PM, Mark Knecht <markknecht@xxxxxxxxx> wrote:
>
>
> On Mon, Dec 14, 2015 at 2:05 PM, Dallas Clement <dallas.a.clement@xxxxxxxxx>
> wrote:
>>
>> <SNIP>
>>
>> The speeds I am seeing with dd are definitely faster.  I was getting
>> about 333 MB/s when writing bs=2048k which was not chunk aligned.
>> When writing bs=1408k I am getting at least 750 MB/s.  Reducing the
>> RMWs certainly did help.  But this write speed is still far short of
>> the (12 - 1) * 150 MB/s = 1650 MB/s I am expecting for minimal to no
>> RMWs.  I probably am not able to saturate the RAID device with dd
>> though.
>
> But then you get back to all the questions about where you are on the drives
> physically (inside vs outside) and all the potential bottlenecks in the
> hardware. It
> might not be 'far short' if you're on the inside of the drive.
>
> I have no idea about what vintage Cougar Point machine you have but there
> are some reports about bugs that caused issues with a couple of the
> higher hard drive interface ports on some earlier machines. Your nature
> seems to be to generally build the largest configurations you can but Phil
> suggested earlier and it might be appropriate here to disconnect a bunch of
> drives and then do 1 drive, 2 drives, 3 drives and measure speeds. I seem
> to remember you saying something about it working well until you added the
> last drive so if you go this way I'd suggest physically disconnecting drives
> you are not testing, booting up, testing, powering down, adding another
> drive, etc.

Hi Mark

> But then you get back to all the questions about where you are on the drives
> physically (inside vs outside) and all the potential bottlenecks in the
> hardware. It
> might not be 'far short' if you're on the inside of the drive.

Perhaps.  But I was getting about 95 MB/s on the inner tracks when I
measured earlier.  Even at that rate, the RAID 5 write speed should be
around 11 * 95 = 1045 MB/s.  Also, when I was running fio on the
individual disks concurrently, adding them in one at a time, iostat
was showing wMB/s of around 160-170 MB/s.
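For reference, here's the arithmetic I'm working from, as a quick sanity check (the 128 KiB chunk size is my assumption; `mdadm --detail /dev/md0` reports the real value):

```shell
# Sanity-check arithmetic for a 12-drive RAID 5 (128 KiB chunk assumed;
# check with `mdadm --detail /dev/md0`).
DISKS=12
CHUNK_KB=128
INNER_MBS=95                              # measured per-disk speed, inner tracks
DATA_DISKS=$((DISKS - 1))                 # RAID 5 spends one disk's worth on parity
STRIPE_KB=$((DATA_DISKS * CHUNK_KB))      # full-stripe write size -> dd's bs=
EXPECTED_MBS=$((DATA_DISKS * INNER_MBS))  # ideal write speed at that per-disk rate
echo "bs=${STRIPE_KB}k, ideal ${EXPECTED_MBS} MB/s"
# A stripe-aligned test write would then be something like:
#   dd if=/dev/zero of=/dev/md0 bs=${STRIPE_KB}k count=4096 oflag=direct
```

That's where the bs=1408k figure and the 1045 MB/s expectation both come from.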

> I have no idea about what vintage Cougar Point machine you have but there
> are some reports about bugs that caused issues with a couple of the
> higher hard drive interface ports on some earlier machines.

Hmm, I will need to look into that some more.

> I'd suggest physically disconnecting drives you are not testing, booting up, testing, powering down, adding another drive, etc.

Yes, I haven't tried that yet with RAID 5 or 6.  I'll give it a shot,
maybe starting with 4 disks, adding one at a time, and measuring the
write speed.
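For each step, something like this gives me a rough target to compare against (assuming ~150 MB/s per disk on the outer tracks and a 128 KiB chunk; bs is the full-stripe size to feed dd at each drive count so writes stay RMW-free):

```shell
# Per-step targets for the add-one-drive-at-a-time RAID 5 test, assuming
# ~150 MB/s per disk (outer tracks) and a 128 KiB chunk.
for n in 4 5 6 7 8 9 10 11 12; do
    echo "$n drives: bs=$(( (n - 1) * 128 ))k, ideal $(( (n - 1) * 150 )) MB/s"
done
```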

On another point, this blktrace program sure is neat!  A wealth of info here.
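In case it's useful to anyone else, this is roughly the workflow I've been using (the device name /dev/md0 and the 30 s window are just my choices; blktrace needs root and debugfs mounted, so the sketch below only prints the commands rather than running them):

```shell
# Rough blktrace workflow (device name /dev/md0 and 30 s window are my
# choices). Held in a variable and printed rather than executed, since the
# trace run itself needs root and the live array:
cmds='blktrace -d /dev/md0 -w 30 -o md0trace   # capture 30 s of block-layer events
blkparse -i md0trace -d md0trace.bin           # decode to text, keep combined binary
btt -i md0trace.bin                            # per-phase latency/seek summary'
printf '%s\n' "$cmds"
```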
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


