RE: RAID6 : Sequential Write Performance

The drives are connected through a true SAS HBA (a Broadcom HBA 9400-16e), and writing zeros directly to each drive with dd gives me ~ 12 * 200MiB/s for the first 100GiB.
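
For reference, the per-drive runs were along these lines (a sketch; the device name is an example, not the exact command):

    dd if=/dev/zero of=/dev/sdX bs=1M count=102400 oflag=direct status=progress

oflag=direct bypasses the page cache, so the figure reflects the drive itself rather than RAM.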

JDG

From: Feng Zhang [mailto:prod.feng@xxxxxxxxx] 
Sent: Friday, 15 February 2019 19:24
To: Roy Sigurd Karlsbakk <roy@xxxxxxxxxxxxx>
Cc: Wilson Jonathan <i400sjon@xxxxxxxxx>; Jean De Gyns <Jean.DeGyns@xxxxxxxxxx>; Linux Raid <linux-raid@xxxxxxxxxxxxxxx>
Subject: Re: RAID6 : Sequential Write Performance

Could this be due to the way the hard drives are connected? How did you connect the drives when not using the RAID controller? One thing that may be helpful: connect the drives to the controller, set it to JBOD mode, build a soft RAID on top of that, and test.
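
A soft-RAID-on-JBOD test could look something like this (a sketch; device names are examples, geometry taken from your description):

    mdadm --create /dev/md0 --level=6 --raid-devices=12 --chunk=512 \
          --bitmap=none /dev/sd[b-m]

That way the same md code path runs over the controller's links, which should separate the cabling/HBA question from the RAID implementation itself.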

Best,

Feng

On Fri, Feb 15, 2019 at 11:36 AM Roy Sigurd Karlsbakk <roy@xxxxxxxxxxxxx> wrote:
>> Greetings !
>> 
>> I created an MD RAID6 with a 512KiB chunk size out of 12 8TB drives, no internal
>> bitmap and no journal, on a quad Xeon Gold 6154 box running kernel 4.18 (Ubuntu
>> 18.04.1), and set fio to do a 1TiB sequential write to the device with a block
>> size of 5M, 3 processes and a QD of 64.
>> 
>> Since each drive can sustain 215MiB/s at its start, I expected throughput
>> somewhere around the 2GiB/s mark at the beginning of the array.
>> After setting stripe_cache_size to 32768 and group_thread_cnt to 2, I only got
>> an average of 1.4GiB/s out of my array, and the throughput wasn't very stable.
>> 
>> I did the same test against a hardware RAID controller, the Broadcom MegaRAID
>> 9480-8i8e, and it managed a nice flat 1.9GiB/s.
>> 
>> I expected a modern CPU to easily beat a hardware controller, but that wasn't
>> the case.
>> Am I missing something?
> 
> At a WAG (wild-ass guess)... the 4GB RAM cache on the RAID card causing it to
> appear as if the disk access is faster?
> 
> I have to be honest, I've long since given up trying to test the
> performance of RAID formats/layouts/chunks/etc... due to the multiple
> ways the system can "do stuff" that changes the results with even the
> exact same manual-style tests. Then again, my workloads tend to be "good
> enough is good enough". I guess, however, someone needing a high-speed
> file server with bonded 10Gb links to multiple workstations running video
> file editing software would be a whole different ballgame.

Well, something is bound to be wrong here when a RAID card beats a far faster CPU with faster memory at this work. Does anyone know how this can be debugged or fixed? Is there a way to choose which implementation (SSE/AVX) gets used?
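
For what it's worth, raid6_pq benchmarks the available implementations when the module loads and logs which one it picked, so a first check could be:

    dmesg | grep -i raid6

which should show the measured gen() speeds of each candidate (sse2, avx2, ...) and a line like "raid6: using algorithm avx2x4". Whether that choice can be overridden at runtime, I don't know offhand.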

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
Hið góða skaltu í stein höggva, hið illa í snjó rita. ("Carve the good in stone, write the bad in snow.")



