Re: Disappointing performance: 5-disk RAID6, 3.11.6

On Tue, 21 Jan 2014 10:42:19 -0600 Jon Nelson
<jnelson-linux-raid@xxxxxxxxxxx> wrote:

> I have a 5-disk RAID6 using (5) 320GB SATA drives.
> I rarely see even sequential I/O approaching a single drive's performance.
> Example: (Rarely!) I'll see an aggregate 250MB/s read or write, but
> that translates to 50MB/s read or write per-drive. I was hoping for
> more.

A 5-disk RAID6 has 3 data drives (in each stripe), so 250MB/s translates to
250/3, or about 83MB/s per drive (skipping over parity data isn't faster than
reading it unless you have a very large chunk size, which brings other costs).

What exactly were you hoping for?

If you run something like
  for i in a b c d e
  do dd if=/dev/sd${i}3 of=/dev/null bs=1M count=100 &
  done
while the system is otherwise idle, what throughput does each dd report?
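
For comparison, a minimal sketch of the same kind of read against the whole
array (iflag=direct bypasses the page cache so the figure reflects the disks
rather than cached data; run it on an otherwise idle system):

  # sequential read of the md device itself, 1000 x 1MiB
  dd if=/dev/md2 of=/dev/null bs=1M count=1000 iflag=direct

With 3 data drives per stripe, that figure would ideally land somewhere
around three times what a single member reports above.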

NeilBrown


> 
> The partition layout looks like this:
> 
> /dev/sda1 : start=     2048, size=  1024000, Id=83, bootable
> /dev/sda2 : start=  1026048, size=  1024000, Id=82
> /dev/sda3 : start=  2050048, size=623091712, Id=fd
> /dev/sda4 : start=        0, size=        0, Id= 0
> 
> on all 5 disks, and sd{whatever}3 is used to assemble the raid,
> specifically, /dev/md2.
> 
> mdadm -D /dev/md2:
> 
> /dev/md2:
>         Version : 1.2
>   Creation Time : Fri Nov  1 11:13:07 2013
>      Raid Level : raid6
>      Array Size : 934242816 (890.96 GiB 956.66 GB)
>   Used Dev Size : 311414272 (296.99 GiB 318.89 GB)
>    Raid Devices : 5
>   Total Devices : 5
>     Persistence : Superblock is persistent
> 
>   Intent Bitmap : Internal
> 
>     Update Time : Tue Jan 21 10:33:52 2014
>           State : active
>  Active Devices : 5
> Working Devices : 5
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>            Name : turnip:2  (local to host turnip)
>            UUID : bece804d:eaaeb280:38d2d7f3:1e493146
>          Events : 21788
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       51        0      active sync   /dev/sdd3
>        1       8       35        1      active sync   /dev/sdc3
>        2       8        3        2      active sync   /dev/sda3
>        3       8       19        3      active sync   /dev/sdb3
>        4       8       67        4      active sync   /dev/sde3
> 
> The filesystem is ext4, and debugfs says:
> 
> RAID stride:              16
> RAID stripe width:        48
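
Those two values are consistent with the array geometry, assuming the ext4
default 4KiB block size: stride = 64KiB chunk / 4KiB block = 16, and stripe
width = 16 * 3 data drives = 48. A minimal sketch of how they would be set
explicitly at mkfs time (same numbers, hypothetical invocation):

  # 4KiB blocks; stride and stripe-width are in filesystem blocks
  mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/md2

tune2fs can adjust the same hints on an existing filesystem, so a rebuild
isn't required just to change them.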
> 
> 
> The processor is an AMD Phenom 9150e (quad-core, x86_64) and the O/S
> is openSUSE 13.1, kernel 3.11.6. Some of the hardware looks like this:
> 
> 00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] RS780 Host Bridge
> 00:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780/RS880 PCI
> to PCI bridge (int gfx)
> 00:07.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780/RS880 PCI
> to PCI bridge (PCIE port 3)
> 00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI]
> SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode]
> 
> 
> Settings:
> The stripe_cache_size is 4096 (see
> http://blog.jamponi.net/2013/12/sw-raid6-performance-influenced-by.html)
> readahead is 16384
> scheduler is deadline
> queue depth per-drive is 1.
> nr_requests is 256.
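
For reference, a minimal sketch of how those tunables are typically applied
through sysfs and blockdev (member names taken from the mdadm output above;
none of these settings survive a reboot):

  # md/raid456 stripe cache (number of cached stripe entries)
  echo 4096 > /sys/block/md2/md/stripe_cache_size
  # readahead on the array, in 512-byte sectors
  blockdev --setra 16384 /dev/md2
  # per-member settings
  for d in a b c d e
  do
    echo deadline > /sys/block/sd${d}/queue/scheduler
    echo 1        > /sys/block/sd${d}/device/queue_depth
    echo 256      > /sys/block/sd${d}/queue/nr_requests
  done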
> 
> Does this seem out of line? Thoughts?
> 
