Re: Performance Characteristics of All Linux RAIDs (mdadm/bonnie++)


 



Justin Piszcz wrote:
Hardware:

1. Utilized (6) 400 GB SATA hard drives.
2. Everything is on PCIe (965 chipset & a 2-port SATA card).

Used the following 'optimizations' for all tests.

# Set read-ahead.
echo "Setting read-ahead to 64 MiB for /dev/md3"
blockdev --setra 65536 /dev/md3

That's actually 65536 x 512-byte sectors, so 32 MiB, not 64 MiB.
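
For reference, blockdev --setra counts 512-byte sectors, so the value that would actually give 64 MiB is twice the one in the script. A quick sanity check, reusing the device name from above:

# 65536 sectors * 512 bytes = 32 MiB
echo $((65536 * 512 / 1024 / 1024))    # prints 32
# For a true 64 MiB read-ahead, double the sector count:
blockdev --setra 131072 /dev/md3
# Read back the current setting (reported in 512-byte sectors):
blockdev --getra /dev/md3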

# Set stripe-cache_size for RAID5.
echo "Setting stripe_cache_size to 16 MiB for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size
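
Worth noting: stripe_cache_size counts cache entries (one page per member device per entry), not bytes, so the memory pinned is roughly page_size * nr_disks * stripe_cache_size. A rough estimate for this 6-disk setup, assuming 4 KiB pages:

# 16384 entries * 4096-byte pages * 6 member disks = 384 MiB of stripe cache
echo $((16384 * 4096 * 6 / 1024 / 1024))    # prints 384
# Confirm the value took effect:
cat /sys/block/md3/md/stripe_cache_size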

# Disable NCQ on all disks.
echo "Disabling NCQ on all disks..."
for i in $DISKS
do
  echo "Disabling NCQ on $i"
  echo 1 > /sys/block/"$i"/device/queue_depth
done
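
The loop above assumes $DISKS already holds the kernel names of the member disks. A minimal sketch with placeholder device names, plus a read-back to confirm NCQ is off:

# Hypothetical member-disk names; substitute the real ones.
DISKS="sdb sdc sdd sde sdf sdg"
for i in $DISKS
do
  cat /sys/block/"$i"/device/queue_depth   # should print 1 once NCQ is disabled
done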

Software:

Kernel: 2.6.23.1 x86_64
Filesystem: XFS
Mount options: defaults,noatime

Results:

http://home.comcast.net/~jpiszcz/raid/20080528/raid-levels.html
http://home.comcast.net/~jpiszcz/raid/20080528/raid-levels.txt

Note: 'deg' means degraded, and the number after it is the number of failed disks. I did not test degraded RAID 10 because there are many ways a RAID 10 can be degraded; however, all three RAID 10 layouts (f2, n2, o2) were benchmarked.
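
For anyone reproducing the RAID 10 runs: the three layouts are chosen at creation time with mdadm's --layout option. A sketch with placeholder device names and chunk size:

# far-2 layout; use n2 (near) or o2 (offset) for the other two variants
mdadm --create /dev/md3 --level=10 --layout=f2 --chunk=64 \
      --raid-devices=6 /dev/sd[b-g]1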

FYI: each test was run 3 times and the results averaged.


Results are meaningless without a crucial detail: what chunk size was used when the arrays were created? Otherwise an interesting test :)
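
For completeness, the chunk size of an existing array can be read back without recreating it, e.g.:

mdadm --detail /dev/md3 | grep -i 'chunk size'
cat /proc/mdstat    # the per-array line also shows something like '64k chunks'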

Cheers

Peter

