On Wed, 2 Apr 2008, Justin Piszcz wrote:

> On Wed, 2 Apr 2008, Conway S. Smith wrote:
>
> The stripe_cache_size will make a huge difference with RAID5; try up
> to 32768.
>
>> On Wed, 2 Apr 2008 08:16:16 -0400 (EDT) Justin Piszcz
>> <jpiszcz@xxxxxxxxxxxxxxx> wrote:
>>
>>> On Tue, 1 Apr 2008, Beolach wrote:
>>>
>>>> <snip>
>>>>
>>>> Also, would you be willing to share your script for averaging 3
>>>> bonnie++ runs?  I'm too lazy to write my own, so I've just been
>>>> doing single runs.
>>>
>>> I do not have a single script to do it, actually; it works like this:
>>>
>>> # Run bonnie++ 3 times (script).
>>> for i in 1 2 3
>>> do
>>>   /usr/bin/time /usr/sbin/bonnie++ -d /x/test -s 16384 -m p34 \
>>>     -n 16:100000:16:64 > $HOME/test"$i".txt 2>&1
>>> done
>>>
>>> # Then get the results:
>>> $ cat test* | grep ,
>>> p34,16G,80170,99,261521,43,109222,14,82516,99,527121,39,864.3,1,16:100000:16/64,11428,83,+++++,+++,6603,30,7780,56,+++++,+++,8959,45
>>> p34,16G,79428,99,266452,44,111190,14,82087,99,535667,39,884.3,1,16:100000:16/64,3388,26,+++++,+++,7185,34,6364,46,+++++,+++,4040,22
>>> p34,16G,78346,99,255350,42,111591,14,82153,99,527210,38,850.4,1,16:100000:16/64,2916,21,+++++,+++,18495,81,5614,41,+++++,+++,15727,83
>>>
>>> $ cat test* | grep , > results
>>> $ avgbonnie results
>>> p34,16G,79314.7,99,261108,43,110668,14,82252,99,529999,38.6667,866.333,1,16:100000:16/64,5910.67,43.3333,0,0,10761,48.3333,6586,47.6667,0,0,9575.33,50
>>>
>>> Nothing special for the average, just a long awk statement hardcoded
>>> for 3 runs:
>>>
>>> grep ',' "$1" | awk -F',' '{print $1, $2, c += $3/3, d += $4/3,
>>>   e += $5/3, f += $6/3, g += $7/3, h += $8/3, i += $9/3, j += $10/3,
>>>   k += $11/3, l += $12/3, m += $13/3, n += $14/3, $15, p += $16/3,
>>>   q += $17/3, r += $18/3, s += $19/3, t += $20/3, u += $21/3,
>>>   v += $22/3, w += $23/3, x += $24/3, y += $25/3, z += $26/3,
>>>   aa += $27/3}' | tail -n 1 | sed 's/\ /,/g'
>>>
>>> $ grep ',' results | awk -F',' '{print $1, $2, c += $3/3, d += $4/3,
>>>   e += $5/3, f += $6/3, g += $7/3, h += $8/3, i += $9/3, j += $10/3,
>>>   k += $11/3, l += $12/3, m += $13/3, n += $14/3, $15, p += $16/3,
>>>   q += $17/3, r += $18/3, s += $19/3, t += $20/3, u += $21/3,
>>>   v += $22/3, w += $23/3, x += $24/3, y += $25/3, z += $26/3,
>>>   aa += $27/3}' | tail -n 1 | sed 's/\ /,/g'
>>> p34,16G,79314.7,99,261108,43,110668,14,82252,99,529999,38.6667,866.333,1,16:100000:16/64,5910.67,43.3333,0,0,10761,48.3333,6586,47.6667,0,0,9575.33,50
>>>
>>> Hope this helps.
>>
>> Thanks!  Although now I'll have to get around to learning awk so I
>> can understand that. ;-)
>>
>>>> [1] My bonnie++ results:
>>>> <http://www.xmission.com/~beolach/bonnie++_4disk-ls.html>
>>>
>>> Intriguing results you have there; nice sequential read speed.  What
>>> FS are you using?  Any special options?
>>
>> XFS, no special mkfs options, noatime,nodiratime mount options.
>>
>>> What read-ahead are you using?  What is your stripe_cache_size?
>>> These heavily affect performance.
>>
>> I haven't tried tweaking these yet.  Are they likely to change which
>> chunk size performs best?  I was thinking I'd settle on a chunk size
>> first and then look at other performance tweaks, but I'm worried I
>> might later find out a different chunk size would have been better,
>> and the chunk size is much harder to change than the read-ahead.
>>
>> $ blockdev --getra /dev/md1
>> 3072
>> $ cat /sys/block/md1/md/stripe_cache_size
>> 256
>
> Of 256, 512, 1024, 2048, 4096, 16384 and 32768, the sweet spot is
> 16384 for my config.
>
>> Thanks,
>> Conway S. Smith
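Justin's awk statement above divides every field by a hardcoded 3, so it
only averages exactly three runs. Below is a sketch, not from the thread,
that averages any number of bonnie++ CSV lines; the file name
avgbonnie.awk is made up for illustration. Fields 1, 2 and 15 (machine,
size, file spec) are passed through as labels, as in the original, and
non-numeric cells such as "+++++" coerce to 0, matching the one-liner's
behaviour.

#!/usr/bin/awk -f
# avgbonnie.awk (hypothetical name): average any number of bonnie++
# CSV result lines, not just three.
BEGIN { FS = "," }
{
        n++
        if (NF > maxf)
                maxf = NF
        for (i = 1; i <= NF; i++) {
                if (i == 1 || i == 2 || i == 15)
                        label[i] = $i           # pass labels through
                else
                        sum[i] += $i + 0        # "+++++" becomes 0
        }
}
END {
        for (i = 1; i <= maxf; i++) {
                out = (i in label) ? label[i] : sum[i] / n
                printf "%s%s", out, (i < maxf) ? "," : "\n"
        }
}

Usage would mirror the one-liner:

$ grep ',' results | awk -f avgbonnie.awk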
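Rather than testing each stripe_cache_size by hand, the values Justin
lists can be swept in a loop. A minimal sketch, again not from the
thread: it assumes the array is /dev/md1, the scratch directory is
/x/test, and that it runs as root (bonnie++ then needs -u root); the
-m tag scs-$size just labels each CSV line with the cache size in
effect.

#!/bin/sh
# Sweep candidate stripe_cache_size values and capture one bonnie++
# CSV line per setting.  /dev/md1 and /x/test are assumed names.
for size in 256 512 1024 2048 4096 16384 32768
do
        echo "$size" > /sys/block/md1/md/stripe_cache_size
        /usr/sbin/bonnie++ -u root -d /x/test -s 16384 -n 16:100000:16:64 \
                -m "scs-$size" > "$HOME/scs-$size.txt" 2>&1
done
# One result line per setting; compare them side by side.
grep ',' "$HOME"/scs-*.txt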
Here is my config:

# Set read-ahead (blockdev --setra counts 512-byte sectors).
echo "Setting read-ahead to 32 MiB for /dev/md3"
blockdev --setra 65536 /dev/md3

# Set stripe_cache_size for RAID5.
echo "Setting stripe_cache_size to 16384 for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size

# Disable NCQ on all disks (for Raptors it increases speed by 30-40 MiB/s).
# DISKS is assumed to be set earlier, e.g. DISKS="sda sdb sdc sdd".
echo "Disabling NCQ on all disks..."
for i in $DISKS
do
        echo "Disabling NCQ on $i"
        echo 1 > /sys/block/"$i"/device/queue_depth
done
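Two caveats worth adding here, neither from the thread: blockdev --setra
counts 512-byte sectors, so 65536 means 32 MiB of read-ahead, and the md
documentation puts the stripe cache's memory cost at
page_size * nr_disks * stripe_cache_size, so 16384 entries on 4 KiB
pages is roughly 64 MiB per member disk (about 384 MiB for a 6-disk
RAID5). None of these settings survive a reboot, so they are normally
re-applied from an init script. A quick check that they took effect,
assuming /dev/md3 and that sda is one of the members:

# Verify the tunables (md3 and sda are assumed device names).
blockdev --getra /dev/md3                  # expect 65536 (512-byte sectors)
cat /sys/block/md3/md/stripe_cache_size    # expect 16384
cat /sys/block/sda/device/queue_depth      # expect 1, i.e. NCQ disabled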