Re: (was: Re: md: raid5 vs raid10 (f2,n2,o2) benchmarks [w/10 raptors])

On Tue, 1 Apr 2008, Beolach wrote:

On Sat, Mar 29, 2008 at 12:20 PM, Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx> wrote:
<snip>

Thanks for sharing your benchmarking methods.  I've been running
similar benchmarks, trying to pick the best chunksize for my hardware,
and I also found your previous thread "Fastest Chunk Size w/XFS For MD
Software RAID = 1024k" in the archives.  In my benchmarks[1], I see
256k & 128k giving the best results.  So now I'm wondering if that's
just because I have different hard drives (Seagate ST31000340NS 1000GB,
32MB cache), or if the number of drives is also important - I only
have 4 drives right now, but I plan on getting more when I can afford
to.  If I had more drives, would the larger chunksizes (512k or 1024k)
be more likely to perform better than 256k?

Also, would you be willing to share your script for averaging 3
bonnie++ runs?  I'm too lazy to write my own, so I've just been doing
single runs.

I do not actually have a single script to do it; it works like this:

# Run bonnie 3 times (script).
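# -d = test directory, -s 16384 (MB) = 16GB working set, -m p34 = machine
# label in the CSV, -n 16:100000:16:64 = parameters for the file-creation
# tests (see the bonnie++ man page).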
for i in 1 2 3
do
  /usr/bin/time /usr/sbin/bonnie++ -d /x/test -s 16384 -m p34 -n 16:100000:16:64 > $HOME/test"$i".txt 2>&1
done

# then get the results
$ cat test* | grep ,
p34,16G,80170,99,261521,43,109222,14,82516,99,527121,39,864.3,1,16:100000:16/64,11428,83,+++++,+++,6603,30,7780,56,+++++,+++,8959,45
p34,16G,79428,99,266452,44,111190,14,82087,99,535667,39,884.3,1,16:100000:16/64,3388,26,+++++,+++,7185,34,6364,46,+++++,+++,4040,22
p34,16G,78346,99,255350,42,111591,14,82153,99,527210,38,850.4,1,16:100000:16/64,2916,21,+++++,+++,18495,81,5614,41,+++++,+++,15727,83

$ cat test* | grep , > results

$ avgbonnie results
p34,16G,79314.7,99,261108,43,110668,14,82252,99,529999,38.6667,866.333,1,16:100000:16/64,5910.67,43.3333,0,0,10761,48.3333,6586,47.6667,0,0,9575.33,50

Nothing special for the average, just a long awk statement hardcoded for 3
runs:

grep ',' "$1" | awk -F',' '{print $1, $2, c += $3/3, d += $4/3, e += $5/3, f += $6/3, g += $7/3, h += $8/3, i += $9/3, j += $10/3, k += $11/3, l += $12/3, m += $13/3, n += $14/3, $15, p += $16/3, q += $17/3, r += $18/3, s += $19/3, t += $20/3, u += $21/3, v += $22/3, w += $23/3, x += $24/3, y += $25/3, z += $26/3, aa += $27/3}' | tail -n 1 | sed 's/\ /,/g'

$ grep ',' results | awk -F',' '{print $1, $2, c += $3/3, d += $4/3, e += $5/3, f += $6/3, g += $7/3, h += $8/3, i += $9/3, j += $10/3, k += $11/3, l += $12/3, m += $13/3, n += $14/3, $15, p += $16/3, q += $17/3, r += $18/3, s += $19/3, t += $20/3, u += $21/3, v += $22/3, w += $23/3, x += $24/3, y += $25/3, z += $26/3, aa += $27/3}' | tail -n 1 | sed 's/\ /,/g'
p34,16G,79314.7,99,261108,43,110668,14,82252,99,529999,38.6667,866.333,1,16:100000:16/64,5910.67,43.3333,0,0,10761,48.3333,6586,47.6667,0,0,9575.33,50
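
The /3 is hardcoded, so it only handles exactly 3 runs. If you want it
to average any number of runs, an untested sketch along these lines
divides by NR in an END block instead (assuming the same 27-column
bonnie++ CSV, where column 15 is the -n spec rather than a number):

$ grep ',' results | awk -F',' '
    { m = $1; sz = $2; spec = $15             # non-numeric columns, kept from the last run
      for (f = 3; f <= NF; f++) sum[f] += $f  # "+++++" columns coerce to 0, as before
    }
    END {
      line = m "," sz
      for (f = 3; f <= NF; f++)
        line = line "," (f == 15 ? spec : sum[f] / NR)
      print line
    }'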

Hope this helps..


[1] My bonnie++ results:
<http://www.xmission.com/~beolach/bonnie++_4disk-ls.html>

Intriguing results you have there; nice sequential read speed.
What FS are you using?
Any special options?
What read-ahead are you using?
What is your stripe_cache_size?
These heavily affect performance.
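
Assuming your md device is /dev/md0 (adjust as needed), these are the
knobs I mean:

# Read-ahead, in 512-byte sectors (65536 sectors = 32MiB):
blockdev --getra /dev/md0
blockdev --setra 65536 /dev/md0

# Stripe cache (raid5/6 arrays only), in pages per member device:
cat /sys/block/md0/md/stripe_cache_size
echo 16384 > /sys/block/md0/md/stripe_cache_size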

Justin.

