Issue with fio on DRBD

Hello,

I'm getting unexpected results while benchmarking the bandwidth of our new storage, which runs XFS on a DRBD RAID 1 over Ethernet.

The two peers are connected via DRBD 8.3 on Debian Wheezy x64.
I'm running fio 2.0.8.
The hardware is a hybrid RAID 10 on an Adaptec 71605Q, built from 2x 3TB Seagate Constellation CS SED and 2x 500GB Samsung 840.

My jobfile:
[global]
; bandwidth is all I look at, so skip the latency accounting
disable_lat=1
disable_clat=1
disable_slat=1
clat_percentiles=0
; unbuffered O_DIRECT I/O
direct=1
buffered=0
; create the file on open instead of laying it out beforehand
create_on_open=1

[randrw]
filename=rand.fio
rw=randrw
size=1g
runtime=180
stonewall

[randwrite]
filename=rand.fio
rw=randwrite
size=1g
runtime=180
stonewall

[seqwrite]
filename=seq.fio
rw=write
size=2g
runtime=180
stonewall

[dualseqwrite]
filename=seq.fio
rw=write
size=2g
numjobs=2
runtime=180
stonewall

[seqread]
filename=seq.fio
rw=read
size=2g
runtime=180
stonewall

I'm only measuring bandwidth.
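
In case it matters, I invoke fio directly on the jobfile with nothing overridden on the command line (the filename here is just an example):

fio randtest.fio

Since the jobfile sets no bs, ioengine or iodepth, fio should, as far as I know, be issuing synchronous 4 KiB I/Os at a queue depth of one.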

The results on the device without DRBD look like this:

randrw (write side):   5564 KB/s
randrw (read side):    5575 KB/s
randwrite:             4520 KB/s
seqwrite:             74266 KB/s
dualseqwrite, job 1:  44761 KB/s
dualseqwrite, job 2:  44497 KB/s
seqread:              74619 KB/s

The moment I start DRBD and set the node to primary, the highest result I see is 500-700 KB/s on every test. I tried various performance tweaks and even switched DRBD to asynchronous replication.
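
For reference, the kind of tuning I tried looks roughly like this in drbd.conf. This is only a sketch: the resource name, hostnames, addresses and disks are placeholders, and the numbers are example values rather than my exact settings.

resource r0 {
  protocol A;               # asynchronous replication
  syncer {
    rate 100M;              # resync rate cap (example value)
    al-extents 3389;        # larger activity log
  }
  net {
    max-buffers     8000;
    max-epoch-size  8000;
    sndbuf-size     512k;
  }
  disk {
    no-disk-barrier;        # only safe with a battery/flash-backed write cache
    no-disk-flushes;
  }
  on node-a {               # placeholder hostname
    device    /dev/drbd0;
    disk      /dev/sdX;     # placeholder backing disk
    address   192.0.2.1:7788;
    meta-disk internal;
  }
  on node-b {               # placeholder hostname
    device    /dev/drbd0;
    disk      /dev/sdX;     # placeholder backing disk
    address   192.0.2.2:7788;
    meta-disk internal;
  }
}

None of this got me out of the 500-700 KB/s range.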

Is there anything in the combination of fio and DRBD that could explain these results?


Kind regards,

Tim Rohwedder





