Tomasz Chmielewski wrote:
Ronen Shitrit wrote:
The resync numbers you sent look very promising :)
Do you have any performance numbers you can share for this set of
patches, showing the read/write I/O bandwidth?
I ran some simple tests with hdparm, and got results I don't
understand.
The hdparm results are fine if we access the whole device:
thecus:~# hdparm -Tt /dev/sdd
/dev/sdd:
Timing cached reads: 392 MB in 2.00 seconds = 195.71 MB/sec
Timing buffered disk reads: 146 MB in 3.01 seconds = 48.47 MB/sec
But the "Timing buffered disk reads" figure is about 10 times worse
when we access individual partitions.
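For reference, the partition test is the same invocation run against a
partition node rather than the whole disk (a sketch; /dev/sdd1 stands
in for whichever partition was measured, and I'm not reproducing the
numbers here):
thecus:~# hdparm -Tt /dev/sdd1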
There seems to be another side effect when comparing the DMA engine in
2.6.17-iop1 to 2.6.21-iop1: network performance.
For simple network tests, I use the "netperf" tool to measure network
throughput.
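A typical run is a plain TCP stream test against a netserver instance
on the other machine, along these lines (the address and duration are
examples, not the exact invocation from my tests):
thecus:~# netperf -H 192.168.0.2 -t TCP_STREAM -l 60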
With 2.6.17-iop1 and all DMA offloading options enabled (selectable in
System type ---> IOP3xx Implementation Options --->), I get nearly 25
MB/s throughput.
With 2.6.21-iop1 and all DMA offloading options enabled (moved to Device
Drivers ---> DMA Engine support --->), I get only about 10 MB/s
throughput.
Additionally, on 2.6.21-iop1, I get lots of "dma_cookie < 0" messages
printed by the kernel.
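They are easy to spot in the kernel log; a quick way to count how many
have accumulated (a hypothetical check, not output from my report):
thecus:~# dmesg | grep -c "dma_cookie < 0"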
--
Tomasz Chmielewski
http://wpkg.org