Ronald Moesbergen, on 07/08/2009 12:49 PM wrote:
2009/7/7 Vladislav Bolkhovitin <vst@xxxxxxxx>:
Ronald Moesbergen, on 07/07/2009 10:49 AM wrote:
I think, most likely, there was some confusion between the tested and the
patched versions of the kernel, or you forgot to apply the io_context
patch.
Please recheck.
The tests above were definitely done right. I just rechecked the
patches, and I do see an average increase of about 10 MB/s over an
unpatched kernel. But overall the performance is still pretty bad.
Have you rebuilt and reinstalled SCST after patching the kernel?
Yes I have. And the warning about missing io_context patches wasn't
there during the compilation.
Can you update to the latest trunk/, run one dd with any block size you
like >128K, and then send me the kernel logs from boot onwards together
with the transfer rate dd reported, please?
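For example, something along these lines would do (the device name, block
size, and output file name here are just illustrative placeholders, not
taken from your setup):

# flush the page cache so the read actually hits the disk
sync
echo 3 > /proc/sys/vm/drop_caches

# one large sequential read; dd prints the transfer rate at the end
dd if=/dev/sdc of=/dev/null bs=1M count=1000

# collect the kernel log from boot up to and including this run
dmesg > dd-run-dmesg.txt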
I think I just reproduced the 'wrong' result:
dd if=/dev/sdc of=/dev/null bs=512K count=2000
2000+0 records in
2000+0 records out
1048576000 bytes (1.0 GB) copied, 12.1291 s, 86.5 MB/s
This happens when I do a 'dd' on the device while a filesystem is
mounted on it. The mount causes some of the blocks on the device to be
cached, and therefore the results are wrong. This was not the case in
any of the blockdev-perftest runs I did (the filesystem was never
mounted).
Why do you think the file system (which one, BTW?) has any additional
caching if you did "echo 3 > /proc/sys/vm/drop_caches" before the tests?
All block devices and file systems use the same cache facilities.
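If in doubt, a quick way to separate cold-cache from warm-cache numbers
could be something like this (the device name is a placeholder; run as
root):

sync
echo 3 > /proc/sys/vm/drop_caches

# cold cache: this read should go to the disk
dd if=/dev/sdc of=/dev/null bs=512K count=2000

# warm cache: repeat immediately without dropping caches
dd if=/dev/sdc of=/dev/null bs=512K count=2000

If the first (cold-cache) rate is already high, page cache hits can't be
the explanation.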
I've also noticed long ago that reading data from block devices is
slower than reading files from the file systems mounted on those block
devices. Can anybody explain it?
Looks like this is strangeness #2 that we have uncovered in our tests
(the first one, earlier in this thread, was why context RA doesn't work
as well as it should with cooperative I/O threads).
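A rough way to see the effect is to compare the two directly; something
like the sketch below (mount point, file name and sizes are just
placeholders):

# cold-cache read straight from the block device
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/sdc of=/dev/null bs=512K count=2000

# same amount of data read back from a file on the file system
# mounted on that device
mount /dev/sdc /mnt
dd if=/dev/zero of=/mnt/testfile bs=512K count=2000
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/testfile of=/dev/null bs=512K count=2000
umount /mnt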
Can you rerun the same 11 tests over a file on the file system, please?
Ronald.
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html